
[Solved] AnalysisException: u"cannot resolve 'name' given input columns: [ list] in sqlContext in Spark

Hello guys, how are you all? Hope you are all fine. Today I got the following error in Python: AnalysisException: u"cannot resolve 'name' given input columns: [ list] in sqlContext in Spark. Here I explain all the possible solutions.

Without wasting your time, let's get started solving this error.

How Does the AnalysisException: u"cannot resolve 'name' given input columns: [ list] in sqlContext in Spark Error Occur?

This error occurs when a column name referenced in select() or in a SQL query does not exactly match any column of the DataFrame. The most common cause is that the header contains leading or trailing whitespace, so the actual column is named " name" rather than "name".

How To Solve the AnalysisException: u"cannot resolve 'name' given input columns: [ list] in sqlContext in Spark Error?


Solution 1

I found the issue: some of the column names contain white spaces before the name itself. So

# Spark 1.x (sqlContext/RDD) style; requires:
# from pyspark.mllib.regression import LabeledPoint
# (on Spark 2+, DataFrames have no .map — insert .rdd before .map)
data = data.select(" timedelta", " shares").map(lambda r: LabeledPoint(r[1], [r[0]])).toDF()

worked. I could catch the white spaces using

assert " " not in ''.join(df.columns)  

Now I am thinking of a way to remove the white spaces. Any idea is much appreciated!
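One way to remove the white spaces (a sketch, not part of the original answer): strip each name with a plain list comprehension and hand the cleaned names to toDF(). The cleaning itself is ordinary Python, so it can be checked without a Spark session; the commented line at the end assumes a DataFrame `df` like the one above.

```python
# Sketch: strip leading/trailing whitespace from Spark column names.
# The cleaning logic is plain Python; only the final (commented) line
# needs an actual DataFrame.

def clean_columns(columns):
    """Return the column names with surrounding whitespace removed."""
    return [c.strip() for c in columns]

raw = [" timedelta", " shares", "url"]
print(clean_columns(raw))  # ['timedelta', 'shares', 'url']

# Applied to a DataFrame (assumes `df` exists, as in Solution 1):
# df = df.toDF(*clean_columns(df.columns))
# assert " " not in "".join(df.columns)
```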

Solution 2


The header contains spaces or tabs, so remove them and try again.

1) My example script

from pyspark.sql import SparkSession

spark = SparkSession \
    .builder \
    .appName("Python Spark SQL basic example") \
    .config("spark.some.config.option", "some-value") \
    .getOrCreate()

df = spark.read.csv(r'test.csv', header=True, sep='^')
print("#################################################################")
df.printSchema()  # printSchema() prints directly and returns None
df.createOrReplaceTempView("test")
re = spark.sql("select max_seq from test")
re.show()  # show() also prints directly and returns None
print("################################################################")

2) Input file: here 'max_seq ' contains a trailing space, so we get the exception below

Trx_ID^max_seq ^Trx_Type^Trx_Record_Type^Trx_Date

Traceback (most recent call last):
  File "D:/spark-2.1.0-bin-hadoop2.7/bin/test.py", line 14, in <module>
    re=spark.sql("select max_seq from test")
  File "D:\spark-2.1.0-bin-hadoop2.7\python\lib\pyspark.zip\pyspark\sql\session.py", line 541, in sql
  File "D:\spark-2.1.0-bin-hadoop2.7\python\lib\py4j-0.10.4-src.zip\py4j\java_gateway.py", line 1133, in __call__
  File "D:\spark-2.1.0-bin-hadoop2.7\python\lib\pyspark.zip\pyspark\sql\utils.py", line 69, in deco
pyspark.sql.utils.AnalysisException: u"cannot resolve '`max_seq`' given input columns: [Venue_City_Name, Trx_Type, Trx_Booking_Status_Committed, Payment_Reference1, Trx_Date, max_seq , Event_ItemVariable_Name, Amount_CurrentPrice, cinema_screen_count, Payment_IsMyPayment, r

3) Remove the space after the 'max_seq' column and it will work fine

Trx_ID^max_seq^Trx_Type^Trx_Record_Type^Trx_Date


17/03/20 12:16:25 INFO DAGScheduler: Job 3 finished: showString at <unknown>:0, took 0.047602 s
17/03/20 12:16:25 INFO CodeGenerator: Code generated in 8.494073 ms
+-------+
|max_seq|
+-------+
|     10|
|     23|
|     22|
|     22|
+-------+
only showing top 20 rows

################################################################
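If editing the input file by hand is not convenient, the header row can also be trimmed programmatically before Spark reads the data. A minimal sketch in plain Python (the '^' separator matches the file above; the helper name and sample data are illustrative):

```python
# Sketch: trim whitespace around each field of the header row of a
# '^'-delimited file, leaving the data rows untouched.

def strip_header(text, sep="^"):
    """Strip whitespace around each header field; data rows pass through."""
    lines = text.splitlines()
    if lines:
        lines[0] = sep.join(f.strip() for f in lines[0].split(sep))
    return "\n".join(lines)

raw = "Trx_ID^max_seq ^Trx_Type\n1^10^A\n2^23^B"
print(strip_header(raw).splitlines()[0])  # Trx_ID^max_seq^Trx_Type
```

The cleaned text can then be written back to disk (or to a new file) before calling spark.read.csv.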

Summary

That's all about this issue. I hope one of the solutions helped you. Comment below with your thoughts and questions, and let us know which solution worked for you. Thank you.
