Option c) Run time error - A READ MORE
Option D: String class READ MORE
Hey, @Ritu, I am getting error in your ...READ MORE
Hi@akhtar, When we try to retrieve the data ...READ MORE
After executing your code, there is an ...READ MORE
error: expected class or object definition sc.parallelize(Array(1L,("SFO")),(2L,("ORD")),(3L,("DFW")))) ^ one error ...READ MORE
What is the output of the following ...READ MORE
Hi, @Ritu, List(5,100,10) is printed. The take method returns the first n elements in ...READ MORE
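The preview above refers to Spark's `take(n)`, which returns the first n elements of an RDD/list as a local collection. A minimal plain-Python sketch of the same semantics (no Spark cluster required; the helper name `take` is my own, not a Spark API):

```python
from itertools import islice

def take(iterable, n):
    """Mimic Spark's RDD.take(n): return the first n elements as a list."""
    return list(islice(iterable, n))

data = [5, 100, 10, 200, 75]
print(take(data, 3))  # -> [5, 100, 10], matching the List(5,100,10) above
```

Like Spark's `take`, this returns fewer than n elements when the input is shorter than n.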
option d, Runtime error READ MORE
Spark does not have any concept of ...READ MORE
Hi@khyati, You are getting this type of output ...READ MORE
Hi@dani, As you said you are a beginner ...READ MORE
Hi@akhtar, To convert pyspark dataframe into pandas dataframe, ...READ MORE
Hi@Ganendra, I am not sure what's the issue, ...READ MORE
Hi@Manas, You can read your dataset from CSV ...READ MORE
Hi@Shllpa, In general, we get the 401 status code ...READ MORE
This type of error tends to occur ...READ MORE
Hi@akhtar, I think your HDFS cluster is not ...READ MORE
var d=rdd2col.rdd.map(x=>x.split(",")) or val names=rd ...READ MORE
Hi@Srinath, It seems you didn't set Hadoop for ...READ MORE
Hi@Ganendra, As you said you launched a multinode cluster, ...READ MORE
Hi@Neha, You can find all the job status ...READ MORE
package com.dataguise.test; import java.io.IOException; import java.util.concurrent.CountDownLatch; import java.util.concurrent.TimeUnit; import org.apache.spark.SparkContext; import org.apache.spark.SparkJobInfo; import ...READ MORE
Hi@Deepak, In your test class you passed empid ...READ MORE
Hi@akhtar, This error occurs because your python version ...READ MORE
The missing driver is the JDBC one ...READ MORE
I have used a header-less csv file ...READ MORE
Hi, You can follow the below-given steps to ...READ MORE
When using the Java substring() method, a ...READ MORE
Hi@Rishi, Yes, it is possible. If executor no. ...READ MORE
Hi@Rishi, Yes, number of spark tasks can be ...READ MORE
Hi, @Amey, You can go through this regarding ...READ MORE
Hi@akhtar, I think you got this error due to version mismatch ...READ MORE
Hello, Your problem is here: val df_merge_final = df_merge .withColumn("version_key", ...READ MORE
Both 'filter' and 'where' in Spark SQL ...READ MORE
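As the answer above suggests, `where()` in Spark SQL's DataFrame API is an alias for `filter()` — both produce the same result. A hedged plain-Python sketch of that alias pattern (the `MiniFrame` class is hypothetical, for illustration only):

```python
class MiniFrame:
    """Toy stand-in for a Spark DataFrame, holding rows in a plain list."""
    def __init__(self, rows):
        self.rows = rows

    def filter(self, predicate):
        """Keep only the rows for which predicate(row) is True."""
        return MiniFrame([r for r in self.rows if predicate(r)])

    where = filter  # alias: both names invoke the exact same method

df = MiniFrame([1, 5, 10, 50])
print(df.filter(lambda x: x > 5).rows)  # -> [10, 50]
print(df.where(lambda x: x > 5).rows)   # -> [10, 50]
```

Binding `where = filter` inside the class body makes the two names refer to the same function object, which mirrors how the two Spark methods behave identically.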
df.orderBy($"col".desc) - this works as well READ MORE
Hi@akhtar, In /etc/spark/conf/spark-defaults.conf, append the path of your custom ...READ MORE
Hi, Use this below given code, it will ...READ MORE
Hi@abdul, Hadoop 3.0.1 has lots of new features. ...READ MORE
Hi@akhtar, In your error, it shows that you ...READ MORE
Hi@Amey, It depends on your use case. Both ...READ MORE
Hi@Amey, You can enable WebHDFS to do this ...READ MORE
Hi@akhtar, Currently, you are running with the default ...READ MORE
Hi@akhtar, Both map() and mapPartitions() are the ...READ MORE
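The answer above compares Spark's `map()` and `mapPartitions()` transformations: `map()` applies a function to every element, while `mapPartitions()` applies a function once per partition, receiving an iterator over that partition. A plain-Python sketch over a list of partitions (the helper names `map_rdd` and `map_partitions_rdd` are my own, not Spark APIs):

```python
def map_rdd(partitions, f):
    """Like RDD.map: apply f to each element, preserving partitioning."""
    return [[f(x) for x in part] for part in partitions]

def map_partitions_rdd(partitions, f):
    """Like RDD.mapPartitions: call f once per partition (iterator -> iterable)."""
    return [list(f(iter(part))) for part in partitions]

parts = [[1, 2], [3, 4, 5]]
print(map_rdd(parts, lambda x: x * 10))            # -> [[10, 20], [30, 40, 50]]
# mapPartitions suits per-partition work, e.g. one sum per partition:
print(map_partitions_rdd(parts, lambda it: [sum(it)]))  # -> [[3], [12]]
```

In real Spark, `mapPartitions()` is typically preferred when there is expensive per-partition setup (such as opening a database connection), since that setup runs once per partition rather than once per element.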
from pyspark.sql.types import FloatType fname = [1.0,2.4,3.6,4.2,45.4] df=spark.createDataFrame(fname, ...READ MORE
Hi@akhtar, You may resolve this exception, by increasing the ...READ MORE
Hi@akhtar, Regarding this error, I think the ...READ MORE
Hey, @KK, You may be able to fix this issue ...READ MORE
Hi@akhtar, I also got this error. I am able to ...READ MORE
Hi@akhtar There are lots of online courses available ...READ MORE