rmf is a hadoop file system command and you ...READ MORE
You can get the column names by ...READ MORE
You can try something like this: ...READ MORE
You have imported org.apache.hadoop.mapred.FileOutputFormat, you need to import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat So, make ...READ MORE
You can define a Combiner function, which does the ...READ MORE
The easiest way to get started is ...READ MORE
Have you set a password for MySQL? ...READ MORE
First, enter the Grunt shell: $ ./pig -x ...READ MORE
Could you please give me a brief ...READ MORE
In Hive we can create a sequence ...READ MORE
Seems like the host IP is not ...READ MORE
The “conf/storm.yaml” file contains configurations of Storm. And ...READ MORE
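The answer above points at conf/storm.yaml as the place where Storm is configured. As a rough, hedged sketch (the hostnames, directory, and ports below are illustrative placeholders, not values from the original answer), a minimal storm.yaml can look like:

```yaml
# conf/storm.yaml — minimal illustrative example; all values are placeholders
storm.zookeeper.servers:
  - "zk1.example.com"
  - "zk2.example.com"
nimbus.seeds: ["nimbus.example.com"]      # nimbus.seeds applies to Storm 1.0+; older releases used nimbus.host
storm.local.dir: "/var/storm"             # local scratch directory for Nimbus/Supervisor state
supervisor.slots.ports:                   # one worker slot per listed port
  - 6700
  - 6701
```

Exact keys and defaults vary by Storm release, so the shipped defaults.yaml for your version is the authoritative reference.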
As mentioned in the error, there's another ...READ MORE
The reason you are getting hadoop as ...READ MORE
Well, what you can do is use ...READ MORE
It is not a problem with permissions ...READ MORE
You can use this: df.write .option("header", "true") ...READ MORE
select * from tablename where DOJ> '2018-01-01' ...READ MORE
You can use a combination of cat and put command. Something ...READ MORE
Try using put command with stdin as ...READ MORE
To remove this warning, you can use: hdfs ...READ MORE
You might not have set the connector. ...READ MORE
Try this: { Configuration config ...READ MORE
Seems like you have not specified a few ...READ MORE
MPI is a communication protocol for programming ...READ MORE
The Yahoo distribution is a version of ...READ MORE
The Mapper class belongs to package org.apache.hadoop.mapreduce ...READ MORE
By convention, the pig script that you ...READ MORE
On which version of hadoop do you ...READ MORE
The main difference between these two is ...READ MORE
Seems like rack awareness is not configured. Try ...READ MORE
Check your /etc/hosts file, the format should be like ...READ MORE
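Since the answer above refers to the expected /etc/hosts layout, here is what a typical entry looks like on a Hadoop node (the IPs and hostnames are illustrative placeholders):

```
127.0.0.1     localhost
192.168.1.10  namenode.cluster.local   namenode
192.168.1.11  datanode1.cluster.local  datanode1
```

Each line is an IP address, the canonical hostname, and optional aliases, separated by whitespace; Hadoop daemons resolve each other through these entries, so a hostname mapped to the wrong IP (or left pointing at 127.0.0.1) is a common cause of connection errors.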
As the error suggests and you have ...READ MORE
There surely is a command to check ...READ MORE
The COUNT function returns the number of ...READ MORE
DataNodes are the commodity hardware only as ...READ MORE
It stores metadata for Hive tables (like their schema ...READ MORE
The Secondary NameNode is mainly used as a ...READ MORE
spark-csv is part of core Spark functionality ...READ MORE
I think you are using new version ...READ MORE
You can browse the hdfs and see the ...READ MORE
Seems like a Hive version problem. The insert operation is ...READ MORE
Make sure you are running from the ...READ MORE
Hey. It's definitely not a stupid question. ...READ MORE
No, the files after the reduce phase are ...READ MORE
Here's where you can find the file: /etc/hadoop/[service ...READ MORE
Hey George! This error comes whenever you use ...READ MORE
You need to create a new user ...READ MORE
hadoop fs -cat /example2/doc1 | wc -l READ MORE
You can create a file directly in ...READ MORE