Hey @supriya. Seems like you have not set ...READ MORE
Hello. "The system never lies :-P". The service ...READ MORE
If your block size is 64 MB, ...READ MORE
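The snippet above rests on simple arithmetic: the number of HDFS blocks a file occupies is its size divided by the block size, rounded up. A minimal sketch in plain Python (the helper name is illustrative, not part of any Hadoop API):

```python
import math

def hdfs_block_count(file_size_bytes, block_size_bytes=64 * 1024 * 1024):
    """Number of HDFS blocks a file occupies: size / block size, rounded up."""
    if file_size_bytes == 0:
        return 0
    return math.ceil(file_size_bytes / block_size_bytes)

# A 200 MB file with a 64 MB block size spans 4 blocks:
# three full 64 MB blocks plus one 8 MB remainder block.
print(hdfs_block_count(200 * 1024 * 1024))  # 4
```

Note that the last block only occupies as much physical space as the remainder; it is not padded out to the full block size.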
hadoop jar hadoop-multiple-streaming.jar \ ...READ MORE
You can use the get_json_object function to parse the ...READ MORE
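Hive's get_json_object extracts a field from a JSON string using a JSONPath-style expression such as $.store.fruit, returning NULL when the path does not match. As a rough illustration of what that extraction does, here is a hedged plain-Python analogue covering only simple $.key.subkey paths (this is a sketch of the behaviour, not Hive's implementation):

```python
import json

def get_json_object(json_str, path):
    """Tiny subset of Hive's get_json_object: handles $.key.subkey paths only."""
    obj = json.loads(json_str)
    for key in path.lstrip("$.").split("."):
        if not isinstance(obj, dict) or key not in obj:
            return None  # Hive returns NULL for a missing path
        obj = obj[key]
    return obj

doc = '{"store": {"fruit": "apple", "count": 3}}'
print(get_json_object(doc, "$.store.fruit"))  # apple
```

In Hive itself the equivalent call would be SELECT get_json_object(json_col, '$.store.fruit') FROM some_table.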
You can use the following commands in ...READ MORE
/user/cloudera/data1 is not a directory, it is ...READ MORE
Follow these steps: Stop namenode Delete the datanode directory ...READ MORE
There are a few options for backup ...READ MORE
Step 1: Create includes file in /home/hadoop ...READ MORE
This can be solved making use of ...READ MORE
To find this file, your HADOOP_CONF_DIR env ...READ MORE
When the application master fails, each file ...READ MORE
Multiple files are not stored in a ...READ MORE
Sqoop stores metadata in a repository and ...READ MORE
ACID stands for Atomicity, Consistency, Isolation, and Durability. Until ...READ MORE
Hey. You can use the following commands ...READ MORE
You can do that by selecting the ...READ MORE
Input Processing Hive's execution engine (referred to as ...READ MORE
The main difference between HDFS High Availability ...READ MORE
You need to sort RDD and take ...READ MORE
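The "sort the RDD and take the top N" pattern above can be shown with plain Python standing in for an RDD; in Spark itself this would be rdd.sortBy(lambda kv: kv[1], ascending=False).take(n) or rdd.takeOrdered(n, key=lambda kv: -kv[1]). The sample data here is invented:

```python
def top_n(records, n):
    """Sort (key, value) pairs descending by value and keep the first n --
    the same idea as sortBy(..., ascending=False).take(n) on a Spark RDD."""
    return sorted(records, key=lambda kv: kv[1], reverse=True)[:n]

word_counts = [("hadoop", 7), ("spark", 12), ("hive", 3), ("pig", 9)]
print(top_n(word_counts, 2))  # [('spark', 12), ('pig', 9)]
```

For large datasets takeOrdered is usually preferable to a full sort, since it avoids shuffling the entire RDD just to keep N rows.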
Follow these steps: Step 1: Import all these hadoop ...READ MORE
Pig can be used in two modes: 1) ...READ MORE
hdfs dfs -put input_file_name output_location READ MORE
Initially in Hadoop 1.x, the NameNode was ...READ MORE
Hadoop framework divides a large file into ...READ MORE
Try to restart the mysqld server and then login: sudo ...READ MORE
A MapReduce job usually splits the input data-set into ...READ MORE
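The split → map → shuffle → reduce flow described above can be sketched as a toy in-memory model in plain Python (this is not the Hadoop API, and the sample input is invented):

```python
from collections import defaultdict

def mapreduce_word_count(lines):
    """Toy model of a MapReduce job: map each input record to (word, 1)
    pairs, shuffle the pairs by key, then reduce each group by summing."""
    # Map phase: each input record yields intermediate key/value pairs.
    mapped = [(word, 1) for line in lines for word in line.split()]
    # Shuffle phase: group intermediate values by key.
    groups = defaultdict(list)
    for word, one in mapped:
        groups[word].append(one)
    # Reduce phase: combine each group into a final value.
    return {word: sum(ones) for word, ones in groups.items()}

print(mapreduce_word_count(["hadoop spark", "hadoop hive"]))
# {'hadoop': 2, 'spark': 1, 'hive': 1}
```

In a real job the map tasks run in parallel, one per input split, and the shuffle moves intermediate pairs across the network to the reducers; this sketch only shows the data flow, not the distribution.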
The command you are using is wrong. ...READ MORE
Yes. It is not necessary to set ...READ MORE
Follow these steps: First start hadoop daemons:
cd $HADOOP_HOME/sbin
./start-all.sh
Now ...READ MORE
Distributed Cache is an important feature provided ...READ MORE
hadoop dfsadmin -safemode leave READ MORE
Sqoop is used to transfer any data ...READ MORE
Yes, one can build “Spark” for a specific ...READ MORE
Check the ip address mentioned in core-site.xml ...READ MORE
There is no JobTracker in the Hadoop 2.2.0 YARN framework. ...READ MORE
Hi, the wordcount example is failing on the Edureka VM (VM is ...READ MORE
Try this: val new_records = sc.newAPIHadoopRDD(hadoopConf,classOf[ ...READ MORE
You can re-install openssh-client and openssh-server: $ sudo ...READ MORE
Seems like it is running on default ...READ MORE
The mapreduce task happens in the following ...READ MORE
mapper.py
#!/usr/bin/python
import sys
# Word Count Example
# input comes from ...READ MORE
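The truncated mapper.py above is a Hadoop Streaming word-count mapper: it reads lines from stdin and emits tab-separated "word\t1" pairs on stdout for the reducer to sum. Since the original body is cut off, this is a reconstruction of the standard pattern, factored into a small function (a sketch, not the original answer's exact code):

```python
def map_line(line):
    """Emit 'word<TAB>1' strings for one input line, Hadoop Streaming style."""
    return ["%s\t1" % word for word in line.strip().split()]

# In a real streaming job the mapper loops over sys.stdin;
# here a small in-memory sample stands in for the input split.
for line in ["hadoop is fast", "hadoop scales"]:
    for pair in map_line(line):
        print(pair)
```

Hadoop Streaming treats everything before the first tab as the key, which is why the mapper emits "word\t1" rather than, say, a space-separated pair.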
Both Spark and Hadoop MapReduce are used ...READ MORE
You can use hdfs fsck / to ...READ MORE
First make sure you have ant installed ...READ MORE
We would like to say that the ...READ MORE
When you copy a file from the ...READ MORE
First check if all daemons are running: sudo ...READ MORE
There could be more than one reason ...READ MORE