How to control logging functionality in Hadoop


Hadoop uses the default log4j.properties file to control logging. My use case is to control the logs generated by my own classes.

Hadoop daemons such as the JobTracker, TaskTracker, NameNode and DataNode processes use the log4j.properties file from their respective host node's Hadoop conf directory. The rootLogger is set to "INFO,console", which logs all messages at level INFO and above to the console.

I trigger Hadoop jobs using an Oozie workflow. I tried passing my custom log4j.properties file to the job by setting the -Dlog4j.configuration=path/to/log4j.properties system property, but it does not work; the job still picks up the properties from the default file.

I am not supposed to touch the default log4j.properties file.

I am using Oozie v3.1.3-incubating, Hadoop v0.20 and Cloudera CDH v4.0.1.

How can I override the default log4j.properties file, or how can I control logging for my own classes?

Nov 12, 2018 in Big Data Hadoop by Neha

1 answer to this question.


Logs are distributed across your cluster, but because they go to the rootLogger you should be able to see them through the JobTracker, in each task's logs.

If you use rolling file appenders, you will have a difficult time retrieving those files later, again because they are scattered across your task nodes.

If you want to dynamically set log levels, this should be simple enough:

import java.io.IOException;

import org.apache.log4j.Level;
import org.apache.log4j.Logger;

// Inside your Mapper class (e.g. MyMapper):
public static final Logger log = Logger.getLogger(MyMapper.class);

@Override
protected void setup(Context context) throws IOException, InterruptedException {
    // Raise the level for this class's logger only;
    // the default log4j.properties is not modified.
    log.setLevel(Level.WARN);
}

If you want to add your own appenders, you should be able to do that programmatically as well; a rough sketch follows.
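For example, here is a minimal sketch of attaching an extra log4j 1.x appender to your class's logger (the appender name and layout pattern below are only placeholders, not anything your job requires):

import org.apache.log4j.ConsoleAppender;
import org.apache.log4j.Level;
import org.apache.log4j.Logger;
import org.apache.log4j.PatternLayout;

// e.g. inside setup(): attach an additional appender to this class's
// logger only; the cluster's default log4j.properties stays untouched.
Logger log = Logger.getLogger(MyMapper.class);
ConsoleAppender appender =
        new ConsoleAppender(new PatternLayout("%d{ISO8601} %-5p %c{1} - %m%n"));
appender.setName("my-console");      // illustrative name
appender.setThreshold(Level.DEBUG);
appender.activateOptions();
log.addAppender(appender);

You could attach a FileAppender the same way, but as noted above, those files will end up scattered across your task nodes.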

answered Nov 12, 2018 by Frankie
