Hadoop is an open-source software framework for storing data and running applications on clusters of commodity hardware. A daemon is a long-running background process that handles the periodic service requests a computer system expects to receive, forwarding those requests to other processes as appropriate. Hadoop has five such daemons:
- NameNode: The NameNode runs on the master system. Its primary purpose is to manage all the metadata: the filesystem namespace and the mapping of files to blocks and block locations.
- Secondary NameNode: The Secondary NameNode takes periodic checkpoints (hourly by default) of the NameNode's metadata by merging the edit log into the fsimage file. Despite its name, it is not a standby NameNode; its checkpoints limit how much edit-log replay is needed if the NameNode crashes and restarts.
- DataNode: The DataNode runs on the slave systems, stores the actual file blocks, and serves read/write requests from clients.
- JobTracker: The JobTracker is a MapReduce daemon and a master process. Each cluster has a single JobTracker and multiple TaskTrackers.
- TaskTracker: The TaskTracker is also a MapReduce daemon, running as a slave process. A cluster can have multiple TaskTrackers, and each one is responsible for executing the tasks assigned to it by the JobTracker.
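The NameNode's role above — keeping metadata about which blocks make up a file and where their replicas live, while DataNodes hold the actual data — can be sketched with a toy model. Everything here (class and variable names, the data layout) is illustrative and is not Hadoop's real data model:

```python
# Toy model of NameNode metadata: file -> block IDs, block ID -> replica
# locations. Illustrative only; Hadoop's actual structures differ.

class ToyNameNode:
    def __init__(self):
        self.file_blocks = {}      # file path -> ordered list of block IDs
        self.block_locations = {}  # block ID -> set of DataNode hostnames

    def add_file(self, path, blocks):
        self.file_blocks[path] = list(blocks)

    def add_replica(self, block_id, datanode):
        self.block_locations.setdefault(block_id, set()).add(datanode)

    def locate(self, path):
        """Answer a client's read request: where does each block live?"""
        return [(b, sorted(self.block_locations.get(b, set())))
                for b in self.file_blocks.get(path, [])]

nn = ToyNameNode()
nn.add_file("/logs/app.log", ["blk_1", "blk_2"])
nn.add_replica("blk_1", "datanode-a")
nn.add_replica("blk_1", "datanode-b")
nn.add_replica("blk_2", "datanode-b")
print(nn.locate("/logs/app.log"))
```

Note that the NameNode only answers the "where" question; the client then reads the block data directly from the DataNodes.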
Each daemon runs separately in its own JVM.
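The JobTracker/TaskTracker relationship — a single master holding the task queue and handing work to slaves when they ask — can be sketched the same way. This is a simplified, hypothetical model, not Hadoop's actual scheduler:

```python
# Toy sketch of the JobTracker/TaskTracker split: the master owns the
# pending-task queue and assigns a task to whichever slave requests work.

from collections import deque

class ToyJobTracker:
    def __init__(self, tasks):
        self.pending = deque(tasks)
        self.assignments = {}  # task -> TaskTracker name

    def request_task(self, tracker_name):
        """Called when a TaskTracker asks the master for work."""
        if not self.pending:
            return None  # no work left for this tracker
        task = self.pending.popleft()
        self.assignments[task] = tracker_name
        return task

jt = ToyJobTracker(["map-0", "map-1", "reduce-0"])
print(jt.request_task("tracker-1"))  # first pending task goes to tracker-1
print(jt.request_task("tracker-2"))
print(jt.assignments)
```

In real Hadoop this pull-based exchange happens inside the TaskTracker's periodic heartbeat to the JobTracker, which is also how the master detects failed slaves.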