The following is a summary of common issues you might encounter when setting up a multi-node Hadoop cluster, and how to resolve them:
Starting and stopping a Hadoop cluster
Go to the node where the Namenode is installed and execute $HADOOP_HOME/bin/start-all.sh, where HADOOP_HOME is the location where Hadoop was installed.
To check whether the expected Hadoop processes are running on a node, use the jps command (part of Sun's Java since v1.5.0).
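For example, on the master node of a classic (pre-YARN) cluster you would typically expect to see the HDFS and MapReduce master daemons; the process IDs below are illustrative only:

```shell
# Run jps as the Hadoop user on the master node.
# On slave nodes, expect DataNode and TaskTracker instead.
jps
# 2287 NameNode
# 2422 SecondaryNameNode
# 2501 JobTracker
# 2650 Jps
```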
...
Execute $HADOOP_HOME/bin/stop-all.sh to stop all the nodes in the cluster. This command should be issued from the node where the cluster was started.
If there are any errors, examine the log files in the $HADOOP_HOME/logs/ directory.
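For example (a sketch; the exact log file names depend on the daemon, the user account, and the hostname):

```shell
# Log files follow the pattern hadoop-<user>-<daemon>-<hostname>.log.
# Show the most recent lines of the NameNode log:
tail -n 50 $HADOOP_HOME/logs/hadoop-*-namenode-*.log

# Search all daemon logs for error and fatal messages:
grep -E 'ERROR|FATAL' $HADOOP_HOME/logs/*.log
```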
Namenode is in safe mode
The following error comes up when the Namenode is in safe mode:
...
HADOOP_HOME/bin/hadoop dfsadmin -safemode leave
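Before forcing the Namenode out of safe mode, you can also check or wait on its current state; all of these are standard subcommands of the dfsadmin tool:

```shell
# Report whether safe mode is currently on or off
$HADOOP_HOME/bin/hadoop dfsadmin -safemode get

# Block until the Namenode leaves safe mode on its own
$HADOOP_HOME/bin/hadoop dfsadmin -safemode wait

# Force the Namenode out of safe mode immediately
$HADOOP_HOME/bin/hadoop dfsadmin -safemode leave
```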
Namenode is not getting started
Sometimes the Namenode fails to start if its data directories have been deleted or corrupted. Usually these directories are configured by the dfs.name.dir and dfs.data.dir properties in $HADOOP_HOME/conf/hdfs-site.xml. Make sure those directories are readable and writable by the Hadoop user.
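A minimal hdfs-site.xml sketch setting both properties; /home/hadoop/dfs is a hypothetical storage location chosen for illustration:

```xml
<configuration>
  <!-- Where the NameNode stores the filesystem image and edit log -->
  <property>
    <name>dfs.name.dir</name>
    <value>/home/hadoop/dfs/name</value>
  </property>
  <!-- Where DataNodes store HDFS block data -->
  <property>
    <name>dfs.data.dir</name>
    <value>/home/hadoop/dfs/data</value>
  </property>
</configuration>
```

Pointing these at a durable location (rather than the default under /tmp) avoids the reboot problem described below.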
...
/tmp/hadoop-${user.name}
which is cleaned after every reboot. So if Namenode data is created inside /tmp, the Namenode will fail to start after a node restart.

Datanode is not getting started - java.io.IOException: Incompatible namespaceIDs
If you see this error in the Datanode log (logs/hadoop-hadoop-datanode-.log), you might be affected by issue HDFS-107 (formerly known as HADOOP-1212). Due to this issue, the Datanode fails to start....
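One commonly used workaround is to clear the affected Datanode's data directory so that it re-registers with the Namenode's current namespaceID. This is a sketch, not a definitive fix: it deletes the node's local block data, so use it only if those blocks are replicated elsewhere or disposable. The /home/hadoop/dfs/data path is a placeholder for your actual dfs.data.dir value:

```shell
# 1. Stop the DataNode on the affected slave node
$HADOOP_HOME/bin/hadoop-daemon.sh stop datanode

# 2. Remove the stale block data (WARNING: destroys this node's
#    local HDFS blocks); substitute your dfs.data.dir path here
rm -rf /home/hadoop/dfs/data

# 3. Restart the DataNode; it recreates the directory with the
#    Namenode's current namespaceID
$HADOOP_HOME/bin/hadoop-daemon.sh start datanode
```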