...
Execute $HADOOP_HOME/bin/stop-all.sh to stop all the nodes in the cluster. This command should be issued from the node where the cluster was started.
If there are any errors, examine the log files in the $HADOOP_HOME/logs/ directory.
Namenode is in safe mode
The following error comes up when the NameNode is in safe mode:
org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete /tmp/hadoop-hadoop/mapred/system. Name node is in safe mode.
...
The reported blocks 319128 needs additional 7183 blocks to reach the threshold 0.9990 of total blocks 326638.
...
Safe mode will be turned off automatically.
This error appears while the NameNode is in safe mode. The NameNode stays in safe mode until a configured percentage of blocks has been reported as available by the DataNodes. The threshold is set by the dfs.namenode.safemode.threshold-pct parameter in hdfs-site.xml. For small or development clusters, where you have very few blocks, it makes sense to lower this parameter from its default value of 0.999f; otherwise a single missing block can leave the cluster hanging in safe mode.
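As a sketch, lowering the threshold for a small cluster might look like this in hdfs-site.xml (the 0.95f value here is only an illustrative choice, not a recommendation):

```xml
<!-- hdfs-site.xml: let the NameNode leave safe mode once 95% of blocks
     have been reported, instead of the stricter default -->
<property>
  <name>dfs.namenode.safemode.threshold-pct</name>
  <value>0.95f</value>
</property>
```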
...
$HADOOP_HOME/bin/hadoop dfsadmin -safemode leave
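Before forcing the NameNode out of safe mode, it can help to check, or simply wait on, its current state; both subcommands below are standard dfsadmin options and need a running cluster:

```
# Print the current safe mode state ("Safe mode is ON" / "Safe mode is OFF")
$HADOOP_HOME/bin/hadoop dfsadmin -safemode get

# Block until the NameNode leaves safe mode on its own
$HADOOP_HOME/bin/hadoop dfsadmin -safemode wait
```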
Namenode is not getting started
...
- Stop the full cluster, i.e. both MapReduce and HDFS layers.
- Delete the data directory on the problematic DataNode: the directory is specified by dfs.data.dir in conf/hdfs-site.xml.
- Reformat the NameNode. WARNING: all HDFS data is lost during this process!
- Restart the cluster.
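Assuming a classic Hadoop 1.x layout with the stock start/stop scripts, the steps above can be sketched as shell commands; the /app/hadoop/tmp/dfs/data path is only a placeholder for whatever dfs.data.dir points to on your cluster:

```
# 1. Stop both layers (MapReduce and HDFS)
$HADOOP_HOME/bin/stop-all.sh

# 2. On the problematic DataNode, delete the data directory
#    (placeholder path -- substitute your dfs.data.dir value)
rm -rf /app/hadoop/tmp/dfs/data

# 3. Reformat the NameNode -- WARNING: destroys all HDFS data
$HADOOP_HOME/bin/hadoop namenode -format

# 4. Restart the cluster
$HADOOP_HOME/bin/start-all.sh
```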
...