Hadoop standalone installation and configuration tutorial

  • 2020-04-01 01:08:46
  • OfStack

Standalone installation is mainly used for debugging program logic. The installation steps are basically the same as for a distributed installation: environment variables, the main Hadoop configuration files, SSH configuration, and so on. The main difference lies in the configuration files: the slaves file needs to be modified, and if dfs.replication is greater than 1 in the distributed setup, it must be changed to 1, because there is only one datanode.
Please refer to:
http://acooly.iteye.com/blog/1179828
In a single-machine installation, one machine acts as both the namenode and the JobTracker; it is also the datanode and the TaskTracker, and of course the SecondaryNameNode as well.
The main configuration files are core-site.xml, hdfs-site.xml, and mapred-site.xml. In hdfs-site.xml, set the replication factor to 1:
 
<property> 
<name>dfs.replication</name> 
<value>1</value> 
</property> 
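For reference, the other two files point everything at the local machine. The property values below are assumptions for a typical Hadoop 0.2x single-node setup (localhost with commonly used ports), not values taken from this article:

```xml
<!-- core-site.xml: HDFS namenode address (hdfs://localhost:9000 is an assumed, common choice) -->
<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:9000</value>
</property>

<!-- mapred-site.xml: JobTracker address (localhost:9001 assumed) -->
<property>
  <name>mapred.job.tracker</name>
  <value>localhost:9001</value>
</property>
```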

The other difference lies in the slaves configuration. In a distributed installation, several other machines are listed as datanodes; in single-machine mode the machine itself is the datanode, so the slaves file should contain just the hostname of the machine. For example, if the machine is named hadoop11:
[hadoop@hadoop11 ~]$ cat hadoop/conf/slaves
hadoop11
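Writing that file takes one line. In the sketch below, /tmp/slaves.example stands in for hadoop/conf/slaves so the snippet is safe to run anywhere:

```shell
# Put this machine's hostname (hadoop11 in the article's example) into the
# slaves file; the /tmp path is a stand-in for hadoop/conf/slaves.
echo hadoop11 > /tmp/slaves.example
cat /tmp/slaves.example
```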
After completing the configuration, start Hadoop and check the daemons with jps:
 
$ start-all.sh 
$ jps 
15556 Jps 
15111 JobTracker 
15258 TaskTracker 
15014 SecondaryNameNode 
14861 DataNode 
14712 NameNode 
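A quick way to confirm that all five daemons came up is to grep the jps listing for each expected name. The sketch below simulates the jps output with a variable so the check can be demonstrated without a running cluster; in practice you would use `jps_output=$(jps)`:

```shell
# Simulated `jps` output (in a real session: jps_output=$(jps))
jps_output='15556 Jps
15111 JobTracker
15258 TaskTracker
15014 SecondaryNameNode
14861 DataNode
14712 NameNode'

# Each expected daemon must appear in the listing.
for daemon in NameNode DataNode SecondaryNameNode JobTracker TaskTracker; do
  if echo "$jps_output" | grep -qw "$daemon"; then
    echo "$daemon: running"
  else
    echo "$daemon: MISSING"
  fi
done
```

If any daemon is missing, check the logs under the Hadoop logs directory before running jobs.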

Run the demo:
 
$ echo word1 word2 word2 word3 word3 word3 > words
$ cat words
word1 word2 word2 word3 word3 word3
$ hadoop dfsadmin -safemode leave
$ hadoop fs -copyFromLocal words /single/input/words
$ hadoop fs -cat /single/input/words
12/02/17 19:47:44 INFO security.Groups: Group mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping; cacheTimeout=300000
word1 word2 word2 word3 word3 word3
$ hadoop jar hadoop-0.21.0/hadoop-mapred-examples-0.21.0.jar wordcount /single/input /single/output
...
$ hadoop fs -ls /single/output
...
-rw-r--r--   1 hadoop supergroup          0 2012-02-17 19:50 /single/output/_SUCCESS
-rw-r--r--   1 hadoop supergroup         24 2012-02-17 19:50 /single/output/part-r-00000
$ hadoop fs -cat /single/output/part-r-00000
...
word1 1
word2 2
word3 3
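The result can be double-checked without Hadoop at all: the same counts fall out of a classic coreutils pipeline over the input line, which mirrors the map (split into words) and reduce (count per word) steps of the wordcount example:

```shell
# Split the words onto separate lines, sort, and count duplicates --
# the shell equivalent of the wordcount example's map and reduce steps.
# Prints counts 1, 2, and 3 for word1, word2, and word3 respectively.
echo word1 word2 word2 word3 word3 word3 | tr ' ' '\n' | sort | uniq -c
```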
