This article describes in detail the whole process of setting up a Hadoop 2.x pseudo-distributed environment, for your reference. The specific steps are as follows.
1. Edit hadoop-env.sh, yarn-env.sh, and mapred-env.sh
Method: open these three files with Notepad++ (as the beifeng user)
Add the line: export JAVA_HOME=/opt/modules/jdk1.7.0_67
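Before editing, it is worth confirming that a JDK really is installed at that path; a quick check, assuming the same install location as above:
$ /opt/modules/jdk1.7.0_67/bin/java -version   # should report Java 1.7.0_67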
2. Edit the core-site.xml, hdfs-site.xml, yarn-site.xml, and mapred-site.xml configuration files
1) Edit core-site.xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://Hadoop-senior02.beifeng.com:8020</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/opt/modules/hadoop-2.5.0/data</value>
  </property>
</configuration>
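The directory named by hadoop.tmp.dir above does not exist in a fresh install; a minimal sketch of creating it, assuming the same /opt/modules/hadoop-2.5.0 install path:
$ mkdir -p /opt/modules/hadoop-2.5.0/data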
2) Edit hdfs-site.xml
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.namenode.http-address</name>
    <value>Hadoop-senior02.beifeng.com:50070</value>
  </property>
</configuration>
3) Edit yarn-site.xml
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>Hadoop-senior02.beifeng.com</value>
  </property>
  <property>
    <name>yarn.log-aggregation-enable</name>
    <value>true</value>
  </property>
  <property>
    <name>yarn.log-aggregation.retain-seconds</name>
    <value>86400</value>
  </property>
</configuration>
4) Edit mapred-site.xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>0.0.0.0:19888</value>
  </property>
</configuration>
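In the Hadoop 2.x binary distribution only mapred-site.xml.template ships by default; if mapred-site.xml is not present yet, copy the template first (run from the Hadoop install directory):
$ cp etc/hadoop/mapred-site.xml.template etc/hadoop/mapred-site.xml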
3. Start HDFS
1) Format the NameNode: $ bin/hdfs namenode -format
2) Start the NameNode: $ sbin/hadoop-daemon.sh start namenode
3) Start the DataNode: $ sbin/hadoop-daemon.sh start datanode
4) HDFS monitoring web page: http://hadoop-senior02.beifeng.com:50070
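A quick way to confirm both daemons came up before opening the web page (a hedged check):
$ jps   # should list a NameNode and a DataNode process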
4. Start YARN
1) Start the ResourceManager: $ sbin/yarn-daemon.sh start resourcemanager
2) Start the NodeManager: $ sbin/yarn-daemon.sh start nodemanager
3) YARN monitoring web page: http://hadoop-senior02.beifeng.com:8088
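The wordcount test in step 5 below expects its input file to already exist in HDFS; a minimal sketch of preparing it (the local file name and contents are assumptions chosen to match the result shown later):
$ printf "hadoop mapreduce yarn\nhadoop mapreduce jps\n" > /tmp/sort.txt
$ bin/hdfs dfs -mkdir -p /input
$ bin/hdfs dfs -put /tmp/sort.txt /input/sort.txt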
5. Test the wordcount example jar
1) Change to the Hadoop directory: /opt/modules/hadoop-2.5.0
2) Run the test: $ bin/yarn jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.5.0.jar wordcount /input/sort.txt /output6/
Run log:
16/05/08 06:39:13 INFO client.RMProxy: Connecting to ResourceManager at Hadoop-senior02.beifeng.com/192.168.241.130:8032
16/05/08 06:39:15 INFO input.FileInputFormat: Total input paths to process : 1
16/05/08 06:39:15 INFO mapreduce.JobSubmitter: number of splits:1
16/05/08 06:39:15 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1462660542807_0001
16/05/08 06:39:16 INFO impl.YarnClientImpl: Submitted application application_1462660542807_0001
16/05/08 06:39:16 INFO mapreduce.Job: The url to track the job: http://Hadoop-senior02.beifeng.com:8088/proxy/application_1462660542807_0001/
16/05/08 06:39:16 INFO mapreduce.Job: Running job: job_1462660542807_0001
16/05/08 06:39:36 INFO mapreduce.Job: Job job_1462660542807_0001 running in uber mode : false
16/05/08 06:39:36 INFO mapreduce.Job: map 0% reduce 0%
16/05/08 06:39:48 INFO mapreduce.Job: map 100% reduce 0%
16/05/08 06:40:04 INFO mapreduce.Job: map 100% reduce 100%
16/05/08 06:40:04 INFO mapreduce.Job: Job job_1462660542807_0001 completed successfully
16/05/08 06:40:04 INFO mapreduce.Job: Counters: 49
3) View the results: $ bin/hdfs dfs -text /output6/par*
Result:
hadoop 2
jps 1
mapreduce 2
yarn 1
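MapReduce refuses to write to an output directory that already exists, so to re-run the job either choose a new path (as the /output6/ name above suggests) or delete the old one first, for example:
$ bin/hdfs dfs -rm -r /output6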
6. MapReduce history server
1) Start it: $ sbin/mr-jobhistory-daemon.sh start historyserver
2) Web UI: http://hadoop-senior02.beifeng.com:19888
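The web UI above only becomes reachable once the daemon is running; a quick check:
$ jps   # should now also list a JobHistoryServer process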
7. What HDFS, YARN, and MapReduce do
1) HDFS: a distributed, highly fault-tolerant file system designed to run on inexpensive commodity hardware.
HDFS has a master/slave architecture made up of a NameNode and DataNodes: the NameNode manages the namespace (the file system metadata), while the DataNodes provide the storage. Data is stored on the DataNodes as blocks, 128 MB each by default.
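The block size is controlled by the dfs.blocksize property and can be read back from a running cluster; a sketch, assuming the getconf command available in Hadoop 2.x:
$ bin/hdfs getconf -confKey dfs.blocksize   # prints 134217728 (128 MB) unless overridden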
2) YARN: a general-purpose resource management system that provides unified resource management and scheduling for the applications running on top of it.
YARN consists of a ResourceManager and NodeManagers: the ResourceManager is responsible for scheduling and allocating resources across the cluster, while each NodeManager manages the resources of its own node and runs the containers that do the actual processing.
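With the daemons started in step 4, the YARN CLI can be used to see both sides at work (a sketch using commands available in 2.x):
$ bin/yarn node -list          # NodeManagers registered with the ResourceManager
$ bin/yarn application -list   # applications currently submitted to the ResourceManager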
3) MapReduce: a computation model split into a Map phase and a Reduce phase.
The map phase processes each line of input and emits key-value pairs, which are passed on to the reduce phase; the reduce phase aggregates and summarizes the data handed to it by the map phase.
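Hadoop streaming makes this split concrete without writing Java, by letting ordinary shell commands act as the map and reduce steps; a minimal sketch (the streaming jar path is the usual location in the 2.5.0 tarball, and the /output7/ name is an assumption):
# cat emits each input line unchanged (the map step); wc -l counts the lines it receives (the reduce step)
$ bin/yarn jar share/hadoop/tools/lib/hadoop-streaming-2.5.0.jar \
    -input /input/sort.txt \
    -output /output7/ \
    -mapper cat \
    -reducer "wc -l"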
That is all for this article; I hope it helps with your study.