Add the following to etc/hadoop/hadoop-env.sh (under the Hadoop installation directory):
export JAVA_HOME=/opt/module/jdk1.8.0_144
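Before pointing hadoop-env.sh at a JDK, it is worth confirming the path actually contains one. A minimal sketch (the check_java_home helper is hypothetical, not part of Hadoop; the JDK path is the one used above):

```shell
# Hypothetical helper: a usable JAVA_HOME must contain an executable bin/java
check_java_home() {
  [ -x "$1/bin/java" ]
}

if check_java_home /opt/module/jdk1.8.0_144; then
  echo "JAVA_HOME candidate looks valid"
else
  echo "no executable bin/java under /opt/module/jdk1.8.0_144" >&2
fi
```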
Configure core-site.xml
<!-- NameNode address -->
<property>
<name>fs.defaultFS</name>
<value>hdfs://hadoop101:9000</value>
</property>
<!-- Storage directory for files generated at Hadoop runtime -->
<property>
<name>hadoop.tmp.dir</name>
<value>/opt/module/hadoop-2.7.2/data/tmp</value>
</property>
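For reference, the two properties above sit inside a single `<configuration>` root element; a minimal sketch of the whole core-site.xml file (values taken from above):

```xml
<?xml version="1.0"?>
<configuration>
  <!-- NameNode address -->
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://hadoop101:9000</value>
  </property>
  <!-- Storage directory for files generated at Hadoop runtime -->
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/opt/module/hadoop-2.7.2/data/tmp</value>
  </property>
</configuration>
```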
Configure hdfs-site.xml (a replication factor of 1 is enough for a single-node, pseudo-distributed setup)
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
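As with core-site.xml, this property belongs inside a `<configuration>` root; a minimal hdfs-site.xml sketch:

```xml
<?xml version="1.0"?>
<configuration>
  <!-- single replica: sufficient for pseudo-distributed mode -->
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
```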
2. Start the cluster
Format the NameNode on the master node (do not reformat casually: each format generates a new cluster ID, so existing DataNodes will no longer match)
bin/hdfs namenode -format
Start the NameNode:
sbin/hadoop-daemon.sh start namenode
Start the DataNode:
sbin/hadoop-daemon.sh start datanode
Check the cluster processes:
jps
Note: jps is a JDK command; Hadoop itself is written in Java, so the JDK must be installed.
HDFS web UI: http://hadoop101:50070/dfshealth.html#tab-overview
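A quick way to confirm both daemons appear in the jps listing; the check_daemons helper below is a hypothetical convenience for illustration, not part of Hadoop:

```shell
# Hypothetical helper: scan a jps listing for expected daemon names
check_daemons() {
  listing="$1"; shift
  for d in "$@"; do
    if echo "$listing" | grep -q "$d"; then
      echo "$d is up"
    else
      echo "$d is NOT running"
    fi
  done
}

check_daemons "$(jps 2>/dev/null)" NameNode DataNode
```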
Running HDFS (Hadoop Distributed File System) in pseudo-distributed mode
Create the input files, then list and view them:
bin/hdfs dfs -ls /user/dev1/input/
bin/hdfs dfs -cat /user/dev1/input/word.txt
Run the MapReduce example:
bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.2.jar wordcount /user/dev1/input/ /user/dev1/output
View the result:
bin/hdfs dfs -cat /user/dev1/output/*
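To see what the example computes, the same count can be approximated locally with standard tools (the sample word.txt content below is an assumption for illustration, not the actual input):

```shell
# Local, non-MapReduce approximation of wordcount:
# one word per line, then count occurrences of each word
printf 'hello world\nhello hadoop\n' > word.txt   # sample input (assumed)
tr -s ' \t' '\n' < word.txt | sort | uniq -c
```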
Configure yarn-site.xml
Do this as the dev1 user:
sudo vim yarn-site.xml
<!-- Enable log aggregation -->
<property>
<name>yarn.log-aggregation-enable</name>
<value>true</value>
</property>
<!-- Retain logs for 7 days -->
<property>
<name>yarn.log-aggregation.retain-seconds</name>
<value>604800</value>
</property>
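The retention value is just 7 days expressed in seconds, which can be sanity-checked in the shell:

```shell
# 7 days * 24 hours * 3600 seconds = 604800
echo $((7 * 24 * 3600))   # prints 604800
```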
Start the NodeManager, ResourceManager, and JobHistoryServer:
sbin/yarn-daemon.sh start resourcemanager
sbin/yarn-daemon.sh start nodemanager
sbin/mr-jobhistory-daemon.sh start historyserver
jps
6150 NodeManager
5912 ResourceManager
6284 JobHistoryServer
6317 Jps
Run WordCount (delete /user/dev1/output first if it already exists; MapReduce will not overwrite an existing output directory):
bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.2.jar wordcount /user/dev1/input /user/dev1/output
The full command history follows:
history
1 java -version
2 ls
3 cd etc
4 ls
5 cd hadoop/
6 ls
7 vi yarn-site.xml
8 sbin/yarn-daemon.sh start resourcemanager
9 sbin/yarn-daemon.sh start nodemanager
10 sbin/mr-jobhistory-daemon.sh start historyserver
11 sbin/yarn-daemon.sh start resourcemanager
12 cd ..
13 ls
14 sbin/yarn-daemon.sh start resourcemanager
15 sbin/yarn-daemon.sh start nodemanager
16 sbin/mr-jobhistory-daemon.sh start historyserver
17 jps
18 bin/hdfs dfs -rm -R /user/dev1/output
19 bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.2.jar wordcount /user/dev1/input /user/dev1/output