- The installation steps follow below.
- First, transfer the installation package to the target directory and extract it:
- tar -zxvf hadoop-3.1.3.tar.gz -C /usr/local/soft/
- After extraction, you can rename the directory:
- cd /usr/local/soft/
- mv hadoop-3.1.3 hadoop
- Configure the environment variables (append these to /etc/profile):
- export HADOOP_HOME=/usr/local/soft/hadoop (this must be your own installation directory)
- export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/sbin:$HADOOP_HOME/bin
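- To apply the variables in your current shell and confirm they resolve, you can run the following (assuming, as the scp commands later in this guide suggest, that the exports were placed in /etc/profile):
- source /etc/profile
- hadoop version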
- Enter Hadoop's configuration directory and begin editing the config files:
- cd /usr/local/soft/hadoop/etc/hadoop/
- core-site.xml
- vim core-site.xml
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://master:9000</value>
    </property>
    <property>
        <name>io.file.buffer.size</name>
        <value>131072</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>file:/usr/local/soft/hadoop/tmp/</value>
    </property>
</configuration>
- hdfs-site.xml
- vim hdfs-site.xml
<configuration>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>/usr/local/soft/hadoop/tmp</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/usr/local/soft/hadoop/data/data</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
    <!-- 9868 is the SecondaryNameNode web port in Hadoop 3; 9870 is already taken by the NameNode UI -->
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>master:9868</value>
    </property>
    <property>
        <name>dfs.permissions.enabled</name>
        <value>false</value>
    </property>
</configuration>
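- The directories referenced above (hadoop.tmp.dir, dfs.namenode.name.dir, dfs.datanode.data.dir) may not exist yet; creating them up front avoids startup surprises. The paths are taken straight from the configs above:
- mkdir -p /usr/local/soft/hadoop/tmp /usr/local/soft/hadoop/data/data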
- yarn-site.xml
- vim yarn-site.xml
<configuration>
    <!-- Site specific YARN configuration properties -->
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>master</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>
- mapred-site.xml
- vim mapred-site.xml
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>yarn.app.mapreduce.am.env</name>
        <value>HADOOP_MAPRED_HOME=${HADOOP_HOME}</value>
    </property>
    <property>
        <name>mapreduce.map.env</name>
        <value>HADOOP_MAPRED_HOME=${HADOOP_HOME}</value>
    </property>
    <property>
        <name>mapreduce.reduce.env</name>
        <value>HADOOP_MAPRED_HOME=${HADOOP_HOME}</value>
    </property>
    <!-- mapred.job.tracker is a legacy Hadoop 1 property; it is ignored when mapreduce.framework.name is yarn -->
    <property>
        <name>mapred.job.tracker</name>
        <value>master:9001</value>
    </property>
    <property>
        <name>mapreduce.map.memory.mb</name>
        <value>1536</value>
    </property>
    <property>
        <name>mapreduce.map.java.opts</name>
        <value>-Xmx1024M</value>
    </property>
    <property>
        <name>mapreduce.reduce.memory.mb</name>
        <value>3072</value>
    </property>
    <property>
        <name>mapreduce.reduce.java.opts</name>
        <value>-Xmx2560M</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>master:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>master:19888</value>
    </property>
</configuration>
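- A quick sanity check that the XML parses and the keys resolve; hdfs getconf reads the same configuration files the daemons will, so it should print hdfs://master:9000 and 3 for the two keys below:
- hdfs getconf -confKey fs.defaultFS
- hdfs getconf -confKey dfs.replication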
- Because the cluster is started as root here, the start/stop scripts need user variables. Add the following near the top of sbin/start-dfs.sh and sbin/stop-dfs.sh:
HDFS_DATANODE_USER=root
HADOOP_SECURE_DN_USER=hdfs
HDFS_NAMENODE_USER=root
HDFS_SECONDARYNAMENODE_USER=root
- And add the following to sbin/start-yarn.sh and sbin/stop-yarn.sh, below the existing #!/usr/bin/env bash shebang line:
YARN_RESOURCEMANAGER_USER=root
HADOOP_SECURE_DN_USER=yarn
YARN_NODEMANAGER_USER=root
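- One step worth making explicit before formatting: Hadoop 3's startup scripts launch DataNodes and NodeManagers on the hosts listed in etc/hadoop/workers, and they need JAVA_HOME set in hadoop-env.sh. A minimal sketch, assuming the master/slave1/slave2 hostnames used by the scp commands below, that master should also run a DataNode (dfs.replication=3 implies three data nodes), and that JAVA_HOME is set in the current shell:
- cd /usr/local/soft/hadoop/etc/hadoop/
- printf "master\nslave1\nslave2\n" > workers
- echo "export JAVA_HOME=${JAVA_HOME}" >> hadoop-env.sh (this writes your current JAVA_HOME path literally into the file; adjust if it is not set in this shell)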
- Format the NameNode (I did this on each of the three machines separately, after the distribution step below; strictly speaking only the NameNode host, master, needs it):
- cd /usr/local/soft/hadoop/bin
- ./hdfs namenode -format
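- A successful format prints a line ending in "has been successfully formatted." and creates a current/VERSION file under the name directory configured in hdfs-site.xml above; you can verify with:
- cat /usr/local/soft/hadoop/tmp/current/VERSION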
- Distribute the files to the other nodes:
- scp -r /usr/local/soft/hadoop slave1:/usr/local/soft/
- scp -r /usr/local/soft/hadoop slave2:/usr/local/soft/
- Don't forget the environment configuration file either:
- scp /etc/profile slave1:/etc/profile
- scp /etc/profile slave2:/etc/profile
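- The copied profile only takes effect for new login shells; to apply it in the slaves' current sessions and confirm the Hadoop binaries resolve there, you can run (hostnames as above):
- ssh slave1 "source /etc/profile && hadoop version"
- ssh slave2 "source /etc/profile && hadoop version"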
- Start the cluster (run this on master; start-all.sh lives in Hadoop's sbin directory):
- cd /usr/local/soft/hadoop/sbin
- ./start-all.sh
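- To verify, jps on each node should list the expected daemons (NameNode, SecondaryNameNode, and ResourceManager on master; DataNode and NodeManager on the worker hosts), and the web UIs should respond; 9870 is the Hadoop 3 NameNode UI port and 8088 the ResourceManager UI port:
- jps
- curl -s http://master:9870 > /dev/null && echo "NameNode UI up"
- curl -s http://master:8088 > /dev/null && echo "ResourceManager UI up"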