

Hadoop 2.x Cluster Environment Setup

Basic environment setup:

1. Edit /etc/sysconfig/network-scripts/ifcfg-ens33 to bind a static IP
2. Configure hostname-to-IP resolution in /etc/hosts
3. Set the hostname: edit /etc/sysconfig/network and add the line HOSTNAME=<hostname>
4. Disable iptables, SELinux, and firewalld
5. Install the JDK and set $JAVA_HOME
6. Extract Hadoop 2.x to /opt/app and set $HADOOP_HOME
7. Set up passwordless SSH login between all hosts, including from each host to itself (all three machines share the same user, beifeng); see the sketch after this list
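A minimal sketch of steps 2 and 7, assuming the three hosts and IPs from the deployment plan below and the shared beifeng user; run the hosts-file edit as root on every node, and the SSH steps as beifeng on every node:

# step 2: hostname-to-IP resolution, appended to /etc/hosts on every node
cat >> /etc/hosts <<'EOF'
192.168.1.129 hadoop-master
192.168.1.130 hadoop-slave1
192.168.1.131 hadoop-slave2
EOF

# step 7: passwordless SSH; generate a key, then push it to every host,
# including the local one
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
for host in hadoop-master hadoop-slave1 hadoop-slave2; do
    ssh-copy-id beifeng@$host
done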

Hadoop 2.x distributed deployment plan

HOSTNAME        IPADDR          HDFS                          YARN                          MAPREDUCE

hadoop-master   192.168.1.129   NameNode, DataNode            NodeManager                   JobHistoryServer
hadoop-slave1   192.168.1.130   DataNode                      ResourceManager, NodeManager
hadoop-slave2   192.168.1.131   SecondaryNameNode, DataNode   NodeManager

Hadoop 2.x configuration files for each daemon

hdfs:
hadoop-env.sh --> set $JAVA_HOME
core-site.xml --> configure the NameNode address (fs.defaultFS)
                  configure Hadoop's temporary directory (hadoop.tmp.dir)
hdfs-site.xml --> configure the SecondaryNameNode (dfs.namenode.secondary.http-address)
slaves --> list the DataNode nodes by ip/hostname

yarn:
yarn-env.sh --> set $JAVA_HOME
yarn-site.xml --> configure the ResourceManager node (yarn.resourcemanager.hostname)
                  enable log aggregation (yarn.log-aggregation-enable)
                  configure the MapReduce shuffle service (yarn.nodemanager.aux-services = mapreduce_shuffle)
slaves --> list the NodeManager nodes by ip/hostname

mapreduce:
mapred-site.xml --> configure the job history server
                    configure MapReduce to run on YARN (mapreduce.framework.name)

Configure HDFS, YARN, and MapReduce on the hadoop-master node

1. Configure HDFS
(Once $JAVA_HOME is set system-wide, hadoop-env.sh usually needs no further changes.)
a. $HADOOP_HOME/etc/hadoop/core-site.xml

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://hadoop-master:8020</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/opt/data/tmp</value>
    </property>
</configuration>
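hadoop.tmp.dir must point at a directory the cluster user can write to, and /opt is normally root-owned, so create the directory and hand it over first; a sketch, assuming the beifeng user, run on every node:

sudo mkdir -p /opt/data/tmp
sudo chown -R beifeng:beifeng /opt/data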

b. $HADOOP_HOME/etc/hadoop/hdfs-site.xml

(No replication setting is needed; the default factor of 3 matches the three DataNodes. Note that the value below is a host:port pair, not an http/https URL.)

<configuration>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>hadoop-slave2:50090</value>
    </property>
</configuration>

c. $HADOOP_HOME/etc/hadoop/slaves

(The same file also provides the NodeManager node addresses.)

hadoop-master
hadoop-slave1
hadoop-slave2

2. Configure YARN

a. yarn-site.xml

<configuration>
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>hadoop-slave1</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.log-aggregation-enable</name>
        <value>true</value>
    </property>
    <property>
        <name>yarn.log-aggregation.retain-seconds</name>
        <!-- keep aggregated logs for 7 days -->
        <value>604800</value>
    </property>
</configuration>
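With log aggregation enabled, container logs of finished applications are collected into HDFS and can be read from any node instead of hunting through each NodeManager's local disk. The application ID below is a placeholder:

# fetch the aggregated container logs of a finished job
yarn logs -applicationId application_1500000000000_0001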

3. Configure MapReduce

a. mapred-site.xml

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>hadoop-master:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>hadoop-master:19888</value>
    </property>
</configuration>
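A stock Hadoop 2.x distribution ships only mapred-site.xml.template; if mapred-site.xml does not exist yet, create it from the template before editing:

cd $HADOOP_HOME/etc/hadoop
cp mapred-site.xml.template mapred-site.xml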

Copy Hadoop to hadoop-slave1 and hadoop-slave2

scp -r $HADOOP_HOME hadoop-slave1:/opt/app
scp -r $HADOOP_HOME hadoop-slave2:/opt/app
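An optional sanity check that the copies landed and the slaves file matches everywhere ($HADOOP_HOME expands locally before ssh runs, which works here because the path is identical on all three machines):

for host in hadoop-slave1 hadoop-slave2; do
    ssh $host "cat $HADOOP_HOME/etc/hadoop/slaves"
done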

Start the Hadoop cluster

1. On hadoop-master, format the NameNode (first startup only; reformatting wipes existing HDFS metadata)
hdfs namenode -format

2. Start the HDFS cluster
start-dfs.sh

3. Start the YARN cluster; run this on hadoop-slave1, because start-yarn.sh launches the ResourceManager on whichever machine invokes it
start-yarn.sh

4. Start the job history server on hadoop-master
mr-jobhistory-daemon.sh start historyserver

5. Check the running daemons on each node
jps
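If everything matches the deployment plan above, jps should report roughly the following on each node (process IDs omitted):

# expected on hadoop-master
NameNode
DataNode
NodeManager
JobHistoryServer

# expected on hadoop-slave1
DataNode
ResourceManager
NodeManager

# expected on hadoop-slave2
SecondaryNameNode
DataNode
NodeManager

The web UIs give a second check at the Hadoop 2.x default ports: the NameNode at http://hadoop-master:50070, the ResourceManager at http://hadoop-slave1:8088, and the job history server at http://hadoop-master:19888 (the webapp address configured above). As a final smoke test, the bundled examples jar (its exact filename varies with the 2.x version) runs a small job through YARN:

yarn jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar pi 2 10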

END
