[BigDataHadoop: Hadoop & OLAP Database Management System.V16] [Deployment.OLAP Database Management System][|Kylin: Spark Core High Availability Configuration|]
1. High availability configuration: Spark standalone cluster setup
### --- Modify the spark-env.sh file and distribute it to the cluster
[root@hadoop01 ~]# vim $SPARK_HOME/conf/spark-env.sh
# export SPARK_MASTER_HOST=hadoop01 # comment out these two lines
# export SPARK_MASTER_PORT=7077 # comment out these two lines
~~~ # Append the following to the end of the file
export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=hadoop01,hadoop02,hadoop03 -Dspark.deploy.zookeeper.dir=/spark"
~~~ # Distribute to the other nodes
[root@hadoop01 ~]# rsync-script $SPARK_HOME/conf/spark-env.sh
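Before restarting, it is worth sanity-checking that the three `-D` properties in SPARK_DAEMON_JAVA_OPTS parse out as expected. A minimal sketch, with the value hard-coded from spark-env.sh above (in practice you would `source` the file instead):

```shell
# Value copied from spark-env.sh above; hard-coded here so the sketch is self-contained.
OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=hadoop01,hadoop02,hadoop03 -Dspark.deploy.zookeeper.dir=/spark"

# Extract each property value with sed; [^ ]* stops at the next space.
mode=$(echo "$OPTS" | sed -n 's/.*recoveryMode=\([^ ]*\).*/\1/p')
zk_url=$(echo "$OPTS" | sed -n 's/.*zookeeper.url=\([^ ]*\).*/\1/p')
zk_dir=$(echo "$OPTS" | sed -n 's/.*zookeeper.dir=\([^ ]*\).*/\1/p')

echo "recoveryMode=$mode"     # ZOOKEEPER
echo "zookeeper.url=$zk_url"  # hadoop01,hadoop02,hadoop03
echo "zookeeper.dir=$zk_dir"  # /spark
```

If `recoveryMode` is anything other than ZOOKEEPER (the other option is FILESYSTEM), the Masters will not coordinate through ZK and failover will not work.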
2. Start the cluster and verify
### --- Start the Spark cluster on hadoop01
~~~ # Restart the Spark service on the hadoop01 node; the HDFS, YARN, and ZooKeeper services must already be running
[root@hadoop01 ~]# stop-all-spark.sh
[root@hadoop01 ~]# start-all-spark.sh
~~~ # Check the service processes
[root@hadoop01 ~]# jps
Hadoop01 Master Worker # the active Master is now on Hadoop01
Hadoop02 Worker
Hadoop03 Worker
3. In a browser, open http://hadoop01:8080/ ; the Master status shows ALIVE
### --- Start a Master service on Hadoop02
[root@hadoop02 ~]# start-master.sh
~~~ # Check the service processes
[root@hadoop01 ~]# jps
Hadoop01 Master Worker # the active Master is on Hadoop01
Hadoop02 Master Worker # the standby Master is now on Hadoop02
Hadoop03 Worker
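Besides the browser, each standalone Master also serves a JSON summary of its state at `http://<host>:8080/json/`, which is handy for scripted checks. A sketch of parsing the `status` field; the sample response below is abbreviated and hypothetical, so that the parsing logic can run without a live cluster (replace it with the commented `curl` line against a real Master):

```shell
# On a real cluster you would fetch the summary per host, e.g.:
#   sample=$(curl -s "http://hadoop01:8080/json/")
# Abbreviated, hypothetical sample response for illustration:
sample='{"url":"spark://hadoop01:7077","workers":[],"status":"ALIVE"}'

# Pull out the status value (ALIVE or STANDBY) with sed.
status=$(echo "$sample" | sed -n 's/.*"status":"\([A-Z]*\)".*/\1/p')
echo "master status: $status"  # ALIVE
```

Running the same check against hadoop02 after `start-master.sh` should report STANDBY, matching the browser result in the next step.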
4. In a browser, open http://hadoop02:8080/ ; this Master's status is STANDBY
### --- Stop the cluster
~~~ # Stop the Spark cluster
[root@hadoop02 ~]# stop-all-spark.sh
~~~ # Stop the ZooKeeper service
[root@hadoop02 ~]# ./zk-all.sh stop
5. Notes on ZooKeeper
### --- High availability (recovery via ZK or a local file; the cluster state is recorded in ZK)
[root@hadoop02 ~]# zkCli.sh
[zk: localhost:2181(CONNECTED) 1] ls / # where the state is recorded
[zookeeper, spark]
[zk: localhost:2181(CONNECTED) 2] ls /spark # election information
[leader_election, master_status]
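The same znodes can be listed without an interactive zkCli session, which is convenient in monitoring scripts. A sketch, with the listing hard-coded from the zkCli output above so the parsing runs standalone:

```shell
# Non-interactive form (assumes zkCli.sh is on PATH on a ZK node):
#   zkCli.sh -server hadoop01:2181 ls /spark
# Output copied from the interactive session above:
nodes="[leader_election, master_status]"

# Strip the brackets, split on commas, and count the znodes.
count=$(echo "$nodes" | tr -d '[]' | tr ',' '\n' | wc -l | tr -d ' ')
echo "znodes under /spark: $count"  # 2
```

A healthy HA deployment should show both `leader_election` (ephemeral election nodes) and `master_status` (persisted worker/application state used on failover).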
===============================END===============================