4.2.1 Case Objectives
(1) Build a ZooKeeper cluster with 3 machines.
(2) Use the ZooKeeper cluster and verify the role of each node.
4.2.2 Case Analysis
1. Node Planning
The node plan for the ZooKeeper cluster is shown in the following table.

IP | Hostname | Node
192.168.213.161 | zookeeper1 | Cluster node
192.168.213.162 | zookeeper2 | Cluster node
192.168.213.163 | zookeeper3 | Cluster node

2. Base Preparation
Log in to the OpenStack platform and create three cloud hosts from the provided CentOS_7.2_x86_64_XD.qcow2 image, using a flavor with 2 vCPUs, 4 GB of memory, and a 50 GB disk. The ZooKeeper service will be installed from the provided zookeeper-3.4.14.tar.gz package and gpmall-repo directory.
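For reference, cloud hosts like these could be created from the OpenStack CLI along the following lines. This is a sketch only: it assumes the qcow2 file has been uploaded to Glance as an image named CentOS_7.2_x86_64_XD, and the flavor name m1.zookeeper (2 vCPU/4 GB/50 GB) and network name net1 are hypothetical placeholders for resources that must already exist on your platform.
# openstack server create --image CentOS_7.2_x86_64_XD --flavor m1.zookeeper --network net1 zookeeper1
Repeat the command with the names zookeeper2 and zookeeper3 to create the other two hosts.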
4.2.3 Case Implementation
1. Basic Environment Configuration
Connect to the three cloud hosts with SecureCRT.
Set the hostnames of the 3 nodes to zookeeper1, zookeeper2, and zookeeper3 with the following commands:
zookeeper1 node:
[root@localhost ~]# hostnamectl set-hostname zookeeper1
zookeeper2 node:
[root@localhost ~]# hostnamectl set-hostname zookeeper2
zookeeper3 node:
[root@localhost ~]# hostnamectl set-hostname zookeeper3
After the change, reconnect in SecureCRT and check the hostname:
zookeeper1 node:
[root@zookeeper1 ~]# hostnamectl
Static hostname: zookeeper1
Icon name: computer-vm
Chassis: vm
Machine ID: dae72fe0cc064eb0b7797f25bfaf69df
Boot ID: c642ea4be7d349d0a929e557f23ce3dc
Virtualization: kvm
Operating System: CentOS Linux 7 (Core)
CPE OS Name: cpe:/o:centos:centos:7
Kernel: Linux 3.10.0-229.el7.x86_64
Architecture: x86_64
zookeeper2 node:
[root@zookeeper2 ~]# hostnamectl
Static hostname: zookeeper2
Icon name: computer-vm
Chassis: vm
Machine ID: dae72fe0cc064eb0b7797f25bfaf69df
Boot ID: cfcaf92af7a44028a098dc4792b441f4
Virtualization: kvm
Operating System: CentOS Linux 7 (Core)
CPE OS Name: cpe:/o:centos:centos:7
Kernel: Linux 3.10.0-229.el7.x86_64
Architecture: x86_64
zookeeper3 node:
[root@zookeeper3 ~]# hostnamectl
Static hostname: zookeeper3
Icon name: computer-vm
Chassis: vm
Machine ID: dae72fe0cc064eb0b7797f25bfaf69df
Boot ID: cff5bbd45243451e88d14e1ec75098c0
Virtualization: kvm
Operating System: CentOS Linux 7 (Core)
CPE OS Name: cpe:/o:centos:centos:7
Kernel: Linux 3.10.0-229.el7.x86_64
Architecture: x86_64
2. Configure the hosts File
Modify the /etc/hosts file on all 3 nodes, mapping each node's actual IP address to its hostname. In this walkthrough the cloud hosts were assigned addresses on the 172.16.51.0/24 network (substitute your own addresses if they differ from these or from the sample addresses in the planning table). All 3 nodes use the same entries:
# vi /etc/hosts
172.16.51.23 zookeeper1
172.16.51.32 zookeeper2
172.16.51.41 zookeeper3
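A quick way to confirm that the names resolve and the nodes can reach each other (run from any node; ICMP must be allowed by the platform's security group rules):
# ping -c 1 zookeeper1
# ping -c 1 zookeeper2
# ping -c 1 zookeeper3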
3. Configure the Local YUM Repository
On all 3 nodes, move the default repository files out of the way (the provided gpmall-repo directory is assumed to have been uploaded to /opt on each node, matching the baseurl below):
# mv /etc/yum.repos.d/* /media/
Then create /etc/yum.repos.d/local.repo on all 3 nodes with the following content:
# cat /etc/yum.repos.d/local.repo
[gpmall]
name=gpmall
baseurl=file:///opt/gpmall-repo
gpgcheck=0
enabled=1
Clean the YUM cache and verify that the package list loads from the new repository:
# yum clean all
# yum list
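Optionally, confirm that the gpmall repository is registered and enabled (not part of the original steps, just a quick check):
# yum repolist enabled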
4. Install the JDK
Install the Java JDK environment on all 3 nodes; run the following commands on each node:
# yum install -y java-1.8.0-openjdk java-1.8.0-openjdk-devel
# java -version
openjdk version "1.8.0_222"
OpenJDK Runtime Environment (build 1.8.0_222-b10)
OpenJDK 64-Bit Server VM (build 25.222-b10, mixed mode)
5. Deploy the ZooKeeper Cluster
Upload zookeeper-3.4.14.tar.gz to all 3 nodes and extract it (here under /root, which matches the paths shown in the output later):
# tar -zxvf zookeeper-3.4.14.tar.gz
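The tarball ships with a sample configuration, conf/zoo_sample.cfg. The zoo.cfg edited in the next step can be created by copying the sample first:
# cd zookeeper-3.4.14/conf
# cp zoo_sample.cfg zoo.cfg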
[root@zookeeper1 conf]# vi zoo.cfg
[root@zookeeper1 conf]# grep -n '^[a-zA-Z]' zoo.cfg
2:tickTime=2000
5:initLimit=10
8:syncLimit=5
12:dataDir=/tmp/zookeeper
14:clientPort=2181
29:server.1=172.16.51.23:2888:3888
30:server.2=172.16.51.32:2888:3888
31:server.3=172.16.51.41:2888:3888
Configuration parameters explained:
- initLimit: In cluster mode, a ZooKeeper ensemble consists of multiple zk processes, one of which is the leader while the rest are followers. When a follower first establishes a connection to the leader, a considerable amount of data may be transferred between them, especially if the follower's data lags far behind the leader's. initLimit sets the maximum time allowed for this initial synchronization after the connection is established.
- syncLimit: The maximum time allowed for a message exchange (request and response) between a follower and the leader.
- tickTime: The base unit, in milliseconds, for the two timeouts above. For example, with tickTime=2000 and initLimit=10 as configured here, the initial synchronization timeout is 2000 ms * 10 = 20 seconds.
- server.id=host:port1:port2: Declares an ensemble member. id identifies the zk process, port1 (2888 here) is the port followers use to connect to the leader, and port2 (3888 here) is used for leader election.
- dataDir: Has the same meaning as in standalone mode, except that in cluster mode the directory must also contain a myid file. The myid file holds a single line whose content is a number from 1 to 255; this number is the id from the server.id entries above and identifies the zk process.
Note: Perform the same operations and configuration changes on the zookeeper2 and zookeeper3 nodes as on zookeeper1.
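As an alternative to editing with vi, the three ensemble entries can be appended non-interactively on each node (a sketch, assuming zoo.cfg was copied from zoo_sample.cfg, whose defaults already match the tickTime, initLimit, syncLimit, dataDir, and clientPort values shown above):
# cat >> /root/zookeeper-3.4.14/conf/zoo.cfg << EOF
server.1=172.16.51.23:2888:3888
server.2=172.16.51.32:2888:3888
server.3=172.16.51.41:2888:3888
EOF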
Create the myid file with the following commands:
zookeeper1 node:
[root@zookeeper1 ~]# mkdir /tmp/zookeeper
[root@zookeeper1 ~]# vi /tmp/zookeeper/myid
[root@zookeeper1 ~]# cat /tmp/zookeeper/myid
1
zookeeper2 node:
[root@zookeeper2 ~]# mkdir /tmp/zookeeper
[root@zookeeper2 ~]# vi /tmp/zookeeper/myid
[root@zookeeper2 ~]# cat /tmp/zookeeper/myid
2
zookeeper3 node:
[root@zookeeper3 ~]# mkdir /tmp/zookeeper
[root@zookeeper3 ~]# vi /tmp/zookeeper/myid
[root@zookeeper3 ~]# cat /tmp/zookeeper/myid
3
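Equivalently, the directory and file can be created in one step on each node (use id 2 on zookeeper2 and id 3 on zookeeper3):
# mkdir -p /tmp/zookeeper
# echo 1 > /tmp/zookeeper/myid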
6. Start the ZooKeeper Service
Enter the zookeeper-3.4.14/bin directory on each node, then start the service and check its status.
zookeeper1 node:
[root@zookeeper1 bin]# ./zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /root/zookeeper-3.4.14/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[root@zookeeper1 bin]# ./zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /root/zookeeper-3.4.14/bin/../conf/zoo.cfg
Mode: follower
zookeeper2 node:
[root@zookeeper2 bin]# ./zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /root/zookeeper-3.4.14/bin/../conf/zoo.cfg
Starting zookeeper ... already running as process 10175.
[root@zookeeper2 bin]# ./zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /root/zookeeper-3.4.14/bin/../conf/zoo.cfg
Mode: leader
zookeeper3 node:
[root@zookeeper3 bin]# ./zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /root/zookeeper-3.4.14/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[root@zookeeper3 bin]# ./zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /root/zookeeper-3.4.14/bin/../conf/zoo.cfg
Mode: follower
As the output shows, among the 3 nodes zookeeper2 is the leader and the other two are followers.
At this point, the ZooKeeper cluster configuration is complete.
Note: zkServer.sh status reports an error on a node until a majority of the ensemble is running, so start the service on all nodes first and then check the status again.
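The node roles can also be queried over the client port with ZooKeeper's four-letter stat command (a quick check, assuming the nc utility is installed, for example via yum install -y nc):
# echo stat | nc zookeeper1 2181 | grep Mode
# echo stat | nc zookeeper2 2181 | grep Mode
# echo stat | nc zookeeper3 2181 | grep Mode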