
Kubernetes Binary Deployment: Multi-Node Deployment - 创新互联

Before starting this lab, you must first have deployed a single-master k8s cluster.

Single-master deployment blog post:
https://blog.51cto.com/14449528/2469980


Multi-master cluster architecture diagram:



Deploying master2

1. First, stop the firewall service on master2

[root@master2 ~]# systemctl stop firewalld.service
[root@master2 ~]# setenforce 0

2. On master1, copy the kubernetes directory and the server component unit files to master2

[root@master1 k8s]# scp -r /opt/kubernetes/ root@192.168.18.140:/opt
[root@master1 k8s]# scp /usr/lib/systemd/system/{kube-apiserver,kube-controller-manager,kube-scheduler}.service root@192.168.18.140:/usr/lib/systemd/system/

3. Modify the configuration file on master2

[root@master2 ~]# cd /opt/kubernetes/cfg/
[root@master2 cfg]# vim kube-apiserver
5 --bind-address=192.168.18.140 \
7 --advertise-address=192.168.18.140 \
#Change the IP addresses on lines 5 and 7 to master2's address
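The two edits above can also be applied non-interactively with sed. A minimal sketch, run here against a throwaway copy of the file so it is safe anywhere (on a real master2 the path would be /opt/kubernetes/cfg/kube-apiserver); note the substitutions target only the two flags, because the real file's --etcd-servers list also contains master1's IP and must not be changed:

```shell
cfg=$(mktemp)   # stands in for /opt/kubernetes/cfg/kube-apiserver on master2
cat > "$cfg" <<'EOF'
--bind-address=192.168.18.128 \
--advertise-address=192.168.18.128 \
EOF
# rewrite only the bind/advertise options from master1's address to master2's
sed -i -e 's/\(--bind-address=\)[0-9.]*/\1192.168.18.140/' \
       -e 's/\(--advertise-address=\)[0-9.]*/\1192.168.18.140/' "$cfg"
cat "$cfg"
```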

4. Copy the existing etcd certificates from master1 to master2

(Note: master2 must have the etcd certificates, otherwise its apiserver service will fail to start.)

[root@master1 k8s]# scp -r /opt/etcd/ root@192.168.18.140:/opt/
root@192.168.18.140's password:
etcd                                                      100%  516   535.5KB/s   00:00
etcd                                                      100%   18MB  90.6MB/s   00:00
etcdctl                                                   100%   15MB  80.5MB/s   00:00
ca-key.pem                                                100% 1675     1.4MB/s   00:00
ca.pem                                                    100% 1265   411.6KB/s   00:00
server-key.pem                                            100% 1679     2.0MB/s   00:00
server.pem                                                100% 1338   429.6KB/s   00:00
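Before starting kube-apiserver on master2, it helps to confirm all four certificate files actually landed. A minimal check loop; the temp directory and the touch line are stand-ins for /opt/etcd/ssl so the sketch runs on any machine:

```shell
ssl_dir=$(mktemp -d)   # stands in for /opt/etcd/ssl on master2
touch "$ssl_dir"/ca.pem "$ssl_dir"/ca-key.pem "$ssl_dir"/server.pem "$ssl_dir"/server-key.pem   # demo files
missing=0
for f in ca.pem ca-key.pem server.pem server-key.pem; do
    if [ -f "$ssl_dir/$f" ]; then
        echo "OK      $f"
    else
        echo "MISSING $f"
        missing=1
    fi
done
[ "$missing" -eq 0 ] && echo "all etcd certificates present"
```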

5. Start the three component services on master2

[root@master2 cfg]# systemctl start kube-apiserver.service        ##start the service
[root@master2 cfg]# systemctl enable kube-apiserver.service    ##enable it at boot
[root@master2 cfg]# systemctl start kube-controller-manager.service
[root@master2 cfg]# systemctl enable kube-controller-manager.service
[root@master2 cfg]# systemctl start kube-scheduler.service
[root@master2 cfg]# systemctl enable kube-scheduler.service
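The six systemctl calls above can be collapsed into one loop. A sketch; DRY_RUN=echo prints each command instead of executing it, so the loop can be inspected on any machine (unset DRY_RUN on a real master to actually run them):

```shell
DRY_RUN=echo   # set to empty on a real master2 to execute for real
out=$(for svc in kube-apiserver kube-controller-manager kube-scheduler; do
    $DRY_RUN systemctl start  "$svc".service
    $DRY_RUN systemctl enable "$svc".service
done)
echo "$out"
```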

6. Set the environment variable

[root@master2 cfg]# vim /etc/profile
export PATH=$PATH:/opt/kubernetes/bin/  ##append to PATH
[root@master2 cfg]# source /etc/profile      ##reload the profile
[root@master2 cfg]# kubectl get node        ##view cluster node info
NAME             STATUS   ROLES    AGE   VERSION
192.168.18.129   Ready    <none>   21h   v1.12.3
192.168.18.130   Ready    <none>   22h   v1.12.3
#Both node1 and node2 are visible from master2

------master2 deployment is now complete------

Nginx load balancer deployment

Perform the same steps on both lb01 and lb02.

Install the nginx service; the nginx.sh script and keepalived.conf have already been copied to the home directory:

[root@localhost ~]# ls
anaconda-ks.cfg       keepalived.conf  公共  视频  文档  音乐
initial-setup-ks.cfg  nginx.sh         模板  图片  下载  桌面
[root@lb1 ~]# systemctl stop firewalld.service
[root@lb1 ~]# setenforce 0
[root@lb1 ~]# vim /etc/yum.repos.d/nginx.repo
[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/centos/7/$basearch/
gpgcheck=0
##reload the yum repositories
[root@lb1 ~]# yum list
##install the nginx service
[root@lb1 ~]# yum install nginx -y

[root@lb1 ~]# vim /etc/nginx/nginx.conf
##insert the stream block below line 12
stream {

    log_format  main  '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
    access_log  /var/log/nginx/k8s-access.log  main;

    upstream k8s-apiserver {
        server 192.168.18.128:6443;     #master1's IP address
        server 192.168.18.140:6443;     #master2's IP address
    }
    server {
        listen 6443;
        proxy_pass k8s-apiserver;
    }
}

##check the syntax
[root@lb1 ~]# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
##modify the home pages to tell the two load balancers apart
[root@lb1 ~]# cd /usr/share/nginx/html/
[root@lb1 html]# ls
50x.html  index.html
[root@lb1 html]# vim index.html
14 <h1>Welcome to master nginx!</h1>
#Add "master" on line 14 to distinguish this node

[root@lb2 ~]# cd /usr/share/nginx/html/
[root@lb2 html]# ls
50x.html  index.html
[root@lb2 html]# vim index.html
14 <h1>Welcome to backup nginx!</h1>
#Add "backup" on line 14 to distinguish this node

##start the services
[root@lb1 ~]# systemctl start nginx
[root@lb2 ~]# systemctl start nginx

Verify in a browser: entering 192.168.18.150 shows the master nginx home page.

Verify in a browser: entering 192.168.18.151 shows the backup nginx home page.

keepalived installation and deployment

The steps are identical on lb01 and lb02.

1. Install keepalived

[root@lb1 html]# yum install keepalived -y

2. Modify the configuration file

[root@lb1 ~]# ls
anaconda-ks.cfg       keepalived.conf  公共  视频  文档  音乐
initial-setup-ks.cfg  nginx.sh         模板  图片  下载  桌面
[root@lb1 ~]# cp keepalived.conf /etc/keepalived/keepalived.conf
cp: overwrite '/etc/keepalived/keepalived.conf'? yes

[root@lb1 ~]# vim /etc/keepalived/keepalived.conf 
#On lb01 (the MASTER), configure as follows:
! Configuration File for keepalived

global_defs {
   # notification recipient addresses
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   # notification sender address
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_MASTER
}

vrrp_script check_nginx {
    script "/etc/nginx/check_nginx.sh"
}

vrrp_instance VI_1 {
    state MASTER 
    interface ens33
    virtual_router_id 51 # VRRP router ID; must be unique per instance
    priority 100    # priority; set to 90 on the backup server
    advert_int 1    # VRRP advertisement interval, default 1 second
    authentication {  
        auth_type PASS
        auth_pass 1111
    }   
    virtual_ipaddress {
        192.168.18.100/24
    }
    track_script {
        check_nginx
    }
}

#On lb02 (the BACKUP), configure as follows:
! Configuration File for keepalived

global_defs {
   # notification recipient addresses
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   # notification sender address
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_MASTER
}

vrrp_script check_nginx {
    script "/etc/nginx/check_nginx.sh"
}

vrrp_instance VI_1 {
    state BACKUP 
    interface ens33
    virtual_router_id 51 # VRRP router ID; must be unique per instance
    priority 90    # priority; the backup server uses 90
    advert_int 1    # VRRP advertisement interval, default 1 second
    authentication {  
        auth_type PASS
        auth_pass 1111
    }   
    virtual_ipaddress {
        192.168.18.100/24
    }
    track_script {
        check_nginx
    }
}
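Since lb02's configuration differs from lb01's only in the state and priority lines, it can be generated rather than edited by hand. A minimal sketch on a trimmed-down mock config (real paths would be /etc/keepalived/keepalived.conf on each box):

```shell
master_conf=$(mktemp)   # stands in for lb01's /etc/keepalived/keepalived.conf
backup_conf=$(mktemp)   # stands in for lb02's copy
cat > "$master_conf" <<'EOF'
vrrp_instance VI_1 {
    state MASTER
    priority 100
}
EOF
# the only two differences between the MASTER and BACKUP configs
sed 's/state MASTER/state BACKUP/; s/priority 100/priority 90/' \
    "$master_conf" > "$backup_conf"
cat "$backup_conf"
```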

3. Create the monitoring script

[root@lb1 ~]# vim /etc/nginx/check_nginx.sh

#!/bin/bash
# count running nginx processes, excluding the grep itself and this script's own PID
count=$(ps -ef |grep nginx |egrep -cv "grep|$$")

if [ "$count" -eq 0 ];then
        systemctl stop keepalived
fi
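The `egrep -cv "grep|$$"` pipeline counts nginx processes while filtering out the grep command itself and the script's own PID. Its behavior can be seen on a canned ps listing instead of live processes; the PIDs and the self_pid value below are made up for illustration:

```shell
# three lines a real `ps -ef` might show while nginx is up and we grep for it
fake_ps='root   6930     1  0 16:40 ?  00:00:00 nginx: master process
nginx  6931  6930  0 16:40 ?  00:00:00 nginx: worker process
root   7000     1  0 16:41 ?  00:00:00 grep nginx'
self_pid=424242   # stands in for $$ (the script's own PID) in check_nginx.sh
count=$(printf '%s\n' "$fake_ps" | grep nginx | egrep -cv "grep|$self_pid")
echo "$count"   # the grep line is excluded; only the real nginx processes remain
```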

4. Grant execute permission and start the service

[root@lb1 ~]# chmod +x /etc/nginx/check_nginx.sh
[root@lb1 ~]# systemctl start keepalived

5. View the address information
lb01 addresses:

[root@lb1 ~]# ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33:  mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:ba:e6:18 brd ff:ff:ff:ff:ff:ff
    inet 192.168.18.150/24 brd 192.168.35.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet 192.168.18.100/24 scope global secondary ens33             ##the floating (VIP) address is on lb01
       valid_lft forever preferred_lft forever
    inet6 fe80::6ec5:6d7:1b18:466e/64 scope link tentative dadfailed 
       valid_lft forever preferred_lft forever
    inet6 fe80::2a3:b621:ca01:463e/64 scope link tentative dadfailed 
       valid_lft forever preferred_lft forever
    inet6 fe80::d4e2:ef9e:6820:145a/64 scope link tentative dadfailed 
       valid_lft forever preferred_lft forever
3: virbr0:  mtu 1500 qdisc noqueue state DOWN qlen 1000
    link/ether 52:54:00:14:39:99 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
4: virbr0-nic:  mtu 1500 qdisc pfifo_fast master virbr0 state DOWN qlen 1000
    link/ether 52:54:00:14:39:99 brd ff:ff:ff:ff:ff:ff

lb02 addresses:

[root@lb2 ~]# ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33:  mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:1d:ec:b0 brd ff:ff:ff:ff:ff:ff
    inet 192.168.18.151/24 brd 192.168.35.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::6ec5:6d7:1b18:466e/64 scope link tentative dadfailed 
       valid_lft forever preferred_lft forever
    inet6 fe80::2a3:b621:ca01:463e/64 scope link tentative dadfailed 
       valid_lft forever preferred_lft forever
    inet6 fe80::d4e2:ef9e:6820:145a/64 scope link tentative dadfailed 
       valid_lft forever preferred_lft forever
3: virbr0:  mtu 1500 qdisc noqueue state DOWN qlen 1000
    link/ether 52:54:00:14:39:99 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
4: virbr0-nic:  mtu 1500 qdisc pfifo_fast master virbr0 state DOWN qlen 1000
    link/ether 52:54:00:14:39:99 brd ff:ff:ff:ff:ff:ff

6. Test failover
Break lb01 and verify that the VIP floats away

[root@lb1 ~]# pkill nginx
[root@lb1 ~]# systemctl status nginx
● nginx.service - nginx - high performance web server
   Loaded: loaded (/usr/lib/systemd/system/nginx.service; disabled; vendor preset: disabled)
   Active: failed (Result: exit-code) since Sat 2020-02-08 16:54:45 CST; 11s ago
     Docs: http://nginx.org/en/docs/
  Process: 13156 ExecStop=/bin/kill -s TERM $MAINPID (code=exited, status=1/FAILURE)
 Main PID: 6930 (code=exited, status=0/SUCCESS)
[root@lb1 ~]# systemctl status keepalived.service             #keepalived has stopped as well, which shows check_nginx.sh took effect
● keepalived.service - LVS and VRRP High Availability Monitor
   Loaded: loaded (/usr/lib/systemd/system/keepalived.service; disabled; vendor preset: disabled)
   Active: inactive (dead)

Check lb01's addresses:

[root@lb1 ~]# ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33:  mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:ba:e6:18 brd ff:ff:ff:ff:ff:ff
    inet 192.168.18.150/24 brd 192.168.35.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::6ec5:6d7:1b18:466e/64 scope link tentative dadfailed 
       valid_lft forever preferred_lft forever
    inet6 fe80::2a3:b621:ca01:463e/64 scope link tentative dadfailed 
       valid_lft forever preferred_lft forever
    inet6 fe80::d4e2:ef9e:6820:145a/64 scope link tentative dadfailed 
       valid_lft forever preferred_lft forever
3: virbr0:  mtu 1500 qdisc noqueue state DOWN qlen 1000
    link/ether 52:54:00:14:39:99 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
4: virbr0-nic:  mtu 1500 qdisc pfifo_fast master virbr0 state DOWN qlen 1000
    link/ether 52:54:00:14:39:99 brd ff:ff:ff:ff:ff:ff

Check lb02's addresses:

[root@lb2 ~]# ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33:  mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:1d:ec:b0 brd ff:ff:ff:ff:ff:ff
    inet 192.168.18.151/24 brd 192.168.35.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet 192.168.18.100/24 scope global secondary ens33                #the VIP has floated over to lb02
       valid_lft forever preferred_lft forever
    inet6 fe80::6ec5:6d7:1b18:466e/64 scope link tentative dadfailed 
       valid_lft forever preferred_lft forever
    inet6 fe80::2a3:b621:ca01:463e/64 scope link tentative dadfailed 
       valid_lft forever preferred_lft forever
    inet6 fe80::d4e2:ef9e:6820:145a/64 scope link tentative dadfailed 
       valid_lft forever preferred_lft forever
3: virbr0:  mtu 1500 qdisc noqueue state DOWN qlen 1000
    link/ether 52:54:00:14:39:99 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
4: virbr0-nic:  mtu 1500 qdisc pfifo_fast master virbr0 state DOWN qlen 1000
    link/ether 52:54:00:14:39:99 brd ff:ff:ff:ff:ff:ff

Recovery: on lb01, start the nginx service and then the keepalived service

[root@lb1 ~]# systemctl start nginx
[root@lb1 ~]# systemctl start keepalived.service 
[root@lb1 ~]# ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33:  mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:ba:e6:18 brd ff:ff:ff:ff:ff:ff
    inet 192.168.18.150/24 brd 192.168.35.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet 192.168.18.100/24 scope global secondary ens33               #the VIP has floated back to lb01
       valid_lft forever preferred_lft forever
    inet6 fe80::6ec5:6d7:1b18:466e/64 scope link tentative dadfailed 
       valid_lft forever preferred_lft forever
    inet6 fe80::2a3:b621:ca01:463e/64 scope link tentative dadfailed 
       valid_lft forever preferred_lft forever
    inet6 fe80::d4e2:ef9e:6820:145a/64 scope link tentative dadfailed 
       valid_lft forever preferred_lft forever
3: virbr0:  mtu 1500 qdisc noqueue state DOWN qlen 1000
    link/ether 52:54:00:14:39:99 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
4: virbr0-nic:  mtu 1500 qdisc pfifo_fast master virbr0 state DOWN qlen 1000
    link/ether 52:54:00:14:39:99 brd ff:ff:ff:ff:ff:ff

Since the VIP is back on lb01, browsing to the VIP should show the nginx home page containing "master".

Binding the nodes to the VIP address

1. Modify the node configuration files to use the unified VIP

[root@node1 ~]# vim /opt/kubernetes/cfg/bootstrap.kubeconfig
[root@node1 ~]# vim /opt/kubernetes/cfg/kubelet.kubeconfig
[root@node1 ~]# vim /opt/kubernetes/cfg/kube-proxy.kubeconfig

#Change the server address in all three files to the VIP

server: https://192.168.18.100:6443
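Rather than editing the three kubeconfig files by hand, one sed pass can retarget them all. A minimal sketch; the temp directory and demo content stand in for /opt/kubernetes/cfg on a real node:

```shell
cfg_dir=$(mktemp -d)   # stands in for /opt/kubernetes/cfg on each node
for f in bootstrap.kubeconfig kubelet.kubeconfig kube-proxy.kubeconfig; do
    echo '    server: https://192.168.18.128:6443' > "$cfg_dir/$f"   # demo content only
done
# point every kubeconfig at the VIP instead of master1's address
sed -i 's#https://192\.168\.18\.128:6443#https://192.168.18.100:6443#' \
    "$cfg_dir"/*.kubeconfig
grep server: "$cfg_dir"/*.kubeconfig
```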

2. After the replacement, verify the files and restart the services

[root@node1 ~]# cd /opt/kubernetes/cfg/
[root@node1 cfg]# grep 100 *
bootstrap.kubeconfig:    server: https://192.168.18.100:6443
kubelet.kubeconfig:    server: https://192.168.18.100:6443
kube-proxy.kubeconfig:    server: https://192.168.18.100:6443

[root@node1 cfg]# systemctl restart kubelet.service
[root@node1 cfg]# systemctl restart kube-proxy.service

3. On lb01, check nginx's k8s access log

[root@lb1 ~]# tail /var/log/nginx/k8s-access.log
192.168.18.130 192.168.18.128:6443 - [07/Feb/2020:14:18:54 +0800] 200 1119
192.168.18.130 192.168.18.140:6443 - [07/Feb/2020:14:18:54 +0800] 200 1119
192.168.18.129 192.168.18.128:6443 - [07/Feb/2020:14:18:57 +0800] 200 1120
192.168.18.129 192.168.18.140:6443 - [07/Feb/2020:14:18:57 +0800] 200 1120
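Given the log_format defined earlier ($remote_addr $upstream_addr - ...), field 2 of each line is the backend that served the request, so a quick awk tally shows how requests were spread across the two apiservers. The sample file below just replays the four lines above:

```shell
log=$(mktemp)   # stands in for /var/log/nginx/k8s-access.log
cat > "$log" <<'EOF'
192.168.18.130 192.168.18.128:6443 - [07/Feb/2020:14:18:54 +0800] 200 1119
192.168.18.130 192.168.18.140:6443 - [07/Feb/2020:14:18:54 +0800] 200 1119
192.168.18.129 192.168.18.128:6443 - [07/Feb/2020:14:18:57 +0800] 200 1120
192.168.18.129 192.168.18.140:6443 - [07/Feb/2020:14:18:57 +0800] 200 1120
EOF
# count hits per upstream (field 2 = $upstream_addr)
summary=$(awk '{ hits[$2]++ } END { for (b in hits) print b, hits[b] }' "$log")
echo "$summary"
```

Both backends receiving the same count confirms the default round-robin balancing.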

4. On master1

#Test creating a pod
[root@master1 ~]# kubectl run nginx --image=nginx
kubectl run --generator=deployment/apps.v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl create instead.
deployment.apps/nginx created

#Check the status
[root@master1 ~]# kubectl get pods
NAME                    READY   STATUS              RESTARTS   AGE
nginx-dbddb74b8-7hdfj   0/1     ContainerCreating   0          32s
#STATUS ContainerCreating means the pod is still being created

[root@master1 ~]# kubectl get pods
NAME                    READY   STATUS    RESTARTS   AGE
nginx-dbddb74b8-7hdfj   1/1     Running   0          73s
#STATUS Running means creation is complete and the pod is running

#Note: viewing logs
[root@master1 ~]# kubectl logs nginx-dbddb74b8-7hdfj
Error from server (Forbidden): Forbidden (user=system:anonymous, verb=get, resource=nodes, subresource=proxy) ( pods/log nginx-dbddb74b8-7hdfj)
#The logs cannot be viewed yet; permission must be granted first

#Bind the cluster's anonymous user to the cluster-admin role
[root@master1 ~]# kubectl create clusterrolebinding cluster-system-anonymous --clusterrole=cluster-admin --user=system:anonymous
clusterrolebinding.rbac.authorization.k8s.io/cluster-system-anonymous created
[root@master1 ~]# kubectl logs nginx-dbddb74b8-7hdfj        #no more error now

#Check the pod network
[root@master1 ~]# kubectl get pods -o wide
NAME                    READY   STATUS    RESTARTS   AGE   IP            NODE             NOMINATED NODE
nginx-dbddb74b8-7hdfj   1/1     Running   0          20m   172.17.32.2   192.168.18.129   <none>

5. From node1, which is on the matching subnet, the pod can be accessed directly

[root@node1 ~]# curl 172.17.32.2

Welcome to nginx!

If you see this page, the nginx web server is successfully installed and working. Further configuration is required.

For online documentation and support please refer to nginx.org.
Commercial support is available at nginx.com.

Thank you for using nginx.

#This is the default page of the nginx instance running inside the container

Accessing the pod generates a log entry, which we can view back on master1:

[root@master1 ~]# kubectl logs nginx-dbddb74b8-7hdfj
172.17.32.1 - - [07/Feb/2020:06:52:53 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.29.0" "-"
#This shows node1's access coming through its gateway address (172.17.32.1)
