Deploying a Highly Available Multi-Master Kubernetes Cluster with kubeadm


Kubernetes stores its cluster state in etcd, and etcd relies on majority (quorum) consensus, so for redundancy you need at least 3 etcd servers. For cost reasons an odd number is preferred: 5 and 6 members give the same fault tolerance, both surviving only 2 failed servers.
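
As a quick check of the quorum arithmetic the previous paragraph relies on (quorum = floor(n/2) + 1, failures tolerated = n - quorum):

members   quorum   failures tolerated
3         2        1
4         3        1
5         3        2
6         4        2
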
Figure: k8s multi-master HA architecture (kubeadm HA topology with stacked etcd): kubeadmhatopologystackedetcd.png

Environment preparation

Eight VMs were prepared with IPs 192.168.200.203-210, mapped to hostnames k8s01-k8s08.
k8s01, k8s03 and k8s06 are masters; the rest are worker nodes. All run CentOS 7.5.
HAProxy (192.168.200.135) sits in front as the proxy for port 6443/TCP.
(This article does not set up an active/standby LB pair; if needed, that can be done with haproxy + keepalived and a VIP.)

Hostname             IP               Role
k8s01.axhome.local   192.168.200.203  master
k8s02.axhome.local   192.168.200.204  node
k8s03.axhome.local   192.168.200.205  master
k8s04.axhome.local   192.168.200.206  node
k8s05.axhome.local   192.168.200.207  node
k8s06.axhome.local   192.168.200.208  master
k8s07.axhome.local   192.168.200.209  node
k8s08.axhome.local   192.168.200.210  node
haproxy              192.168.200.135  HAProxy

System initialization

System parameter tuning

Add hosts entries

On all nodes:

[root@k8s01 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.200.203 k8s01 k8s01.axhome.local
192.168.200.204 k8s02 k8s02.axhome.local
192.168.200.205 k8s03 k8s03.axhome.local
192.168.200.206 k8s04 k8s04.axhome.local
192.168.200.207 k8s05 k8s05.axhome.local
192.168.200.208 k8s06 k8s06.axhome.local
192.168.200.209 k8s07 k8s07.axhome.local
192.168.200.210 k8s08 k8s08.axhome.local
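
The same hosts file has to be present on every node. One way to distribute it (a sketch, assuming root SSH access from k8s01 to the other nodes):

[root@k8s01 ~]# for ip in 192.168.200.{204..210}; do scp /etc/hosts root@$ip:/etc/hosts; done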

Disable swap

# disable swap temporarily
[root@k8s01 ~]# swapoff -a
# disable swap permanently (comment out the swap line in fstab)
[root@k8s01 ~]# sed -ri 's/.*swap.*/#&/' /etc/fstab
# apply sysctl settings
[root@k8s01 ~]# sysctl --system
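
To confirm swap is really off, the Swap line of free should show 0 on every node:

[root@k8s01 ~]# free -h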

Disable the firewall

[root@k8s01 ~]# systemctl stop firewalld
[root@k8s01 ~]# systemctl disable firewalld

Disable SELinux

# disable permanently
[root@k8s01 ~]# sed -i 's/enforcing/disabled/' /etc/selinux/config
# disable temporarily
[root@k8s01 ~]# setenforce 0

Adjust kernel parameters (required by flannel)

[root@k8s01 ~]# cat /etc/sysctl.d/k8s.conf      
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1   
vm.swappiness=0
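
The bridge-nf-call settings only take effect once the br_netfilter kernel module is loaded. A minimal way to load it now, load it on boot, and apply the sysctl settings:

[root@k8s01 ~]# modprobe br_netfilter
[root@k8s01 ~]# echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
[root@k8s01 ~]# sysctl --system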

Install Docker

[root@k8s01 ~]# yum install -y yum-utils device-mapper-persistent-data lvm2 epel-release
[root@k8s01 ~]# yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
[root@k8s01 ~]# yum install -y docker-ce
[root@k8s01 ~]# systemctl start docker
[root@k8s01 ~]# systemctl enable docker

The default registry is overseas and image pulls can be slow, so configuring a domestic registry mirror is recommended.

[root@k8s01 ~]# cat /etc/docker/daemon.json
{
  "registry-mirrors": ["https://ik8akj45.mirror.aliyuncs.com"]
}
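
Docker must be restarted for the daemon.json change to take effect; the configured mirror then shows up in docker info:

[root@k8s01 ~]# systemctl restart docker
[root@k8s01 ~]# docker info | grep -A1 "Registry Mirrors"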

Deploy a load balancer (LB) for the apiserver

HAProxy is used here; nginx also works as long as it is built with TCP (stream) proxying support, so pick whichever you prefer. The LB must be up and working before the K8S cluster is deployed, because its address and port are written into the cluster configuration as the apiserver endpoint.
HAProxy is deployed on 192.168.200.135 as the LB, so the apiserver address is https://192.168.200.135:6443.

[root@haproxy ~]# yum install haproxy -y
[root@haproxy ~]# cat /etc/haproxy/haproxy.cfg
global
    ulimit-n 51200
defaults
    log global
    mode tcp
    option dontlognull
    timeout connect 1000
    timeout client 150000
    timeout server 150000
frontend port-in
    bind *:6443
    default_backend port-out
backend port-out
    server test01 192.168.200.203:6443 maxconn 20480 check inter 10s
    server test02 192.168.200.205:6443 maxconn 20480 check inter 10s
    server test03 192.168.200.208:6443 maxconn 20480 check inter 10s
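
Then enable and start haproxy and confirm it is listening on 6443:

[root@haproxy ~]# systemctl enable haproxy
[root@haproxy ~]# systemctl start haproxy
[root@haproxy ~]# ss -tnlp | grep 6443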

Cluster deployment

Add the Kubernetes yum repo

The Google repo cannot be reached from mainland China without a proxy, so the Aliyun mirror is used instead (its versions lag slightly behind). The kube* packages are excluded in the repo config to prevent accidental upgrades, which is why the yum install below needs the --disableexcludes option.

[root@k8s01 ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF

Install the required packages

[root@k8s01 ~]# yum -y install kubeadm-1.15.4-0.x86_64 kubelet-1.15.4-0.x86_64 kubectl-1.15.4-0.x86_64 --disableexcludes=kubernetes
[root@k8s01 ~]# systemctl enable kubelet && systemctl start kubelet
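
Check the installed version; this is the value to use for kubernetesVersion in the kubeadm config below:

[root@k8s01 ~]# kubeadm version -o short
v1.15.4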

Create the kubeadm-config.yaml file

On the first master (192.168.200.203), create the following kubeadm-config.yaml file:

[root@k8s01 ~]# cat <<EOF> kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.15.4
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
apiServer:
  certSANs:
  - "192.168.200.135"
controlPlaneEndpoint: "192.168.200.135:6443"
networking:
  podSubnet: 10.244.0.0/16
EOF

Set LOAD_BALANCER_DNS and LOAD_BALANCER_PORT to match your environment (here 192.168.200.135:6443), and set kubernetesVersion to the version actually installed (check with kubeadm version).

[root@k8s01 ~]# kubeadm init --config=kubeadm-config.yaml

When it finishes, output similar to the following is printed:

Your Kubernetes master has initialized successfully!
 
To start using your cluster, you need to run the following as a regular user:
 
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
 
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
 
You can now join any number of machines by running the following on each node
as root:
kubeadm join 192.168.200.135:6443 --token ct8r7t.pxkyqobx8kntxhv8 --discovery-token-ca-cert-hash sha256:5ccb2a264ad334ff8921913fe93d29a4c2d6a9acdb9a09d8d2d9e12aedc94d53
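
Record the join command above. The bootstrap token in it expires after 24 hours by default; if it is lost or expired, a fresh worker join command can be printed on a master with:

[root@k8s01 ~]# kubeadm token create --print-join-command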

Install the flannel network plugin

[root@k8s01 ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml
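
After applying the manifest, wait for the flannel and coredns pods to reach the Running state:

[root@k8s01 ~]# kubectl get pods -n kube-system -o wide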

Deploy the remaining master hosts

Copy the relevant certificate files from the first master to every other host that will become a master; each master needs them. The following two scripts handle the copy, with CONTROL_PLANE_IPS listing the remaining hosts that will act as masters.

USER=root # customizable
CONTROL_PLANE_IPS="192.168.200.205 192.168.200.208"   # the remaining master nodes (k8s03 and k8s06)
for host in ${CONTROL_PLANE_IPS}; do
    scp /etc/kubernetes/pki/ca.crt "${USER}"@$host:
    scp /etc/kubernetes/pki/ca.key "${USER}"@$host:
    scp /etc/kubernetes/pki/sa.key "${USER}"@$host:
    scp /etc/kubernetes/pki/sa.pub "${USER}"@$host:
    scp /etc/kubernetes/pki/front-proxy-ca.crt "${USER}"@$host:
    scp /etc/kubernetes/pki/front-proxy-ca.key "${USER}"@$host:
    scp /etc/kubernetes/pki/etcd/ca.crt "${USER}"@$host:etcd-ca.crt
    scp /etc/kubernetes/pki/etcd/ca.key "${USER}"@$host:etcd-ca.key
    scp /etc/kubernetes/admin.conf "${USER}"@$host:
done
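
The copy script assumes password-less root SSH to the target masters; if that is not yet set up, something like the following will do (hypothetical key path, adjust as needed):

[root@k8s01 ~]# ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
[root@k8s01 ~]# for host in 192.168.200.205 192.168.200.208; do ssh-copy-id root@$host; done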

Run the following on each host being prepared as a master; edit it as needed, mainly the source paths, which depend on the user's home directory.

USER=root # customizable
# the files above were copied into ${USER}'s home directory (for root that is /root, not /home/root)
SRC_HOME=$(eval echo ~${USER})
mkdir -p /etc/kubernetes/pki/etcd
mv ${SRC_HOME}/ca.crt /etc/kubernetes/pki/
mv ${SRC_HOME}/ca.key /etc/kubernetes/pki/
mv ${SRC_HOME}/sa.pub /etc/kubernetes/pki/
mv ${SRC_HOME}/sa.key /etc/kubernetes/pki/
mv ${SRC_HOME}/front-proxy-ca.crt /etc/kubernetes/pki/
mv ${SRC_HOME}/front-proxy-ca.key /etc/kubernetes/pki/
mv ${SRC_HOME}/etcd-ca.crt /etc/kubernetes/pki/etcd/ca.crt
mv ${SRC_HOME}/etcd-ca.key /etc/kubernetes/pki/etcd/ca.key
mv ${SRC_HOME}/admin.conf /etc/kubernetes/admin.conf

After the copy, run export KUBECONFIG=/etc/kubernetes/admin.conf on each master so kubectl commands can reach the apiserver from any of them.

Set up the second master by running the join command recorded earlier, adding --experimental-control-plane so the node joins as a master (control plane). (In v1.15 this flag is deprecated in favor of --control-plane; either form works here.)

[root@k8s01 ~]# kubeadm join 192.168.200.135:6443 --token ct8r7t.pxkyqobx8kntxhv8 --discovery-token-ca-cert-hash sha256:5ccb2a264ad334ff8921913fe93d29a4c2d6a9acdb9a09d8d2d9e12aedc94d53 --experimental-control-plane

Once done, repeat the same steps to set up the third master.
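
After the new masters have joined, an apiserver and an etcd pod should appear for each of them in kube-system:

[root@k8s01 ~]# kubectl get pods -n kube-system | grep -E 'kube-apiserver|etcd'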

Join the worker nodes

Run the following on each worker node in turn:

[root@k8s01 ~]# kubeadm join 192.168.200.135:6443 --token ct8r7t.pxkyqobx8kntxhv8 --discovery-token-ca-cert-hash sha256:5ccb2a264ad334ff8921913fe93d29a4c2d6a9acdb9a09d8d2d9e12aedc94d53

Check the nodes in the cluster

[root@k8s01 ~]# kubectl get node
NAME                 STATUS   ROLES    AGE    VERSION
k8s01.axhome.local   Ready    master   258d   v1.15.4
k8s02.axhome.local   Ready    <none>   114d   v1.15.4
k8s03.axhome.local   Ready    master   258d   v1.15.4
k8s04.axhome.local   Ready    <none>   240d   v1.15.4
k8s05.axhome.local   Ready    <none>   242d   v1.15.4
k8s06.axhome.local   Ready    master   258d   v1.15.4
k8s07.axhome.local   Ready    <none>   242d   v1.15.4
k8s08.axhome.local   Ready    <none>   137d   v1.15.4

Running kubectl on worker nodes

A worker node that has joined the cluster cannot run kubectl get commands out of the box; the admin.conf file has to be copied over from a master.

[root@k8s01 ~]# mkdir -p $HOME/.kube
[root@k8s01 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s01 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
# copy the config to each worker node (create ~/.kube on the worker first if it does not exist)
[root@k8s01 ~]# scp -r .kube/config root@192.168.200.204:.kube/
[root@k8s02 ~]# kubectl get node
NAME                 STATUS   ROLES    AGE    VERSION
k8s01.axhome.local   Ready    master   258d   v1.15.4
k8s02.axhome.local   Ready    <none>   114d   v1.15.4
k8s03.axhome.local   Ready    master   258d   v1.15.4
k8s04.axhome.local   Ready    <none>   240d   v1.15.4
k8s05.axhome.local   Ready    <none>   242d   v1.15.4
k8s06.axhome.local   Ready    master   258d   v1.15.4
k8s07.axhome.local   Ready    <none>   242d   v1.15.4
k8s08.axhome.local   Ready    <none>   137d   v1.15.4

kubectl command auto-completion

[root@k8s01 ~]# yum install -y bash-completion*
# enable completion in the current shell and make it persistent via .bashrc
[root@k8s01 ~]# source <(kubectl completion bash)
[root@k8s01 ~]# echo "source <(kubectl completion bash)" >> ~/.bashrc