Environment

  • k8s version: 1.24.2
  • docker version: 19.03.13
  • kubeadm version: 1.24.2

Kubernetes stores its state in etcd, and etcd decides by majority (quorum), so redundancy requires at least 3 etcd servers. For cost reasons use an odd number: for example, 5 and 6 servers are equivalent, since both tolerate only 2 servers going down.
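
As a quick check on the quorum arithmetic (quorum = floor(n/2)+1, tolerated failures = n - quorum), a minimal bash sketch, illustration only:

#etcd fault tolerance per cluster size
[root@k8s01 ~]# for n in 3 4 5 6 7; do q=$((n/2+1)); echo "members=$n quorum=$q tolerated_failures=$((n-q))"; done
members=3 quorum=2 tolerated_failures=1
members=4 quorum=3 tolerated_failures=1
members=5 quorum=3 tolerated_failures=2
members=6 quorum=4 tolerated_failures=2
members=7 quorum=4 tolerated_failures=3
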
k8s multi-master HA architecture diagram
[Figure: kubeadm HA topology with stacked etcd]

Five virtual machines were prepared, IPs 192.168.200.203-207, mapped to hostnames k8s01-k8s05. k8s01, k8s02 and k8s03 are masters, the rest are worker nodes, and all run CentOS 7.6. Nginx is used as the proxy in front of the apiservers, forwarding port 6443/TCP (this article does not set up an active/standby pair; if you need one, haproxy + keepalived with a VIP will do).

Hostname IP Role
k8s01.k8s.local 192.168.200.203 master
k8s02.k8s.local 192.168.200.204 master
k8s03.k8s.local 192.168.200.205 master
k8s04.k8s.local 192.168.200.206 node
k8s05.k8s.local 192.168.200.207 node
master-lb 192.168.200.135 nginx listen address

Notes:

  • There are 3 master nodes for high availability, with master traffic proxied through nginx; the masters also run the node components
  • There are 2 worker nodes
  • nginx is installed on every node, listening on 127.0.0.1:6443
  • The OS is CentOS 7.x

Basic environment configuration

Configure hosts on all nodes

[root@k8s01 ~]# cat >>/etc/hosts<<EOF
192.168.200.203 k8s01.k8s.local
192.168.200.204 k8s02.k8s.local
192.168.200.205 k8s03.k8s.local
192.168.200.206 k8s04.k8s.local
192.168.200.207 k8s05.k8s.local
EOF
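
An optional check that each entry resolves as expected (getent reads /etc/hosts through NSS):

#verify name resolution (optional)
[root@k8s01 ~]# for h in k8s01 k8s02 k8s03 k8s04 k8s05; do getent hosts ${h}.k8s.local; done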

Disable the firewall, selinux, dnsmasq, postfix, NetworkManager, and swap on all nodes

#disable the firewall
[root@k8s01 ~]# systemctl disable --now firewalld
#disable dnsmasq
[root@k8s01 ~]# systemctl disable --now dnsmasq
#disable postfix
[root@k8s01 ~]# systemctl disable --now postfix
#disable NetworkManager
[root@k8s01 ~]# systemctl disable --now NetworkManager
#disable selinux
[root@k8s01 ~]# sed -ri 's/(^SELINUX=).*/\1disabled/' /etc/selinux/config
[root@k8s01 ~]# setenforce 0
#disable swap
[root@k8s01 ~]# sed -ri 's@(^.*swap *swap.*0 0$)@#\1@' /etc/fstab
[root@k8s01 ~]# swapoff -a
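
A quick check that selinux and swap are really off (getenforce reports Permissive until the next reboot, Disabled afterwards):

[root@k8s01 ~]# getenforce
Permissive
[root@k8s01 ~]# free -m | grep -i swap
Swap:             0           0           0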

Adjust resource limits on all nodes

[root@k8s01 ~]# cat > /etc/security/limits.conf <<EOF
*       soft        core        unlimited
*       hard        core        unlimited
*       soft        nproc       1000000
*       hard        nproc       1000000
*       soft        nofile      1000000
*       hard        nofile      1000000
*       soft        memlock     32000
*       hard        memlock     32000
*       soft        msgqueue    8192000
EOF
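
limits.conf is applied by pam_limits at login, so the new values only show up in fresh sessions; after logging in again:

[root@k8s01 ~]# ulimit -n
1000000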

SSH authentication

[root@k8s01 ~]# yum install -y sshpass
ssh-keygen -f /root/.ssh/id_rsa -P ''
export IP="192.168.200.203 192.168.200.204 192.168.200.205 192.168.200.206 192.168.200.207"
export SSHPASS=123456
for HOST in $IP;do
     sshpass -e ssh-copy-id -o StrictHostKeyChecking=no $HOST
done
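
To confirm passwordless login works to every host (reusing the $IP list above; BatchMode makes ssh fail instead of prompting for a password):

[root@k8s01 ~]# for HOST in $IP; do ssh -o BatchMode=yes $HOST hostname; done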

Upgrade the system and kernel

See the separate article: Upgrading the CentOS 7 kernel

Install base packages

#install base packages
[root@k8s01 ~]# yum install -y curl conntrack ipvsadm ipset iptables jq sysstat libseccomp rsync wget psmisc vim net-tools telnet

Tune journald logging

[root@k8s01 ~]# mkdir -p /var/log/journal
[root@k8s01 ~]# mkdir -p /etc/systemd/journald.conf.d
[root@k8s01 ~]# cat >/etc/systemd/journald.conf.d/99-prophet.conf <<EOF
[Journal]
# persist logs to disk
Storage=persistent
# compress rotated logs
Compress=yes
SyncIntervalSec=5m
RateLimitInterval=30s
RateLimitBurst=1000
# cap total disk usage at 1G
SystemMaxUse=1G
# cap each journal file at 10M
SystemMaxFileSize=10M
# keep logs for 2 weeks
MaxRetentionSec=2week
# do not forward to syslog
ForwardToSyslog=no
EOF
[root@k8s01 ~]# systemctl restart systemd-journald && systemctl enable systemd-journald
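
After the restart, confirm the persistent journal is in use (the reported size will vary):

[root@k8s01 ~]# journalctl --disk-usage
[root@k8s01 ~]# ls /var/log/journal/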

Configure the Kubernetes yum repository

[root@k8s01 ~]# cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=kubernetes
baseurl=https://mirrors.cloud.tencent.com/kubernetes/yum/repos/kubernetes-el7-x86_64
gpgcheck=0
EOF
#verify the repo is usable
[root@k8s01 ~]# yum list --showduplicates | grep kubeadm

Install docker

Kubernetes 1.24+ changelog: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.24.md
Since 1.24 removed dockershim, using docker as the container runtime requires the additional cri-dockerd shim. Download: https://github.com/Mirantis/cri-dockerd/releases

#install on all nodes
[root@k8s01 ~]# yum install container-selinux -y
[root@k8s01 ~]# yum install docker-ce-19.03.13-3.el7.x86_64 -y
[root@k8s01 ~]# systemctl enable --now docker
#verify
[root@k8s01 ~]# docker info

#configure docker; newer kubelet versions expect the systemd cgroup driver, so switch docker's CgroupDriver to systemd
[root@k8s01 ~]# cat >/etc/docker/daemon.json <<EOF
{
   "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
[root@k8s01 ~]# systemctl restart docker
#verify
[root@k8s01 ~]# docker info
...
 Cgroup Driver: systemd
...

Install cri-dockerd; download the rpm package first

[root@k8s01 ~]# wget https://github.com/Mirantis/cri-dockerd/releases/download/v0.2.3/cri-dockerd-0.2.3-3.el7.x86_64.rpm
[root@k8s01 ~]# rpm -ivh cri-dockerd-0.2.3-3.el7.x86_64.rpm
#edit the service unit
[root@k8s01 ~]# vim /usr/lib/systemd/system/cri-docker.service
ExecStart=/usr/bin/cri-dockerd --container-runtime-endpoint fd:// --network-plugin=cni --pod-infra-container-image=qazwsxqwe123/pause:3.7
#start (daemon-reload first, since the unit file was edited)
[root@k8s01 ~]# systemctl daemon-reload
[root@k8s01 ~]# systemctl enable --now cri-docker.socket
[root@k8s01 ~]# systemctl enable --now cri-docker.service
#verify
[root@k8s01 ~]# systemctl status cri-docker

Install the Kubernetes components

#install kubeadm on all nodes
[root@k8s01 ~]# yum list kubeadm.x86_64 --showduplicates | sort -r #list all available versions
#install 1.24.2
[root@k8s01 ~]#  yum install kubeadm-1.24.2-0 kubelet-1.24.2-0 kubectl-1.24.2-0 -y
#configure kubelet to use the same cgroup driver as docker
[root@k8s01 ~]#  DOCKER_CGROUPS=$(docker info | grep 'Cgroup' | cut -d' ' -f4)
[root@k8s01 ~]#  cat >/etc/sysconfig/kubelet<<EOF
KUBELET_EXTRA_ARGS="--cgroup-driver=$DOCKER_CGROUPS"
EOF
#restart kubelet (it will crash-loop until kubeadm init/join runs, which is expected)
[root@k8s01 ~]#  systemctl daemon-reload && systemctl restart kubelet && systemctl enable kubelet
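
At this point the CRI endpoint can be exercised with crictl, which comes in via kubeadm's cri-tools dependency; pointing it at cri-dockerd should report docker as the runtime:

[root@k8s01 ~]# crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version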

Install the HA component nginx

#extract the prebuilt nginx binary (it must include the stream module used below)
[root@nginx ~]#  tar xf nginx.tar.gz -C /usr/bin/
#generate the config file
[root@nginx ~]#  mkdir /etc/nginx -p
[root@nginx ~]#  mkdir /var/log/nginx -p
[root@nginx ~]#  cat >/etc/nginx/nginx.conf<<EOF 
user root;
worker_processes 1;

error_log  /var/log/nginx/error.log warn;
pid /var/log/nginx/nginx.pid;

events {
    worker_connections  3000;
}

stream {
    upstream apiservers {
        server 192.168.200.203:6443  max_fails=2 fail_timeout=3s;
        server 192.168.200.204:6443  max_fails=2 fail_timeout=3s;
        server 192.168.200.205:6443  max_fails=2 fail_timeout=3s;
    }

    server {
        listen 127.0.0.1:6443;
        proxy_connect_timeout 1s;
        proxy_pass apiservers;
    }
}
EOF
#create the systemd unit file
[root@nginx ~]# cat >/etc/systemd/system/nginx.service <<EOF
[Unit]
Description=nginx proxy
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=forking
ExecStartPre=/usr/bin/nginx -c /etc/nginx/nginx.conf -p /etc/nginx -t
ExecStart=/usr/bin/nginx -c /etc/nginx/nginx.conf -p /etc/nginx
ExecReload=/usr/bin/nginx -c /etc/nginx/nginx.conf -p /etc/nginx -s reload
PrivateTmp=true
Restart=always
RestartSec=15
StartLimitInterval=0
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
#start
[root@nginx ~]#  systemctl enable --now nginx.service
#verify
[root@nginx ~]# ss -ntl | grep 6443
LISTEN     0      511    127.0.0.1:6443                    *:*
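
Once the apiservers exist (after kubeadm init below), the local proxy path can be checked end to end; /healthz is readable anonymously in a default kubeadm cluster:

[root@k8s01 ~]# curl -k https://127.0.0.1:6443/healthz
ok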

Bootstrap the k8s cluster

Initialization config file

#generate the kubeadm config file
[root@k8s01 ~]#  cat >kubeadm-config.yaml<<EOF
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.24.2
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
apiServer:
  certSANs:
  - 127.0.0.1
controlPlaneEndpoint: "127.0.0.1:6443"
networking:
  # This CIDR is a Calico default. Substitute or remove for your CNI provider.
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/16
  dnsDomain: cluster.local
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
nodeRegistration:
  criSocket: "unix:///var/run/cri-dockerd.sock"
EOF
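
Optionally pre-pull the control-plane images on each master so init itself runs faster; kubeadm reads the imageRepository and criSocket from the same config file:

[root@k8s01 ~]# kubeadm config images pull --config kubeadm-config.yaml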

Initialize the k8s cluster

Run this on just one master node.

[root@k8s01 ~]#  kubeadm init --config kubeadm-config.yaml  --upload-certs
#output like the following indicates success
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join 127.0.0.1:6443 --token 46uf0z.ff0hirzur8m219u5 \
	--discovery-token-ca-cert-hash sha256:f9d573576a51756c97f5d5a30f8b0d69340ec07f750131232d80c4dfcaa3bb82 \
	--control-plane --certificate-key ea2d459317eb0136274caa3d262d12439fcc3a90d58a67fec8bb1dba773f15a4

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 127.0.0.1:6443 --token 46uf0z.ff0hirzur8m219u5 \
	--discovery-token-ca-cert-hash sha256:f9d573576a51756c97f5d5a30f8b0d69340ec07f750131232d80c4dfcaa3bb82
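
Before running kubectl on k8s01, copy the admin kubeconfig exactly as the init output suggests:

[root@k8s01 ~]# mkdir -p $HOME/.kube
[root@k8s01 ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s01 ~]# kubectl get nodes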

Join the other masters

Use the control-plane join command printed by your own kubeadm init (the token and hash below come from a different run), appending the --cri-socket flag:

[root@k8s02 ~]#  kubeadm join 127.0.0.1:6443 --token eo4ss8.1g4eqkvztlcve6qr \
 --discovery-token-ca-cert-hash sha256:59b45d6cd9d1da04db50f687620ba19c1f67e04ad4484f8553f9b3b656560e49 \
 --control-plane --certificate-key ba787e9ecc748023e8011b91a081b62283fadfc15582ff11b0a5c5f92f6c552f \
 --cri-socket unix:///var/run/cri-dockerd.sock

Join the worker nodes

Again use your own join command, appending --cri-socket:

[root@k8s04 ~]# kubeadm join 127.0.0.1:6443 --token eo4ss8.1g4eqkvztlcve6qr \
 --discovery-token-ca-cert-hash sha256:59b45d6cd9d1da04db50f687620ba19c1f67e04ad4484f8553f9b3b656560e49 \
 --cri-socket unix:///var/run/cri-dockerd.sock
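
After all joins complete, the nodes register but stay NotReady until the CNI plugin (calico, below) is installed:

[root@k8s01 ~]# kubectl get nodes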

Install other components

Install the Calico network plugin

#download the yaml file
[root@k8s01 ~]#  wget https://docs.projectcalico.org/v3.15/manifests/calico-typha.yaml

#edit the config
[root@k8s01 ~]#  cat calico-typha.yaml
...
            - name: CALICO_IPV4POOL_CIDR
              value: "10.244.0.0/16"          #set this to the pod CIDR
...

#deploy calico
[root@k8s01 ~]#  kubectl apply -f calico-typha.yaml

#install the calicoctl tool
[root@k8s01 ~]#  curl -O -L  https://github.com/projectcalico/calicoctl/releases/download/v3.15.5/calicoctl
[root@k8s01 ~]#  chmod +x calicoctl 
[root@k8s01 ~]#  mv calicoctl /usr/bin/

#configure calicoctl
[root@k8s01 ~]#  mkdir /etc/calico -p
[root@k8s01 ~]#  cat >/etc/calico/calicoctl.cfg <<EOF
apiVersion: projectcalico.org/v3
kind: CalicoAPIConfig
metadata:
spec:
  datastoreType: "kubernetes"
  kubeconfig: "/root/.kube/config"
EOF

#verify
[root@k8s01 ~]#  calicoctl node status
Calico process is running.
IPv4 BGP status
+-----------------+-------------------+-------+----------+-------------+
|  PEER ADDRESS   |     PEER TYPE     | STATE |  SINCE   |    INFO     |
+-----------------+-------------------+-------+----------+-------------+
| 192.168.200.203 | node-to-node mesh | up    | 13:03:48 | Established |
| 192.168.200.204 | node-to-node mesh | up    | 13:03:48 | Established |
| 192.168.200.205 | node-to-node mesh | up    | 13:03:48 | Established |
| 192.168.200.206 | node-to-node mesh | up    | 13:03:47 | Established |
| 192.168.200.207 | node-to-node mesh | up    | 13:03:47 | Established |
+-----------------+-------------------+-------+----------+-------------+
IPv6 BGP status
No IPv6 peers found.

Install metrics-server

#download the yaml file
[root@k8s01 ~]# wget https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.6.1/components.yaml
#edit the container args; kubeadm's kubelet serving certificates are self-signed, hence --kubelet-insecure-tls
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        - --kubelet-insecure-tls  #add this flag

#deploy
[root@k8s01 ~]#  kubectl apply -f components.yaml

#verify
[root@k8s01 ~]# kubectl top node
NAME              CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
k8s01.k8s.local   105m         5%     1316Mi          34%
k8s02.k8s.local   95m          4%     1097Mi          28%
k8s03.k8s.local   95m          4%     1105Mi          28%
k8s04.k8s.local   71m          1%     846Mi           22%
k8s05.k8s.local   78m          1%     803Mi           21%

Install the dashboard

Note: since 1.24, creating a ServiceAccount no longer automatically creates a token Secret the way it used to; the Secret must be created manually.

#download the yaml file
[root@k8s01 ~]# wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.6.0/aio/deploy/recommended.yaml

#edit the yaml file
[root@k8s01 ~]# vim recommended.yaml
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort  #add
  ports:
    - port: 443 
      targetPort: 8443
      nodePort: 30001  #add
  selector:
    k8s-app: kubernetes-dashboard

#deploy
[root@k8s01 ~]# kubectl apply -f recommended.yaml

#create an admin user
[root@k8s01 ~]# cat >admin.yaml<<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: v1
kind: Secret
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
  annotations:
    kubernetes.io/service-account.name: "admin-user"
type: kubernetes.io/service-account-token
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
EOF
[root@k8s01 ~]#  kubectl apply -f admin.yaml

#get the user token (the Secret created above is named admin-user)
[root@k8s01 ~]# kubectl describe secrets -n kubernetes-dashboard admin-user
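
Alternatively, pull the token straight out of the Secret with jsonpath and decode it:

[root@k8s01 ~]# kubectl -n kubernetes-dashboard get secret admin-user -o jsonpath='{.data.token}' | base64 -d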

Switch kube-proxy to ipvs mode

Change kube-proxy to ipvs mode; since no proxy mode was set when the cluster was initialized, edit it by hand:

#edit the configmap on a control-plane node
[root@k8s01 ~]# kubectl edit cm -n kube-system kube-proxy
mode: "ipvs"  #empty by default, which means iptables mode

#roll the kube-proxy pods (bumping an annotation makes the DaemonSet recreate them)
[root@k8s01 ~]# kubectl patch daemonset kube-proxy -p "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"date\":\"`date +'%s'`\"}}}}}" -n kube-system

#verify the proxy mode
[root@k8s01 ~]# curl 127.0.0.1:10249/proxyMode
ipvs
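
ipvsadm (installed with the base packages earlier) should now list the virtual servers that kube-proxy programs, starting with the kubernetes service VIP:

[root@k8s01 ~]# ipvsadm -Ln | head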