Under normal circumstances, Kubernetes does not persist data stored inside a pod: if the pod is deleted, its data is deleted with it. Stateful services such as MySQL and Redis therefore need backend storage to hold their data. nfs-provisioner is an automatic volume provisioner that uses an existing, already-configured NFS server to dynamically provision Kubernetes PersistentVolumes via PersistentVolumeClaims.

  • PVs are provisioned on the NFS server under directories named namespace-pvcName-pvName
  • When a PV is reclaimed, its directory is renamed to archived-namespace-pvcName-pvName (on the NFS server)
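
For example, a PVC named test-claim1 in the default namespace bound to a PV named pvc-a1b2c3d4 would show up on the NFS server roughly like this (hypothetical names, for illustration only):

/storage/default-test-claim1-pvc-a1b2c3d4             # while the PV exists
/storage/archived-default-test-claim1-pvc-a1b2c3d4    # after reclaim, with archiveOnDelete enabled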

Setting up NFS

nfs-server: 192.168.200.233
nfs-client: 192.168.200.203-210

Steps to install the NFS service on the server side:

Install the NFS packages on all nodes (nfs-utils on RHEL/CentOS; nfs-common is the Debian/Ubuntu equivalent and is not available through yum):

[root@nfs ~]# yum -y install nfs-utils

Create the shared directory on nfs-server:

[root@nfs ~]# mkdir /storage

Set permissions on the shared directory (a directory needs the execute bit to be traversed, so use 0777 rather than 0666):

[root@nfs ~]# chmod 777 /storage

Edit the exports file:

[root@nfs ~]# cat /etc/exports
/storage 192.168.200.0/24(rw,no_root_squash,no_all_squash,sync)
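
Here rw grants read-write access to the 192.168.200.0/24 network, no_root_squash keeps root's identity on the share (the provisioner relies on this to create per-PV directories), no_all_squash likewise preserves non-root UIDs, and sync forces writes to disk before they are acknowledged.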

Reload the NFS service so the configuration takes effect:

[root@nfs ~]# systemctl reload nfs

Start rpcbind and nfs (order matters: rpcbind must come first):

[root@nfs ~]# systemctl start rpcbind
[root@nfs ~]# systemctl start nfs
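
To have both services come back after a reboot, enable them as well:

[root@nfs ~]# systemctl enable rpcbind nfs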

As preparation, we have now set up an NFS server on the nfs-server node, exporting the directory /storage:

[root@nfs storage]# showmount -e
Export list for nfs:
/storage 192.168.200.0/24

NFS client mount configuration

Use the showmount command to view the NFS server's exports. The output format is "exported directory name  allowed client addresses":

[root@k8s01 nfs-provisioner]# showmount -e 192.168.200.233
Export list for 192.168.200.233:
/storage 192.168.200.0/24
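
Before wiring the export into Kubernetes, it is worth mounting it manually from a client to confirm read-write access (a quick sanity check; /mnt serves as a scratch mount point here):

[root@k8s01 ~]# mount -t nfs 192.168.200.233:/storage /mnt
[root@k8s01 ~]# touch /mnt/test && rm /mnt/test && umount /mnt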

Usage of the showmount command:

Option   Purpose
-e       Show the NFS server's export list
-a       Show the NFS resources currently mounted on the local host
-v       Show the version number

Deploying nfs-provisioner on Kubernetes

[root@k8s01 nfs-provisioner]# vim storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: nfs-storage-231 # or choose another name; it must match the deployment's PROVISIONER_NAME env var
parameters:
  archiveOnDelete: "false"
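
With archiveOnDelete set to "false", the directory backing a PV is deleted along with the PVC; set it to "true" and the directory is instead renamed with the archived- prefix described earlier, so the data can still be recovered on the NFS server.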
[root@k8s01 nfs-provisioner]# vim rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
[root@k8s01 nfs-provisioner]# vim deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          #image: k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
          image: eipwork/nfs-subdir-external-provisioner:v4.0.2   # mirror for clusters that cannot pull from k8s.gcr.io
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: nfs-storage-231  # must match the StorageClass's provisioner field
            - name: NFS_SERVER    # NFS server address
              value: 192.168.200.233
            - name: NFS_PATH  # NFS export path
              value: /storage
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.200.233  # NFS server address
            path: /storage   # NFS export path
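
The provisioner mounts the whole export at /persistentvolumes inside its container and creates one subdirectory there for every PV it provisions; PROVISIONER_NAME is the string that the provisioner field of the StorageClass must match.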

Deploy

[root@k8s01 ~]# kubectl apply -f rbac.yaml
[root@k8s01 ~]# kubectl apply -f storageclass.yaml
[root@k8s01 ~]# kubectl apply -f deployment.yaml

Test

[root@k8s01 nfs-provisioner]# cat test-claim.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim1
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: managed-nfs-storage

#create the PVC
[root@k8s01 nfs-provisioner]# kubectl apply -f  test-claim.yaml
persistentvolumeclaim/test-claim1 created
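
If everything is wired up correctly, the provisioner should bind the claim within a few seconds; this can be checked with the usual commands (output omitted), and a matching default-test-claim1-pvc-... directory should appear under /storage on the NFS server:

[root@k8s01 nfs-provisioner]# kubectl get pvc test-claim1
[root@k8s01 nfs-provisioner]# kubectl get pv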

Provisioner high availability

Production environments should avoid single points of failure as far as possible, so here we make the provisioner highly available. The updated provisioner configuration is as follows:

[root@k8s01 nfs-provisioner]# vim deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
spec:
  replicas: 3 # HA: run three replicas
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2   # official image
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: nfs-storage-231
            - name: ENABLE_LEADER_ELECTION  # enable leader election so only one replica provisions at a time
              value: "True"
            - name: NFS_SERVER    
              value: 192.168.200.233
            - name: NFS_PATH  
              value: /storage
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.200.233
            path: /storage
            
#roll out the updated nfs-provisioner
[root@k8s01 nfs-provisioner]# kubectl apply -f deployment.yaml
deployment.apps/nfs-client-provisioner configured

#verify
[root@k8s01 nfs-provisioner]# kubectl get pod
NAME                                     READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-d6d7d4848-2m65f   1/1     Running   0          62s
nfs-client-provisioner-d6d7d4848-f8f68   1/1     Running   0          62s
nfs-client-provisioner-d6d7d4848-tbhr5   1/1     Running   0          62s

#check the logs
[root@k8s01 nfs-provisioner]# kubectl logs nfs-client-provisioner-d6d7d4848-2m65f 
I0930 04:13:27.384415       1 leaderelection.go:242] attempting to acquire leader lease  default/k8s-sigs.io-nfs-subdir-external-provisioner...
I0930 04:13:45.009672       1 leaderelection.go:252] successfully acquired lease default/k8s-sigs.io-nfs-subdir-external-provisioner
#here we can see that a leader has been elected
I0930 04:13:45.009997       1 event.go:278] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"default", Name:"k8s-sigs.io-nfs-subdir-external-provisioner", UID:"46abdaa8-ddde-43e0-87ab-1e95b9eac418", APIVersion:"v1", ResourceVersion:"647213", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' nfs-client-provisioner-d6d7d4848-2m65f_c776eb51-3b56-4a72-8f9b-b36f3c88f2d8 became leader
I0930 04:13:45.010144       1 controller.go:820] Starting provisioner controller k8s-sigs.io/nfs-subdir-external-provisioner_nfs-client-provisioner-d6d7d4848-2m65f_c776eb51-3b56-4a72-8f9b-b36f3c88f2d8!
I0930 04:13:45.110597       1 controller.go:869] Started provisioner controller k8s-sigs.io/nfs-subdir-external-provisioner_nfs-client-provisioner-d6d7d4848-2m65f_c776eb51-3b56-4a72-8f9b-b36f3c88f2d8!
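
To confirm that failover actually works, you could delete the current leader and watch one of the surviving replicas acquire the lease (pod names taken from the listing above):

[root@k8s01 nfs-provisioner]# kubectl delete pod nfs-client-provisioner-d6d7d4848-2m65f
[root@k8s01 nfs-provisioner]# kubectl logs nfs-client-provisioner-d6d7d4848-f8f68 | grep leaderelection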

Configuring subdirectories

Delete the previously created StorageClass, add the pathPattern parameter, and then regenerate the SC:

[root@k8s01 nfs-provisioner]# vim storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: nfs-storage-231 
parameters:
  archiveOnDelete: "false"
  pathPattern: "${.PVC.namespace}/${.PVC.annotations.nfs.io/storage-path}"
  
Recreate the storage class. StorageClass parameters are immutable, so delete the old object before applying the new definition:
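[root@k8s01 nfs-provisioner]# kubectl delete -f storageclass.yaml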
[root@k8s01 nfs-provisioner]# kubectl apply -f storageclass.yaml 
storageclass.storage.k8s.io/managed-nfs-storage created

Test again

[root@k8s01 nfs-provisioner]# cat test-claim.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim1
  annotations:
    nfs.io/storage-path: "test-path-two" # optional; referenced by the pathPattern of the StorageClass above
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: managed-nfs-storage
  
[root@k8s01 nfs-provisioner]#  kubectl apply -f  test-claim.yaml

Check the result

[root@k8s01 storage]# pwd
/storage
[root@k8s01 storage]# tree -L 2 .
.
└── default
    └── test-claim1
2 directories, 0 files

The subdirectory has indeed been created, and its hierarchy follows the "namespace/annotation value" pattern, exactly matching the pathPattern rule defined in the StorageClass above.

Troubleshooting

During this process, kubectl describe pod reported the following error:

  Warning  FailedMount  10s  kubelet, node1.ayunw.cn  MountVolume.SetUp failed for volume "nfs-client-root" : mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /data/kubernetes/kubelet/pods/2ca70aa9-433c-4d10-8f87-154ec9569504/volumes/kubernetes.io~nfs/nfs-client-root --scope -- mount -t nfs 172.16.41.7:/data/nfs_storage /data/kubernetes/kubelet/pods/2ca70aa9-433c-4d10-8f87-154ec9569504/volumes/kubernetes.io~nfs/nfs-client-root
Output: Running scope as unit: run-rdcc7cfa6560845969628fc551606e69d.scope
mount: /data/kubernetes/kubelet/pods/2ca70aa9-433c-4d10-8f87-154ec9569504/volumes/kubernetes.io~nfs/nfs-client-root: bad option; for several filesystems (e.g. nfs, cifs) you might need a /sbin/mount.<type> helper program.

The cause: the node the pod was scheduled onto has no NFS client installed. Installing the nfs-utils client package on that node resolves it.
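
On a CentOS/RHEL node the fix is a one-liner, run on the affected node (node1.ayunw.cn in the event above):

[root@node1 ~]# yum -y install nfs-utils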
