
Steps for Using NFS as Storage in Kubernetes (a step-by-step guide)




Preface

An nfs volume mounts an NFS (Network File System) share into a Pod. Unlike emptyDir, which is erased when the Pod is deleted, the contents of an nfs volume are preserved when the Pod is deleted; the volume is merely unmounted. This means an nfs volume can be pre-populated with data, and that data can be shared between Pods.

NFS supports multiple clients mounting the same export, so you can create multiple Pods that all mount the same directory shared by one NFS server. However, if the NFS server goes down, the data becomes unavailable, so production setups should use distributed storage instead; common choices are glusterfs and cephfs.


Environment
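The commands in this article imply the following setup: the NFS server runs on sc-node2 (IP 192.168.2.30), manifests are applied from the master node sc-master1, all hosts share the 192.168.2.0/24 network, and the provisioner and its consumers live in the kube-logging namespace.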

Install the NFS server

$ yum install -y nfs-utils
$ systemctl start nfs
$ systemctl enable nfs
$ systemctl status nfs
$ chkconfig nfs on                             # enable at boot
Note: Forwarding request to 'systemctl enable nfs.service'.
[root@sc-node2 ~]# mkdir -p /data/nfs/efk      # create the shared directory
[root@sc-node2 ~]# cat /etc/exports
/data/nfs/efk 192.168.2.0/24(rw,no_root_squash)
[root@sc-node2 ~]# exportfs -arv               # reload the exports configuration
exporting 192.168.2.0/24:/data/nfs/efk
[root@sc-node2 ~]# systemctl restart nfs
[root@sc-node2 ~]# showmount -e localhost      # list the shared directories
Export list for localhost:
/data/nfs/efk 192.168.2.0/24
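Before moving on, it is worth sanity-checking the export from any host in 192.168.2.0/24. A minimal check, assuming nfs-utils is installed on that host and /mnt is unused:

[root@sc-node2 ~]# mount -t nfs 192.168.2.30:/data/nfs/efk /mnt   # mount the export
[root@sc-node2 ~]# echo hello > /mnt/test.txt                     # write a file through NFS
[root@sc-node2 ~]# cat /mnt/test.txt                              # read it back
hello
[root@sc-node2 ~]# umount /mnt                                    # clean up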

Install the NFS client

The clients here are the nodes of the Kubernetes cluster; perform this step on every node.

[root@sc-node2 ~]# yum -y install nfs-utils
[root@sc-node2 ~]# systemctl start nfs-utils
[root@sc-node2 ~]# systemctl enable nfs-utils
[root@sc-node2 ~]# systemctl status nfs-utils
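Each node can also verify that it sees the server's export list (an optional check; the output should match the server-side listing above):

[root@sc-node2 ~]# showmount -e 192.168.2.30
Export list for 192.168.2.30:
/data/nfs/efk 192.168.2.0/24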

Create a ServiceAccount for running nfs-provisioner

The external NFS provisioner will use this ServiceAccount to access the resources it needs.

[root@sc-master1 ~]# vim nfs-provisioner-sa.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-provisioner
  namespace: kube-logging
[root@sc-master1 ~]# kubectl apply -f nfs-provisioner-sa.yaml
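The article assumes the kube-logging namespace already exists; if it does not, create it first, then confirm the ServiceAccount:

[root@sc-master1 ~]# kubectl create namespace kube-logging        # only if it does not exist yet
[root@sc-master1 ~]# kubectl -n kube-logging get sa nfs-provisioner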

Grant RBAC permissions to the ServiceAccount

[root@sc-master1 ~]# vim nfs-rbac.yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services", "endpoints"]
    verbs: ["get"]
  - apiGroups: ["extensions"]
    resources: ["podsecuritypolicies"]
    resourceNames: ["nfs-provisioner"]
    verbs: ["use"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-provisioner
    namespace: kube-logging
roleRef:
  kind: ClusterRole
  name: nfs-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-provisioner
  namespace: kube-logging
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-provisioner
  namespace: kube-logging
subjects:
  - kind: ServiceAccount
    name: nfs-provisioner
    namespace: kube-logging
roleRef:
  kind: Role
  name: leader-locking-nfs-provisioner
  apiGroup: rbac.authorization.k8s.io
[root@sc-master1 ~]# kubectl apply -f nfs-rbac.yaml
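A quick way to verify the bindings took effect is kubectl auth can-i, impersonating the ServiceAccount (an optional check; requires impersonation rights, which cluster admins have):

[root@sc-master1 ~]# kubectl auth can-i create persistentvolumes --as=system:serviceaccount:kube-logging:nfs-provisioner
yes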

Create the external storage provisioner

Create a Pod via a Deployment to run nfs-provisioner.

[root@sc-master1 ~]# vim nfs-provisioner-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-provisioner
  namespace: kube-logging
spec:
  selector:
    matchLabels:
      app: nfs-provisioner
  replicas: 1
  strategy:                    # update strategy
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-provisioner
    spec:
      serviceAccount: nfs-provisioner   # ServiceAccount name
      containers:
        - name: nfs-provisioner
          image: registry.cn-beijing.aliyuncs.com/mydlq/nfs-subdir-external-provisioner:v4.0.0
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: example.com/nfs
            - name: NFS_SERVER
              value: 192.168.2.30        # NFS server IP address
            - name: NFS_PATH
              value: /data/nfs/efk       # shared directory
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.2.30         # NFS server IP address
            path: /data/nfs/efk          # shared directory
[root@sc-master1 ~]# kubectl apply -f nfs-provisioner-deploy.yaml

A Running Pod status does not by itself mean the Pod is usable; you also need to check the Pod's logs for errors!

[root@sc-master1 ~]# kubectl -n kube-logging get pod
NAME                               READY   STATUS    RESTARTS   AGE
nfs-provisioner-7b4c6cc9bf-s48ld   1/1     Running   5          4d9h
[root@sc-master1 ~]# kubectl -n kube-logging logs nfs-provisioner-7b4c6cc9bf-s48ld
I0414 05:26:47.215510       1 leaderelection.go:242] attempting to acquire leader lease  kube-logging/example.com-nfs...
I0414 05:27:04.812118       1 leaderelection.go:252] successfully acquired lease kube-logging/example.com-nfs
I0414 05:27:04.812518       1 event.go:278] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-logging", Name:"example.com-nfs", UID:"595c1061-5f59-4723-a8ac-02ba2fb4e0e0", APIVersion:"v1", ResourceVersion:"1536933", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' nfs-provisioner-7b4c6cc9bf-s48ld_146316d3-5696-4016-b0ba-2f0491ad6314 became leader
I0414 05:27:04.812579       1 controller.go:820] Starting provisioner controller example.com/nfs_nfs-provisioner-7b4c6cc9bf-s48ld_146316d3-5696-4016-b0ba-2f0491ad6314!
I0414 05:27:04.913020       1 controller.go:869] Started provisioner controller example.com/nfs_nfs-provisioner-7b4c6cc9bf-s48ld_146316d3-5696-4016-b0ba-2f0491ad6314!

Using the NFS service directly in a Pod

A Pod can consume NFS directly, without the external provisioner, but then the shared directory has to be created by hand, so this is not recommended for automated scenarios; more commonly a StorageClass is used to provision PVs dynamically.

volumes:
  - name: test-volume
    nfs:
      path: /data/nfs       # shared directory on the NFS server
      server: 192.168.2.30  # NFS server address
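For context, a complete Pod manifest built around this volume could look like the following sketch; the pod name, image, and mount path are illustrative, not from the original article:

apiVersion: v1
kind: Pod
metadata:
  name: test-nfs-pod             # hypothetical name
spec:
  containers:
    - name: app
      image: busybox             # illustrative image
      command: ["sleep", "3600"]
      volumeMounts:
        - name: test-volume
          mountPath: /mnt/nfs    # where the share appears inside the container
  volumes:
    - name: test-volume
      nfs:
        path: /data/nfs          # shared directory on the NFS server
        server: 192.168.2.30     # NFS server address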

Create a StorageClass backed by NFS

With the external NFS provisioner deployed, PV volumes can now be carved out of NFS dynamically through a StorageClass.

[root@sc-master1 ~]# vim nfs-storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: do-block-storage    # StorageClass is cluster-scoped, so no namespace is needed
provisioner: example.com/nfs
[root@sc-master1 ~]# kubectl apply -f nfs-storageclass.yaml
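A quick look at the registered StorageClass (the exact output columns vary slightly by kubectl version):

[root@sc-master1 ~]# kubectl get storageclass do-block-storage
NAME               PROVISIONER       AGE
do-block-storage   example.com/nfs   10s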

Note: the value example.com/nfs in the provisioner field must match the value of the PROVISIONER_NAME env variable set when the NFS provisioner was installed, as shown below:

env:
  - name: PROVISIONER_NAME
    value: example.com/nfs

Dynamically provision PV volumes with the StorageClass

[root@sc-master1]# cat test-pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim1
spec:
  accessModes: ["ReadWriteMany"]       # access mode of the volume
  resources:
    requests:
      storage: 1Gi                     # requested size
  storageClassName: do-block-storage   # name of the StorageClass
[root@sc-master1]# kubectl apply -f test-pvc.yaml   # apply the manifest
persistentvolumeclaim/test-claim1 created
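If the provisioner is healthy, the claim binds within a few seconds and a matching PV is created automatically; the generated volume name (pvc-...) will differ in your cluster:

[root@sc-master1]# kubectl get pvc test-claim1
NAME          STATUS   VOLUME       CAPACITY   ACCESS MODES   STORAGECLASS       AGE
test-claim1   Bound    pvc-<uuid>   1Gi        RWX            do-block-storage   6s
[root@sc-master1]# kubectl get pv    # shows the dynamically created PV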

Mount the volume in a Pod

spec:
  ...
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: test-claim1
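Putting it together, a minimal Pod that mounts the dynamically provisioned volume might look like this sketch; the pod name, image, and mount path are illustrative, not from the original article:

apiVersion: v1
kind: Pod
metadata:
  name: test-nfs-pvc-pod          # hypothetical name
spec:
  containers:
    - name: app
      image: busybox              # illustrative image
      command: ["sleep", "3600"]
      volumeMounts:
        - name: nfs-pvc
          mountPath: /mnt/data    # where the volume appears inside the container
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: test-claim1    # the PVC created above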


