1. Preliminary thoughts
The previous article tested deploying and configuring PV and PVC, storing a pod application's data on a PVC and decoupling that data from the pod.
That was an entirely manual process: PVs and PVCs were created by hand, which is workable when a cluster only has a few pods.
But if a cluster runs more than 1000 pods and every pod needs a PVC for its data, creating each PV and PVC by hand is an unmanageable amount of work.
It would be far better if the user only had to define a PVC when creating a pod, and the cluster then created the matching PV on demand, i.e. dynamic PV/PVC provisioning.
Kubernetes supports this by integrating with storage back ends to create and bind PVs/PVCs dynamically.
That is the goal of this test.
2. Test environment
This is a lab setup; NFS is used as a simple storage back end for the test.
3. NFS deployment
Omitted here; see the earlier article "pod应用数据存储解耦pv&&pvc" for the NFS server setup.
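For completeness, a minimal NFS server setup on CentOS might look roughly like the sketch below. This is only an illustration; the export path /mnt/k8s and the 192.168.32.0/24 client network are assumptions taken from the addresses used later in this article.

# Install and enable the NFS server (CentOS/RHEL)
yum install -y nfs-utils rpcbind
mkdir -p /mnt/k8s

# Export the directory to the cluster network (adjust the CIDR to your environment)
echo '/mnt/k8s 192.168.32.0/24(rw,sync,no_root_squash)' >> /etc/exports

systemctl enable --now rpcbind nfs-server
exportfs -rav             # reload the export table
showmount -e localhost    # verify the export is visible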
4. Storage classes
Official documentation: https://kubernetes.io/docs/concepts/storage/storage-classes/
Kubernetes uses StorageClasses to integrate with storage back ends and provision PVs and PVCs dynamically.
It has built-in support for many storage types, such as CephFS, GlusterFS and others; see the official documentation for details.
There is no built-in dynamic provisioner for NFS, so an external plugin is required.
External provisioner project: https://github.com/kubernetes-incubator/external-storage
NFS provisioner documentation: https://github.com/kubernetes-incubator/external-storage/tree/master/nfs-client
nfs-client-provisioner is a simple external provisioner for Kubernetes. It does not provide NFS itself; an existing NFS server must supply the storage.
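The configuration files used in the next section come from the nfs-client/deploy directory of that repository; a sketch of fetching them is below (the local directory name nfs is my own choice, not part of the upstream project).

git clone https://github.com/kubernetes-incubator/external-storage.git
cp -r external-storage/nfs-client/deploy nfs
cd nfs
ls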
5. NFS provisioner configuration files
# ls
class.yaml  deployment.yaml  rbac.yaml  test-claim.yaml  test-pod.yaml
5.1 class.yaml
# cat class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs # or choose another name, must match deployment's env PROVISIONER_NAME
parameters:
  archiveOnDelete: "false"
This creates a StorageClass:

kind: StorageClass

The new StorageClass is named managed-nfs-storage:

  name: managed-nfs-storage

provisioner literally means "supplier"; here it names the provisioner program that this StorageClass hands volume requests to (my understanding). The value must be identical to the PROVISIONER_NAME environment variable in deployment.yaml:

provisioner: fuseim.pri/ifs
# kubectl apply -f class.yaml
storageclass.storage.k8s.io "managed-nfs-storage" created

# kubectl get storageclass
NAME                  PROVISIONER      AGE
managed-nfs-storage   fuseim.pri/ifs   7s
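Optionally, managed-nfs-storage can also be marked as the cluster's default StorageClass, so that PVCs which do not name a class are provisioned from NFS as well; a sketch:

kubectl patch storageclass managed-nfs-storage \
  -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
kubectl get storageclass    # the default class is shown with "(default)" after its name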
5.2 deployment.yaml
# cat deployment.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: 10.10.10.60
            - name: NFS_PATH
              value: /ifs/kubernetes
      volumes:
        - name: nfs-client-root
          nfs:
            server: 10.10.10.60
            path: /ifs/kubernetes
Create a ServiceAccount named nfs-client-provisioner:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner

Container name and image:

      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest

Mount path inside the pod:

          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes

Environment variables read by the pod; change these to your local NFS server address and export path:

          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: 10.10.10.60
            - name: NFS_PATH
              value: /ifs/kubernetes

NFS server address and export path for the volume; change these to your local NFS server address and path as well:

      volumes:
        - name: nfs-client-root
          nfs:
            server: 10.10.10.60
            path: /ifs/kubernetes
Modified deployment.yaml; only the NFS server address and directory were changed:
# cat deployment.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: 192.168.32.130
            - name: NFS_PATH
              value: /mnt/k8s
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.32.130
            path: /mnt/k8s
# kubectl apply -f deployment.yaml
serviceaccount "nfs-client-provisioner" created
deployment.extensions "nfs-client-provisioner" created

# kubectl get pod
NAME                                      READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-65bf6bd464-qdzcj   1/1     Running   0          1m
# kubectl describe pod nfs-client-provisioner-65bf6bd464-qdzcj
Name:               nfs-client-provisioner-65bf6bd464-qdzcj
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               k8s-master3/192.168.32.130
Start Time:         Wed, 24 Jul 2019 14:44:11 +0800
Labels:             app=nfs-client-provisioner
                    pod-template-hash=65bf6bd464
Annotations:        <none>
Status:             Running
IP:                 172.30.35.3
Controlled By:      ReplicaSet/nfs-client-provisioner-65bf6bd464
Containers:
  nfs-client-provisioner:
    Container ID:   docker://67329cd9ca608223cda961a1bfe11524f2586e8e1ccba45ad57b292b1508b575
    Image:          quay.io/external_storage/nfs-client-provisioner:latest
    Image ID:       docker-pullable://quay.io/external_storage/nfs-client-provisioner@sha256:022ea0b0d69834b652a4c53655d78642ae23f0324309097be874fb58d09d2919
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Wed, 24 Jul 2019 14:45:52 +0800
    Ready:          True
    Restart Count:  0
    Environment:
      PROVISIONER_NAME:  fuseim.pri/ifs
      NFS_SERVER:        192.168.32.130
      NFS_PATH:          /mnt/k8s
    Mounts:
      /persistentvolumes from nfs-client-root (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from nfs-client-provisioner-token-4n4jn (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  nfs-client-root:
    Type:      NFS (an NFS mount that lasts the lifetime of a pod)
    Server:    192.168.32.130
    Path:      /mnt/k8s
    ReadOnly:  false
  nfs-client-provisioner-token-4n4jn:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  nfs-client-provisioner-token-4n4jn
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age   From                  Message
  ----    ------     ----  ----                  -------
  Normal  Scheduled  2m    default-scheduler     Successfully assigned default/nfs-client-provisioner-65bf6bd464-qdzcj to k8s-master3
  Normal  Pulling    2m    kubelet, k8s-master3  pulling image "quay.io/external_storage/nfs-client-provisioner:latest"
  Normal  Pulled     54s   kubelet, k8s-master3  Successfully pulled image "quay.io/external_storage/nfs-client-provisioner:latest"
  Normal  Created    54s   kubelet, k8s-master3  Created container
  Normal  Started    54s   kubelet, k8s-master3  Started container
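One caveat: the manifest above declares the Deployment as apiVersion: extensions/v1beta1, which only exists on older clusters. On Kubernetes 1.16 and later that API group was removed, so the Deployment header would need to look roughly like this sketch (apps/v1 also requires an explicit selector; the rest of the spec is unchanged):

kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:                      # mandatory in apps/v1
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    # ... container and volume definitions as above ...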
5.3 rbac.yaml
rbac.yaml grants the required permissions to the ServiceAccount nfs-client-provisioner.
The ServiceAccount itself was already created in the deployment step.
# cat rbac.yaml
kind: ServiceAccount
apiVersion: v1
metadata:
  name: nfs-client-provisioner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get","list","watch","create","delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get","update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get","watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create","update","patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get","patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
# kubectl apply -f rbac.yaml
serviceaccount "nfs-client-provisioner" unchanged
clusterrole.rbac.authorization.k8s.io "nfs-client-provisioner-runner" created
clusterrolebinding.rbac.authorization.k8s.io "run-nfs-client-provisioner" created
role.rbac.authorization.k8s.io "leader-locking-nfs-client-provisioner" created
rolebinding.rbac.authorization.k8s.io "leader-locking-nfs-client-provisioner" created
Check the results:
# kubectl get clusterrole | grep nfs
nfs-client-provisioner-runner                                          2m
# kubectl get role | grep nfs
leader-locking-nfs-client-provisioner   2m
# kubectl get rolebinding | grep nfs
leader-locking-nfs-client-provisioner   2m
# kubectl get clusterrolebinding | grep nfs
run-nfs-client-provisioner              2m
6. Testing
Test with the upstream test-claim.yaml:
# cat test-claim.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
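Note that the upstream file selects the class through the legacy volume.beta.kubernetes.io/storage-class annotation. On newer clusters the same claim is usually written with the spec.storageClassName field instead; a sketch with the same effect:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
spec:
  storageClassName: managed-nfs-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi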
PV and PVC state before applying test-claim.yaml:
# kubectl get pv
No resources found.
# kubectl get pvc
No resources found.
Apply it:
# kubectl apply -f test-claim.yaml
persistentvolumeclaim "test-claim" created
PV and PVC state after applying:
# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                STORAGECLASS          REASON   AGE
pvc-4fb682ac-ade0-11e9-8401-000c29383c89   1Mi        RWX            Delete           Bound    default/test-claim   managed-nfs-storage            6s
# kubectl get pvc
NAME         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
test-claim   Bound    pvc-4fb682ac-ade0-11e9-8401-000c29383c89   1Mi        RWX            managed-nfs-storage   8s
It works: with the NFS StorageClass in place, a user only has to request a PVC, and the system creates a PV automatically and binds it to that PVC.
Check the export directory on the NFS server:
# pwd
/mnt/k8s
# ls
default-test-claim-pvc-4fb682ac-ade0-11e9-8401-000c29383c89
Check the mounted directory inside the provisioner pod:
# kubectl exec -it nfs-client-provisioner-65bf6bd464-qdzcj ls /persistentvolumes
default-test-claim-pvc-4fb682ac-ade0-11e9-8401-000c29383c89
7. Test with the upstream test-pod.yaml
# cat test-pod.yaml
kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  containers:
  - name: test-pod
    image: gcr.io/google_containers/busybox:1.24
    command:
      - "/bin/sh"
    args:
      - "-c"
      - "touch /mnt/SUCCESS && exit 0 || exit 1"
    volumeMounts:
      - name: nfs-pvc
        mountPath: "/mnt"
  restartPolicy: "Never"
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: test-claim
# kubectl apply -f test-pod.yaml
pod "test-pod" created

# kubectl get pod
NAME                                      READY   STATUS      RESTARTS   AGE
test-pod                                  0/1     Completed   0          1m
After the pod starts, it creates a file named SUCCESS under /mnt.
/mnt is the directory in the pod where the PVC is mounted.
The SUCCESS file created by test-pod is visible in the corresponding directory on the NFS server:
# pwd
/mnt/k8s/default-test-claim-pvc-4fb682ac-ade0-11e9-8401-000c29383c89
# ls
SUCCESS
Check inside the nfs-client-provisioner pod:
# kubectl exec -it nfs-client-provisioner-65bf6bd464-qdzcj ls /persistentvolumes/default-test-claim-pvc-4fb682ac-ade0-11e9-8401-000c29383c89
SUCCESS
8. A question after testing
Deleting the pod leaves the data stored via the PVC intact, but deleting the PVC removes the backing directory together with all of its data.
Can a copy be kept, to guard against accidental deletion?
Yes, it can.
# cat class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs # or choose another name, must match deployment's env PROVISIONER_NAME
parameters:
  archiveOnDelete: "false"
archiveOnDelete: "false" ??
这个参数可以设置为false和true.
archiveOnDelete字面意思为删除时是否存档,false表示不存档,即删除数据,true表示存档,即重命名路径.
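To enable archiving, class.yaml is changed to archiveOnDelete: "true" and the StorageClass is re-created. This is a sketch only; note that StorageClass parameters generally cannot be updated in place, so deleting and re-creating the class is safer than patching it.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs
parameters:
  archiveOnDelete: "true"

# After editing class.yaml, re-create the StorageClass
kubectl delete -f class.yaml
kubectl apply -f class.yaml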
Modify it and test:
# kubectl get storageclass
NAME                  PROVISIONER      AGE
managed-nfs-storage   fuseim.pri/ifs   1m
# kubectl describe storageclass
Name:                  managed-nfs-storage
IsDefaultClass:        No
Annotations:           kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{},"name":"managed-nfs-storage","namespace":""},"parameters":{"archiveOnDelete":"true"},"provisioner":"fuseim.pri/ifs"}
Provisioner:           fuseim.pri/ifs
Parameters:            archiveOnDelete=true
AllowVolumeExpansion:  <unset>
MountOptions:          <none>
ReclaimPolicy:         Delete
VolumeBindingMode:     Immediate
Events:                <none>
Delete the pod and the PVC:
# kubectl get pod
NAME                                      READY   STATUS      RESTARTS   AGE
test-pod                                  0/1     Completed   0          6s
# kubectl get pv,pvc
NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                STORAGECLASS          REASON   AGE
persistentvolume/pvc-5a12cb0e-adeb-11e9-8401-000c29383c89   1Mi        RWX            Delete           Bound    default/test-claim   managed-nfs-storage            17s

NAME                                STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
persistentvolumeclaim/test-claim    Bound    pvc-5a12cb0e-adeb-11e9-8401-000c29383c89   1Mi        RWX            managed-nfs-storage   17s
# kubectl delete -f test-pod.yaml
pod "test-pod" deleted
# kubectl delete -f test-claim.yaml
persistentvolumeclaim "test-claim" deleted

# kubectl get pv,pvc
No resources found.
# pwd
/mnt/k8s/archived-default-test-claim-pvc-5a12cb0e-adeb-11e9-8401-000c29383c89
# ls
SUCCESS
Remember to set archiveOnDelete: "true" if you want the data preserved.
9. With the NFS provisioner deployed, users can request PVCs themselves; there is no longer any need to create a PV by hand for every PVC. One inconvenience remains: could the PVC be requested automatically when the pod is created, instead of having to create the PVC first and then mount it into the pod? That is exactly what volumeClaimTemplates in a StatefulSet provides, as sketched below; it will be tested in the next article.
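As a preview, a StatefulSet requesting storage from managed-nfs-storage through volumeClaimTemplates might look roughly like this sketch (the nginx image, names and sizes are placeholders, not something tested in this article):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: web
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.16
          volumeMounts:
            - name: www
              mountPath: /usr/share/nginx/html
  volumeClaimTemplates:          # one PVC is created automatically for each replica
    - metadata:
        name: www
      spec:
        storageClassName: managed-nfs-storage
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi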