Using Ceph RBD as a Dynamic Storage Backend for Kubernetes

1. Kubernetes Cluster Information

Host      OS               IP             Kubernetes version
master1   CentOS7.8.2003   192.168.0.14   v1.16.6
master2   CentOS7.8.2003   192.168.0.15   v1.16.6
master3   CentOS7.8.2003   192.168.0.16   v1.16.6
node1     CentOS7.8.2003   192.168.0.18   v1.16.6
node2     CentOS7.8.2003   192.168.0.19   v1.16.6
node3     CentOS7.8.2003   192.168.0.20   v1.16.6

2. Create a Storage Pool

Using the default rbd pool also works, but it is not recommended.

On a Ceph admin or monitor node, create a new pool for dynamic volumes:

# Create the storage pool
ceph osd pool create kube 128 128
pool 'kube' created

# Create the client credentials
ceph auth get-or-create client.kube mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=kube' -o ceph.client.kube.keyring
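
To confirm that the pool and the client.kube user were created as expected, you can optionally run the following on the same admin or mon node (not part of the original steps):

ceph osd pool ls detail | grep kube
ceph auth get client.kube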

3. Deploy the rbd-provisioner

If the cluster was deployed with kubeadm, the official kube-controller-manager image does not ship the rbd command, so we need to deploy an external rbd-provisioner:

cat > rbd-provisioner.yaml <<EOF
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-provisioner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["kube-dns","coredns"]
    verbs: ["list", "get"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-provisioner
subjects:
  - kind: ServiceAccount
    name: rbd-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: rbd-provisioner
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: rbd-provisioner
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get"]
- apiGroups: [""]
  resources: ["endpoints"]
  verbs: ["get", "list", "watch", "create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: rbd-provisioner
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: rbd-provisioner
subjects:
  - kind: ServiceAccount
    name: rbd-provisioner
    namespace: default
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rbd-provisioner
spec:
  selector:
    matchLabels:
      app: rbd-provisioner
  replicas: 2
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: rbd-provisioner
    spec:
      containers:
      - name: rbd-provisioner
        image: quay.mirrors.ustc.edu.cn/external_storage/rbd-provisioner:latest
        env:
        - name: PROVISIONER_NAME
          value: ceph.com/rbd
      serviceAccount: rbd-provisioner
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: rbd-provisioner
EOF
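
Apply the manifest and check that the provisioner pods come up before continuing (a quick sanity check, assuming the default namespace used in the manifest):

kubectl apply -f rbd-provisioner.yaml
kubectl get pods -l app=rbd-provisioner
# both replicas should reach the Running state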

Note: the rbd-provisioner image version must be compatible with the Ceph version in use.

If the cluster was deployed from binaries, you can simply install ceph-common on the master nodes instead.

4. Install ceph-common

4.1 Add the Ceph repository

cat > /etc/yum.repos.d/ceph.repo <<EOF
[ceph-luminous-x64]
name = ceph-luminous-x64
baseurl = https://mirrors.tuna.tsinghua.edu.cn/ceph/rpm-luminous/el7/x86_64/
enabled = 1
gpgcheck = 1
gpgkey = http://mirrors.tuna.tsinghua.edu.cn/ceph/keys/release.asc
EOF

4.2 Install epel-release

yum -y install epel-release

# Switch to a domestic (TUNA) EPEL mirror
sed -e 's!^metalink=!#metalink=!g' \
    -e 's!^#baseurl=!baseurl=!g' \
    -e 's!//download\.fedoraproject\.org/pub!//mirrors.tuna.tsinghua.edu.cn!g' \
    -e 's!http://mirrors\.tuna!https://mirrors.tuna!g' \
    -i /etc/yum.repos.d/epel.repo /etc/yum.repos.d/epel-testing.repo

4.3 Import the Ceph release key

rpm --import http://mirrors.tuna.tsinghua.edu.cn/ceph/keys/release.asc

4.4 Install ceph-common

yum -y install ceph-common

4.5 Check the Ceph version

ceph -v

5. Create the Ceph Secrets

Create the admin secret. First, retrieve the client.admin key and base64-encode it:

ceph auth get-key client.admin | base64
QVFEYW9EaGZPdDZyT1JBQXZhWDNDZytTcW9sbUhNZ1U4Ym1tWlE9PQ==

Use the returned value as the key field of the secret:

cat > ceph-secret.yaml <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
  namespace: default
data:
  key: QVFEYW9EaGZPdDZyT1JBQXZhWDNDZytTcW9sbUhNZ1U4Ym1tWlE9PQ==
type: kubernetes.io/rbd
EOF

Create the ceph-secret:

kubectl apply -f ceph-secret.yaml
secret/ceph-secret created

Verify the newly created secret:

kubectl get secret ceph-secret
NAME          TYPE                DATA   AGE
ceph-secret   kubernetes.io/rbd   1      6s

Create the Ceph user secret in the same way, using the client.kube key:

ceph auth get-key client.kube | base64
QVFDbHBUaGYxQkkxQVJBQUNVUUlIcTdJeVZoZzlYNWdaVjRTOGc9PQ==

cat > ceph-user-secret.yaml <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: ceph-user-secret
  namespace: default
data:
  key: QVFDbHBUaGYxQkkxQVJBQUNVUUlIcTdJeVZoZzlYNWdaVjRTOGc9PQ==
type: kubernetes.io/rbd
EOF

Create the ceph-user-secret:

kubectl apply -f ceph-user-secret.yaml
secret/ceph-user-secret created

Verify the newly created ceph-user-secret:

kubectl get secret ceph-user-secret
NAME               TYPE                DATA   AGE
ceph-user-secret   kubernetes.io/rbd   1      5s
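
If you want to double-check that the stored key matches what Ceph issued, you can decode it and compare it with the output of ceph auth get-key client.kube (an optional sanity check):

kubectl get secret ceph-user-secret -o jsonpath='{.data.key}' | base64 -d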

6. Create the Ceph RBD Dynamic StorageClass

cat > ceph-storageclass.yaml <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-rbd
  annotations:
     storageclass.beta.kubernetes.io/is-default-class: "true"
provisioner: ceph.com/rbd
reclaimPolicy: Retain
parameters:
  monitors: 192.168.0.250:6789,192.168.0.251:6789,192.168.0.252:6789 # <== monitor nodes, comma-separated
  adminId: admin
  adminSecretName: ceph-secret   # <== the secret created above; its type must be kubernetes.io/rbd
  adminSecretNamespace: default  # <== namespace of that secret
  pool: kube         # <== the pool created on the mon node
  fsType: xfs        # <== filesystem type
  userId: kube       # <== the user created earlier with ceph auth get-or-create
  userSecretName: ceph-user-secret
  imageFormat: "2"
  imageFeatures: "layering"
EOF

Create the StorageClass:

kubectl apply -f ceph-storageclass.yaml
storageclass.storage.k8s.io/ceph-rbd created
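
Optionally confirm that ceph-rbd is now marked as the default StorageClass:

kubectl get storageclass
# ceph-rbd should be listed with "(default)" appended to its name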

7. PVC Test

Create a PersistentVolumeClaim:

cat > pvc-claim.yaml <<EOF
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ceph-claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF

kubectl apply -f pvc-claim.yaml
persistentvolumeclaim/ceph-claim created

kubectl get pvc | grep ceph
NAME         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
ceph-claim   Bound    pvc-83749efe-050c-4dca-a65d-24967c351136   1Gi        RWO            ceph-rbd       3s

As shown above, the claim was successfully bound through the default StorageClass ceph-rbd: the PVC dynamically provisioned a Ceph RBD PV.
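
The dynamically provisioned PV can be inspected as well; its name matches the VOLUME column of the PVC output above, and its source section shows the RBD pool and image it is backed by:

kubectl get pv
kubectl describe pv pvc-83749efe-050c-4dca-a65d-24967c351136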

Create a Pod that uses the claim:

cat > ceph-pod1.yaml <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: ceph-pod1
spec:
  containers:
  - name: ceph-busybox
    image: busybox
    command: ["sleep", "60000"]
    volumeMounts:
    - name: ceph-vol1
      mountPath: /usr/share/busybox
      readOnly: false
  volumes:
  - name: ceph-vol1
    persistentVolumeClaim:
      claimName: ceph-claim
EOF

kubectl apply -f ceph-pod1.yaml
pod/ceph-pod1 created

# Verify that the pod was created and is Running
kubectl get pod ceph-pod1
NAME        READY   STATUS    RESTARTS   AGE
ceph-pod1   1/1     Running   0          38s
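
To verify that the RBD volume is actually mounted and writable inside the pod, you can optionally exec into it (using the mount path from the manifest above):

kubectl exec ceph-pod1 -- df -h /usr/share/busybox
kubectl exec ceph-pod1 -- sh -c 'echo hello-rbd > /usr/share/busybox/test.txt && cat /usr/share/busybox/test.txt'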

8. Example: Using RBD as the Backend Storage for an Application

Install Helm 3:

wget https://get.helm.sh/helm-v3.2.4-linux-amd64.tar.gz
tar xf helm-v3.2.4-linux-amd64.tar.gz -C /usr/local/bin --strip-components 1 linux-amd64/helm

Add the stable chart repository (Helm 3 no longer uses helm init):

helm repo add stable http://mirror.azure.cn/kubernetes/charts/
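
To confirm that the repository was added and that the chart is available, you can optionally run:

helm repo list
helm search repo percona-xtradb-cluster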

Install a MySQL (Percona XtraDB) cluster

Fetch the mysql chart:

helm fetch stable/percona-xtradb-cluster --untar
cd percona-xtradb-cluster

Install percona-xtradb-cluster:

helm install mysql . -f values.yaml --set persistence.enabled=true

Check the pod status:

kubectl get po -l app=mysql-pxc
NAME          READY   STATUS    RESTARTS   AGE
mysql-pxc-0   2/2     Running   0          42s
mysql-pxc-1   2/2     Running   0          15s
mysql-pxc-2   2/2     Running   0          9s

Check the PVCs:

kubectl get pvc
NAME                     STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mysql-data-mysql-pxc-0   Bound    pvc-e7fabe2a-fb91-478c-b3dc-f80560d90e77   8Gi        RWO            ceph-rbd       55s
mysql-data-mysql-pxc-1   Bound    pvc-1ad7cc93-f5eb-4bc8-a092-1a4f09e3b732   8Gi        RWO            ceph-rbd       33s
mysql-data-mysql-pxc-2   Bound    pvc-65a47c92-6e6d-4659-9512-8f20ef5adc9a   8Gi        RWO            ceph-rbd       15s
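
On the Ceph side, each bound PV corresponds to an RBD image in the kube pool (typically named kubernetes-dynamic-pvc-<uuid> by this provisioner); a quick cross-check from a mon/admin node:

rbd ls -p kube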

9. References

  1. OpenShift - Complete Example Using Ceph RBD for Dynamic Provisioning

  2. CSDN - Using Ceph RBD as Backend Storage for K8s