Installing Ceph 12.2.13 (Luminous)

1. Host Planning

Hostname     OS               IP           Spec
ceph-node1   CentOS 7.8.2003  10.10.10.47  8C / 16G RAM, 50G + 200G disks
ceph-node2   CentOS 7.8.2003  10.10.10.48  8C / 16G RAM, 50G + 200G disks
ceph-node3   CentOS 7.8.2003  10.10.10.49  8C / 16G RAM, 50G + 200G disks

Disk layout

The 50G disk is the system disk; the 200G disk is used for the OSD.

2. Environment Preparation

2.1 Disable SELinux and the firewall

Run on all nodes:

# Disable SELinux
setenforce 0
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config

# Disable the firewall
systemctl disable --now firewalld
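
Optionally verify both changes took effect; getenforce should report Permissive (Disabled after a reboot) and firewalld should be inactive:

# Optional check: SELinux mode and firewalld state
getenforce
systemctl is-active firewalld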

2.2 Configure /etc/hosts

cat > /etc/hosts <<EOF
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.10.10.47 ceph-node1
10.10.10.48 ceph-node2
10.10.10.49 ceph-node3
EOF

# Distribute to the other nodes
scp /etc/hosts ceph-node2:/etc
scp /etc/hosts ceph-node3:/etc
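
Optionally, confirm every short hostname resolves and answers (a quick loop over the node names from the plan above):

# Optional check: each hostname should resolve and answer
for h in ceph-node1 ceph-node2 ceph-node3; do
    ping -c 1 "$h" > /dev/null && echo "$h ok"
done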

2.3 Passwordless SSH login

On ceph-node1:

# Generate an SSH key pair
ssh-keygen -t rsa -b 2048 -P '' -f ~/.ssh/id_rsa

# Copy the public key to each node
ssh-copy-id ceph-node1
ssh-copy-id ceph-node2
ssh-copy-id ceph-node3
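
Optionally, verify the passwordless login works; each command should print the remote hostname without prompting for a password:

# Optional check: non-interactive login to every node
for h in ceph-node1 ceph-node2 ceph-node3; do
    ssh "$h" hostname
done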

2.4 Time synchronization

Run on all nodes:

# Set the timezone
timedatectl set-timezone Asia/Shanghai
# Configure time synchronization
yum -y install chrony
systemctl enable --now chronyd
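
Optionally, confirm chrony has reachable time sources:

# Optional check: list the configured time sources and their reachability
chronyc sources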

2.5 Configure the yum repositories

cat > /etc/yum.repos.d/ceph.repo <<EOF
[ceph-luminous-noarch]
name = ceph-luminous-noarch
baseurl = https://mirrors.tuna.tsinghua.edu.cn/ceph/rpm-luminous/el7/noarch/
enabled = 1
gpgcheck = 1
gpgkey = http://mirrors.tuna.tsinghua.edu.cn/ceph/keys/release.asc
[ceph-luminous-x64]
name = ceph-luminous-x64
baseurl = https://mirrors.tuna.tsinghua.edu.cn/ceph/rpm-luminous/el7/x86_64/
enabled = 1
gpgcheck = 1
gpgkey = http://mirrors.tuna.tsinghua.edu.cn/ceph/keys/release.asc
EOF
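
Optionally, refresh the metadata and confirm the new repositories are visible:

# Optional check: rebuild the yum cache and list the ceph repos
yum clean all
yum repolist | grep -i ceph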

3. Ceph Deployment

Deploy the cluster with the ceph-deploy tool.

3.1 Use a domestic mirror

export CEPH_DEPLOY_REPO_URL=http://mirrors.tuna.tsinghua.edu.cn/ceph/rpm-luminous/el7
export CEPH_DEPLOY_GPG_URL=http://mirrors.tuna.tsinghua.edu.cn/ceph/keys/release.asc

3.2 Install ceph-deploy

The Ceph installation depends on some packages from the EPEL repository.

yum -y install epel-release
# Switch to a domestic EPEL mirror
sed -e 's!^metalink=!#metalink=!g' \
    -e 's!^#baseurl=!baseurl=!g' \
    -e 's!//download\.fedoraproject\.org/pub!//mirrors.tuna.tsinghua.edu.cn!g' \
    -e 's!http://mirrors\.tuna!https://mirrors.tuna!g' \
    -i /etc/yum.repos.d/epel.repo /etc/yum.repos.d/epel-testing.repo
yum -y install ceph-deploy

3.3 Create a working directory

mkdir my-cluster
cd my-cluster

3.4 Create the Ceph cluster and deploy the new monitor nodes

ceph-deploy new ceph-node1 ceph-node2 ceph-node3

3.5 Edit the configuration file

Add the public_network and cluster_network settings:

vim ceph.conf
...
public_network = 10.10.10.0/24
cluster_network = 10.10.10.0/24
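
If ceph.conf is changed again after the cluster is up, the updated file can be pushed from the working directory to every node; a sketch using the node names from the plan above:

# Push the updated ceph.conf to all nodes
ceph-deploy --overwrite-conf config push ceph-node1 ceph-node2 ceph-node3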

3.6 Install Ceph on each node

Specify the release explicitly; if omitted, the latest release is installed by default.

ceph-deploy install --release luminous ceph-node1 ceph-node2 ceph-node3

3.7 Check the Ceph version

ceph --version
ceph version 12.2.13 (584a20eb0237c657dc0567da126be145106aa47e) luminous (stable)

3.8 Bootstrap the monitors and gather the keys; several keyrings are generated in the my-cluster directory

ceph-deploy mon create-initial

3.9 Distribute the keys

ceph-deploy admin ceph-node1 ceph-node2 ceph-node3
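
Optionally, confirm the config and admin keyring landed on each node:

# Optional check: ceph.conf and ceph.client.admin.keyring should be present
ls /etc/ceph/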

3.10 Initialize the disks

ceph-deploy osd create ceph-node1 --data /dev/vdb
ceph-deploy osd create ceph-node2 --data /dev/vdb
ceph-deploy osd create ceph-node3 --data /dev/vdb

3.11 List the OSDs

ceph osd tree
ID CLASS WEIGHT  TYPE NAME           STATUS REWEIGHT PRI-AFF
-1       0.58589 root default
-3       0.19530     host ceph-node1
 0   hdd 0.19530         osd.0           up  1.00000 1.00000
-5       0.19530     host ceph-node2
 1   hdd 0.19530         osd.1           up  1.00000 1.00000
-7       0.19530     host ceph-node3
 2   hdd 0.19530         osd.2           up  1.00000 1.00000

3.12 Grant read permission on the admin keyring

chmod +r /etc/ceph/ceph.client.admin.keyring

3.13 Create the manager (mgr) daemons

ceph-deploy mgr create ceph-node1 ceph-node2 ceph-node3

3.14 Check cluster health

ceph health
HEALTH_OK
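
For a fuller report covering the monitors, mgr daemons, OSDs, and pools:

# Detailed cluster status
ceph -s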

4. Ceph Block Storage

4.1 Create a pool

rados mkpool rbd
successfully created pool rbd
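
On Luminous, a newly created pool may also need its application tag set, otherwise ceph health can warn about an application not being enabled on the pool:

# Tag the pool for use by RBD
ceph osd pool application enable rbd rbd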

4.2 Create a block device

rbd create rbd1 --size 1024

4.3 View the created RBD image

rbd list
rbd1

# View image details
rbd --image rbd1 info
rbd image 'rbd1':
	size 1GiB in 256 objects
	order 22 (4MiB objects)
	block_name_prefix: rbd_data.103d6b8b4567
	format: 2
	features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
	flags:
	create_timestamp: Sat Jun 20 21:17:50 2020

4.4 Map the image to a block device

# The CentOS 7 kernel RBD client does not support these features, so disable them before mapping
rbd feature disable rbd1 object-map fast-diff deep-flatten
rbd map rbd/rbd1
/dev/rbd0

# Format the device
mkfs.xfs /dev/rbd0
# Mount it
mount /dev/rbd0 /opt
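
Before removing the image in the next step, unmount the filesystem and unmap the device, otherwise the removal will fail while the image is still in use:

# Unmount and unmap before deleting the image
umount /opt
rbd unmap /dev/rbd0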

4.5 Delete the block device

rbd rm rbd1
Removing image: 100% complete...done.

4.6 Delete the pool

By default, the monitors do not allow pools to be deleted.

rados rmpool rbd rbd --yes-i-really-really-mean-it
Check your monitor configuration - `mon allow pool delete` is set to false by default, change it to true to allow deletion of pools
# Edit ceph.conf
vim /etc/ceph/ceph.conf
...
mon_allow_pool_delete = true

# Restart ceph-mon.target
systemctl restart ceph-mon.target

Delete the pool again:

rados rmpool rbd rbd --yes-i-really-really-mean-it
successfully deleted pool rbd

5. Ceph Object Storage

5.1 Create the object storage gateways

ceph-deploy rgw create ceph-node1 ceph-node2 ceph-node3

Once created, each gateway listens on port 7480 by default. A load balancer can then be placed in front to forward requests to the backend gateways.
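
A quick way to confirm a gateway is answering; an anonymous request to the root returns a ListAllMyBucketsResult XML document:

# Optional check: hit the rgw endpoint on any node
curl http://ceph-node1:7480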

5.2 Create an S3 user

radosgw-admin user create --uid=admin --display-name=admin --email=admin@example.com
{
    "user_id": "admin",
    "display_name": "admin",
    "email": "admin@example.com",
    "suspended": 0,
    "max_buckets": 1000,
    "auid": 0,
    "subusers": [],
    "keys": [
        {
            "user": "admin",
            "access_key": "837H72BJ7KJ4ZO7Q7PJL",
            "secret_key": "GYgDMcqxFI68A5K10sWlA2GF9cknohFPqUb6499b"
        }
    ],
    "swift_keys": [],
    "caps": [],
    "op_mask": "read, write, delete",
    "default_placement": "",
    "placement_tags": [],
    "bucket_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "user_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "temp_url_keys": [],
    "type": "rgw"
}

Note the user's access_key and secret_key; they are needed later to access the S3 service.
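
If the keys are misplaced, they can be printed again at any time:

# Show the user's details, including access_key and secret_key
radosgw-admin user info --uid=admin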

5.3 Delete a user

radosgw-admin user rm --uid=admin

5.4 Access with the s3cmd client

Install s3cmd:

yum -y install s3cmd

Configure the S3 client:

s3cmd --configure \
        --access_key=837H72BJ7KJ4ZO7Q7PJL \
        --secret_key=GYgDMcqxFI68A5K10sWlA2GF9cknohFPqUb6499b \
        --host=10.10.10.47:7480 \
        --host-bucket=test-bucket \
        --no-ssl

Enter new values or accept defaults in brackets with Enter.
Refer to user manual for detailed description of all options.

Access key and Secret key are your identifiers for Amazon S3. Leave them empty for using the env variables.
Access Key [837H72BJ7KJ4ZO7Q7PJL]:
Secret Key [GYgDMcqxFI68A5K10sWlA2GF9cknohFPqUb6499b]:
Default Region [US]:

Use "s3.amazonaws.com" for S3 Endpoint and not modify it to the target Amazon S3.
S3 Endpoint [10.10.10.47:7480]:

Use "%(bucket)s.s3.amazonaws.com" to the target Amazon S3. "%(bucket)s" and "%(location)s" vars can be used
if the target S3 system supports dns based buckets.
DNS-style bucket+hostname:port template for accessing a bucket [test-bucket]:

Encryption password is used to protect your files from reading
by unauthorized persons while in transfer to S3
Encryption password:
Path to GPG program [/usr/bin/gpg]:

When using secure HTTPS protocol all communication with Amazon S3
servers is protected from 3rd party eavesdropping. This method is
slower than plain HTTP, and can only be proxied with Python 2.7 or newer
Use HTTPS protocol [No]:

On some networks all internet access must go through a HTTP proxy.
Try setting it here if you can't connect to S3 directly
HTTP Proxy server name:

New settings:
  Access Key: 837H72BJ7KJ4ZO7Q7PJL
  Secret Key: GYgDMcqxFI68A5K10sWlA2GF9cknohFPqUb6499b
  Default Region: US
  S3 Endpoint: 10.10.10.47:7480
  DNS-style bucket+hostname:port template for accessing a bucket: test-bucket
  Encryption password:
  Path to GPG program: /usr/bin/gpg
  Use HTTPS protocol: False
  HTTP Proxy server name:
  HTTP Proxy server port: 0

Test access with supplied credentials? [Y/n] y
Please wait, attempting to list all buckets...
Success. Your access key and secret key worked fine :-)

Now verifying that encryption works...
Not configured. Never mind.

Save settings? [y/N] y
Configuration saved to '/root/.s3cfg'

Create a bucket:

s3cmd mb s3://test-bucket
Bucket 's3://test-bucket/' created

List buckets:

s3cmd ls
2020-06-20 14:54  s3://test-bucket

Upload a file to the bucket:

s3cmd put ceph.conf s3://test-bucket
upload: 'ceph.conf' -> 's3://test-bucket/ceph.conf'  [1 of 1]
 340 of 340   100% in    2s   163.23 B/s  done

List the files in the bucket:

s3cmd ls s3://test-bucket
2020-06-20 14:55          340  s3://test-bucket/ceph.conf
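
Downloading and deleting objects follow the same pattern; for example:

# Download an object from the bucket
s3cmd get s3://test-bucket/ceph.conf ceph.conf.bak
# Delete an object from the bucket
s3cmd del s3://test-bucket/ceph.conf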

For more s3cmd operations, run s3cmd -h or visit the official s3cmd website.

6. Ceph Dashboard

There are many options for visualizing and monitoring Ceph, such as Grafana and Kraken. Starting with Luminous, however, Ceph provides a native Dashboard feature, which exposes the basic status information of the cluster.

6.1 Configure the Dashboard

# Enable the dashboard mgr module
ceph mgr module enable dashboard

# Generate and install a self-signed certificate
ceph dashboard create-self-signed-cert

# Create a dashboard login username and password
ceph dashboard ac-user-create guest 1q2w3e4r administrator
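
The URL of the running dashboard can then be looked up from the mgr:

# Show the URLs served by enabled mgr modules (including the dashboard)
ceph mgr services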

6.2 Change the default settings

# Set the port the dashboard listens on
ceph config-key set mgr/dashboard/server_port 7000

# Set the IP address the dashboard binds to
ceph config-key set mgr/dashboard/server_addr $IP
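
These config-key settings take effect once the dashboard module is restarted; a minimal sketch:

# Restart the dashboard module so the new address and port are picked up
ceph mgr module disable dashboard
ceph mgr module enable dashboard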

6.3 Enable Object Gateway management

# Look up the rgw user's keys (the user created in section 5.2)
radosgw-admin user info --uid=admin

# Provide the user's credentials to the Dashboard
ceph dashboard set-rgw-api-access-key $access_key
ceph dashboard set-rgw-api-secret-key $secret_key

# Configure the rgw host and port
ceph dashboard set-rgw-api-host 10.10.10.47

For more details, see /usr/lib64/ceph/mgr/dashboard/README.rst.