I. Installation

1 Download the code

git clone --single-branch --branch v1.10.0 https://github.com/rook/rook.git

2 Modify the configuration files

2.1 operator.yaml

# Override the image addresses (use registries reachable from your cluster)
ROOK_CSI_CEPH_IMAGE: "quay.io/cephcsi/cephcsi:v3.6.2"
ROOK_CSI_REGISTRAR_IMAGE: "ccr.ccs.tencentyun.com/ccops/csi-node-driver-registrar:v2.5.1"
ROOK_CSI_RESIZER_IMAGE: "ccr.ccs.tencentyun.com/ccops/csi-resizer:v1.4.0"
ROOK_CSI_PROVISIONER_IMAGE: "ccr.ccs.tencentyun.com/ccops/csi-provisioner:v3.1.1"
ROOK_CSI_SNAPSHOTTER_IMAGE: "ccr.ccs.tencentyun.com/ccops/csi-snapshotter:v6.0.1"
ROOK_CSI_ATTACHER_IMAGE: "ccr.ccs.tencentyun.com/ccops/csi-attacher:v3.4.0"
ROOK_CSI_NFS_IMAGE: "ccr.ccs.tencentyun.com/ccops/nfsplugin:v4.0.0"
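
These keys live in the rook-ceph-operator-config ConfigMap inside operator.yaml; a hedged sanity check once the operator is running (adjust the namespace if yours differs):

kubectl -n rook-ceph get cm rook-ceph-operator-config -o yaml | grep ROOK_CSI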

2.2 cluster.yaml

Only a minimal configuration is done here; see the official documentation for the full set of options.

# Ceph configuration override
kind: ConfigMap
apiVersion: v1
metadata:
  name: rook-config-override
  namespace: rook-ceph # namespace:cluster
data:
  config: |
    [global]
    osd_pool_default_size = 1 # normally 3 (production) or 1 (testing)
    mon_warn_on_pool_no_redundancy = false # silence the warning about pools without redundancy
    bdev_flock_retry = 20
    bluefs_buffered_io = false
---
# The snippets below are the parts of the CephCluster spec in cluster.yaml that were changed.
# Monitoring: enable only after the monitoring stack is installed, otherwise it reports errors
monitoring:
  enabled: false
# mgr settings
mgr:
  count: 2
  allowMultiplePerNode: false
  modules:
    - name: pg_autoscaler
      enabled: true
# dashboard settings
dashboard:
  enabled: true
  port: 8080 # custom port
  ssl: false # disable https
# enable host networking; watch out for port conflicts
network:
  provider: host
# storage settings
storage:
  nodes:
    - name: "10.1.1.1"
      devices:
        - name: "sdb"
    - name: "10.1.1.2"
      devices:
        - name: "sdb"
    - name: "10.1.1.3"
      devices:
        - name: "sdb"
# scheduling constraints (label the nodes in advance)
placement:
  mgr:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: role
                operator: In
                values:
                  - rook-mgr-node
  osd:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: role
                operator: In
                values:
                  - rook-osd-node
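
The placement rules above only match nodes that actually carry the role label; a hedged example of labeling them (the node names are placeholders, substitute your own):

kubectl label node k8s-ceph-100 role=rook-mgr-node
kubectl label node k8s-ceph-136 role=rook-osd-node
kubectl label node k8s-ceph-142 role=rook-osd-node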

3 Deployment

3.1 rook-operator

# cd rook-1.9.10/deploy/examples
# kubectl create -f crds.yaml -f common.yaml -f operator.yaml
# kubectl -n rook-ceph get pod
NAME READY STATUS RESTARTS AGE
rook-ceph-operator-6497755589-vs5rt 1/1 Running 0 2d23h

3.2 Monitoring

Reference:

Because we already run our own Prometheus Operator, and the Operator that ships with Rook Ceph cannot coexist with it, we cannot integrate into our own Prometheus; instead we deploy a separate Prometheus for Rook Ceph.

3.3 Deploy Prometheus

wget https://raw.githubusercontent.com/coreos/prometheus-operator/v0.40.0/bundle.yaml
# modify the following:
subjects:
  - kind: ServiceAccount
    name: prometheus-operator
    namespace: kube-system # the namespace to deploy into; this matters, the default is "default"
cd rook/deploy/examples/monitoring
kubectl create -f service-monitor.yaml
kubectl create -f prometheus.yaml
kubectl create -f prometheus-service.yaml
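
A hedged check that the monitoring objects were created (the resource names come from Rook's example manifests):

kubectl -n rook-ceph get prometheus,servicemonitor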

3.3.1 Configure Grafana

Dashboard template URLs: import them all; there is no such thing as too much monitoring.

3.3.1.1 Install plugins
grafana-cli plugins install grafana-piechart-panel
grafana-cli plugins install vonage-status-panel
3.3.1.2 Configure Grafana

Modify both defaults.ini and grafana.ini:

[alerting]
execute_alerts = true
allow_embedding = true

[auth.anonymous]
enabled = true
org_name = Main Org.
org_role = Viewer
3.3.1.3 Configure the data source

Do not change the data source name.

3.3.2 Integrate Grafana into the Ceph Dashboard

kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- bash
# grafana url: https://xx.com:3000/
ceph dashboard set-grafana-api-url https://xx.com:3000/
# allow self-signed certificates
ceph dashboard set-grafana-api-ssl-verify False

3.3.3 Verify

This step shows the result you check after the later steps are done.

3.4 CephCluster

The following pods indicate a normal deployment:

# kubectl create -f cluster.yaml
# kubectl -n rook-ceph get pod
NAME READY STATUS RESTARTS AGE
csi-cephfsplugin-7tqw8 2/2 Running 0 2d23h
csi-cephfsplugin-k22lg 2/2 Running 0 2d23h
csi-cephfsplugin-m2crv 2/2 Running 0 2d23h
csi-cephfsplugin-provisioner-5dbbcdf76-99tx5 5/5 Running 0 2d23h
csi-cephfsplugin-provisioner-5dbbcdf76-bqmz9 5/5 Running 0 2d23h
csi-rbdplugin-bdlwq 2/2 Running 0 2d23h
csi-rbdplugin-dsk5w 2/2 Running 0 2d23h
csi-rbdplugin-provisioner-7c5c5b699-9bzcq 5/5 Running 0 2d23h
csi-rbdplugin-provisioner-7c5c5b699-jwsm9 5/5 Running 0 2d23h
csi-rbdplugin-tljkh 2/2 Running 0 2d23h
rook-ceph-crashcollector-k8s-ceph-100-6b966fbfdd-hq2cf 1/1 Running 0 2d23h
rook-ceph-crashcollector-k8s-ceph-136-6b76f4c697-trxlf 1/1 Running 0 2d23h
rook-ceph-crashcollector-k8s-ceph-142-696bcc5974-8tvjf 1/1 Running 0 2d23h
rook-ceph-mgr-a-7cd79f499c-lqfgc 2/2 Running 0 2d17h
rook-ceph-mgr-b-5b5746f69f-wfd22 2/2 Running 0 2d17h
rook-ceph-mon-a-845779bff7-bz8gz 1/1 Running 0 2d23h
rook-ceph-mon-b-54dbc6c98d-m7psh 1/1 Running 0 2d23h
rook-ceph-mon-c-b7bcb5764-cjhvc 1/1 Running 0 2d23h
rook-ceph-operator-6497755589-vs5rt 1/1 Running 0 2d23h
rook-ceph-osd-0-5c69ff6cb4-g428b 1/1 Running 0 2d23h
rook-ceph-osd-1-ff44fddf7-pczt4 1/1 Running 0 2d23h
rook-ceph-osd-2-5868fd8dbd-jq4d4 1/1 Running 0 2d23h
rook-ceph-osd-prepare-k8s-ceph-100-kqtg8 0/1 Completed 0 2d19h
rook-ceph-osd-prepare-k8s-ceph-136-2w9lh 0/1 Completed 0 2d19h
rook-ceph-osd-prepare-k8s-ceph-142-8ljp5 0/1 Completed 0 2d19h
rook-ceph-tools-556cf9bd64-2h5kr 1/1 Running 0 2d23h

3.5 View details

kubectl describe CephCluster -n rook-ceph rook-ceph
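
If you only want the health field rather than the full describe output, a one-liner like this (a convenience sketch, not from the original) does it:

kubectl -n rook-ceph get cephcluster rook-ceph -o jsonpath='{.status.ceph.health}'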

4 Toolbox

kubectl create -f toolbox.yaml
kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- bash
ceph status
  cluster:
    id:     6aa56874-3c30-4c29-abd2-5e4a1363f03c
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum a,b,c (age 2d)
    mgr: a(active, since 2d), standbys: b
    osd: 3 osds: 3 up (since 2d), 3 in (since 2d)

  data:
    pools:   3 pools, 65 pgs
    objects: 17 objects, 86 B
    usage:   80 MiB used, 600 GiB / 600 GiB avail
    pgs:     65 active+clean

5 dashboard

# ls dashboard-*
dashboard-external-https.yaml dashboard-external-http.yaml dashboard-ingress-https.yaml dashboard-loadbalancer.yaml
# Pick the one that fits your needs; here dashboard-external-http.yaml is used. Remember to adjust the port (I changed it to 8080).
# kubectl apply -f dashboard-external-http.yaml
# kubectl -n rook-ceph get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
rook-ceph-mgr ClusterIP 10.43.145.9 <none> 9283/TCP 2d23h
rook-ceph-mgr-dashboard ClusterIP 10.43.137.127 <none> 7000/TCP 2d17h
rook-ceph-mgr-dashboard-external-http NodePort 10.43.249.105 <none> 7000:32041/TCP 2d19h

5.1 Access

Username: admin
Password: kubectl -n rook-ceph get secret rook-ceph-dashboard-password -o jsonpath="{['data']['password']}" | base64 --decode && echo
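
The dashboard itself is reached at http://<node-ip>:<nodeport> of the NodePort service created in section 5; a hedged one-liner to look up the port:

kubectl -n rook-ceph get svc rook-ceph-mgr-dashboard-external-http -o jsonpath='{.spec.ports[0].nodePort}'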

II. Usage

1 Block storage: RBD

1.1 Create a pool

apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: ccpool
  namespace: rook-ceph # namespace:cluster
spec:
  enableRBDStats: true
  failureDomain: osd
  replicated:
    size: 3 # three replicas; needed for the HA tests later
# kubectl apply -f pool.yaml
# kubectl get CephBlockPool -n rook-ceph
NAME PHASE
ccpool Ready

1.2 Check

# kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- bash
[rook@rook-ceph-tools-556cf9bd64-2h5kr /]$ ceph df
--- RAW STORAGE ---
CLASS SIZE AVAIL USED RAW USED %RAW USED
hdd 600 GiB 600 GiB 82 MiB 82 MiB 0.01
TOTAL 600 GiB 600 GiB 82 MiB 82 MiB 0.01

--- POOLS ---
POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL
ccpool 6 32 19 B 1 4 KiB 0 570 GiB

1.3 Use from an external cluster

For CSI deployment and snapshot usage, refer to:

apiVersion: v1
kind: Secret
metadata:
  name: csi-rbd-secret
  namespace: kube-system
stringData:
  userID: admin
  userKey: AQA3exFjJIhtBBAAujNu5Xvc4o4DH7vY7JO9lg==
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-rbd-sc
provisioner: rbd.csi.ceph.com
parameters:
  clusterID: 6aa56874-3c30-4c29-abd2-5e4a1363f03c
  pool: ccpool
  imageFeatures: layering
  csi.storage.k8s.io/provisioner-secret-name: csi-rbd-secret
  csi.storage.k8s.io/provisioner-secret-namespace: kube-system
  csi.storage.k8s.io/controller-expand-secret-name: csi-rbd-secret
  csi.storage.k8s.io/controller-expand-secret-namespace: kube-system
  csi.storage.k8s.io/node-stage-secret-name: csi-rbd-secret
  csi.storage.k8s.io/node-stage-secret-namespace: kube-system
  csi.storage.k8s.io/fstype: ext4
reclaimPolicy: Delete
volumeBindingMode: Immediate
allowVolumeExpansion: true
mountOptions:
  - discard
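
The clusterID and userKey above come from the Ceph cluster itself; hedged helpers for retrieving them from the toolbox:

# clusterID is the Ceph fsid
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph fsid
# userKey is the key of client.admin (any sufficiently privileged user works)
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph auth get-key client.admin
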
# cat pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-rbd-sc
# kubectl apply -f pvc.yaml
persistentvolumeclaim/rbd-pvc created
# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
rbd-pvc Bound pvc-5b72e56c-8f06-47dc-a00e-4a67b0c1fbd0 1Gi RWO csi-rbd-sc 11s

1.4 Check on the server side

# rbd ls -p ccpool
csi-vol-d6ff6220-2cc8-11ed-a287-0a7be4a0ba4b

1.5 Manual mount test

rbd feature disable ccpool/csi-vol-d6ff6220-2cc8-11ed-a287-0a7be4a0ba4b object-map fast-diff deep-flatten
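
The command above only disables the image features the kernel RBD client cannot handle; a minimal sketch of the remaining manual-mount steps, assuming the node has ceph.conf plus an admin keyring and the image is not in use elsewhere (device and mount point are examples):

rbd map ccpool/csi-vol-d6ff6220-2cc8-11ed-a287-0a7be4a0ba4b   # prints the mapped device, e.g. /dev/rbd0
mkfs.ext4 /dev/rbd0   # only on a fresh, empty image; this wipes data
mount /dev/rbd0 /tmp/test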

2 File storage: CephFS

2.1 Create a pool

apiVersion: ceph.rook.io/v1
kind: CephFilesystem
metadata:
  name: ccfs
  namespace: rook-ceph
spec:
  metadataPool: # 3-replica metadata pool
    replicated:
      size: 3
  dataPools: # 3-replica data pool
    - name: replicated
      replicated:
        size: 3
  preserveFilesystemOnDelete: true
  metadataServer:
    activeCount: 1
    activeStandby: true
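
Once the filesystem is created the operator starts the MDS pods; a hedged check:

kubectl -n rook-ceph get pod -l app=rook-ceph-mds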

2.2 Check

# kubectl get CephFilesystem -n rook-ceph
NAME ACTIVEMDS AGE PHASE
ccfs 1 62s Ready
# ceph df
--- RAW STORAGE ---
CLASS SIZE AVAIL USED RAW USED %RAW USED
hdd 600 GiB 600 GiB 162 MiB 162 MiB 0.03
TOTAL 600 GiB 600 GiB 162 MiB 162 MiB 0.03

--- POOLS ---
POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL
ccfs-metadata 7 32 90 KiB 24 372 KiB 0 190 GiB
ccfs-replicated 8 32 0 B 0 0 B 0 190 GiB

# list filesystems
ceph fs ls
name: ccfs, metadata pool: ccfs-metadata, data pools: [ccfs-replicated ]
# detailed info
ceph fs dump
e8
enable_multiple, ever_enabled_multiple: 1,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
legacy client fscid: 1

Filesystem 'ccfs' (1)
fs_name ccfs
epoch   8
flags   32 joinable allow_snaps allow_multimds_snaps allow_standby_replay
created 2023-06-09T05:37:28.081181+0000
modified        2023-06-09T05:37:30.036108+0000
tableserver     0
root    0
session_timeout 60
session_autoclose       300
max_file_size   1099511627776
required_client_features        {}
last_failure    0
last_failure_osd_epoch  0
compat  compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
max_mds 1
in      0
up      {0=15684943}
failed
damaged
stopped
data_pools      [5]
metadata_pool   4
inline_data     disabled
balancer
standby_count_wanted    1

2.3 Use from an external cluster

For CSI deployment, refer to:

apiVersion: v1
kind: Secret
metadata:
  name: csi-cephfs-secret
  namespace: cephfs
stringData:
  adminID: admin
  adminKey: AQA3exFjJIhtBBAAujNu5Xvc4o4DH7vY7JO9lg==
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-cephfs-sc
provisioner: cephfs.csi.ceph.com
parameters:
  # clusterID of the Ceph cluster
  clusterID: 6aa56874-3c30-4c29-abd2-5e4a1363f03c
  # the pool Ceph exposes to Kubernetes
  pool: ccfs-replicated
  # the CephFS filesystem exposed to Kubernetes
  fsName: ccfs # the name from the manifest above
  csi.storage.k8s.io/provisioner-secret-name: csi-cephfs-secret
  csi.storage.k8s.io/provisioner-secret-namespace: cephfs
  csi.storage.k8s.io/controller-expand-secret-name: csi-cephfs-secret
  csi.storage.k8s.io/controller-expand-secret-namespace: cephfs
  csi.storage.k8s.io/node-stage-secret-name: csi-cephfs-secret
  csi.storage.k8s.io/node-stage-secret-namespace: cephfs
reclaimPolicy: Delete
mountOptions:
  - discard
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-cephfs-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1M
  storageClassName: csi-cephfs-sc

# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
csi-cephfs-pvc Bound pvc-4c5eebcf-a9d6-4ea8-8251-ea0f41b8aad3 1Mi RWO csi-cephfs-sc 1s

2.4 Check on the server side
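
A hedged way to see the subvolume backing the PVC from the toolbox (the subvolume group name "csi" is the ceph-csi default):

kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph fs subvolume ls ccfs --group_name csi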

2.5 Manual mount test

# mount -t ceph 10.1.1.1:6789,10.1.1.2:6789,10.1.1.3:6789:/volumes/csi -o name=admin,secret=xxxxxx9iMd3rnzJQHaMLkw== /tmp/test
# cd /tmp/test2/
# ls
csi-vol-0715c282-2ce5-11ed-97df-ea43b6d16965

3 Object storage: S3

Pay attention to the node labels.

3.1 Create a zone

apiVersion: ceph.rook.io/v1
kind: CephObjectRealm
metadata:
  name: realm-cc
  namespace: rook-ceph # namespace:cluster
---
apiVersion: ceph.rook.io/v1
kind: CephObjectZoneGroup
metadata:
  name: zonegroup-cc
  namespace: rook-ceph # namespace:cluster
spec:
  realm: realm-cc
---
apiVersion: ceph.rook.io/v1
kind: CephObjectZone
metadata:
  name: zone-cc
  namespace: rook-ceph # namespace:cluster
spec:
  zoneGroup: zonegroup-cc
  metadataPool:
    failureDomain: host
    replicated:
      size: 3
      requireSafeReplicaSize: true
  dataPool:
    failureDomain: host
    replicated:
      size: 3
      requireSafeReplicaSize: true
    parameters:
      compression_mode: none

3.2 Create a pool

apiVersion: ceph.rook.io/v1
kind: CephObjectStore
metadata:
  name: ccs3-store
  namespace: rook-ceph
spec:
  metadataPool:
    failureDomain: host
    replicated:
      size: 3
  dataPool:
    failureDomain: host
    erasureCoded:
      dataChunks: 2
      codingChunks: 1
  preservePoolsOnDelete: true
  gateway:
    port: 888 # exposed port
    hostNetwork: true # host networking
    instances: 1 # number of RGW instances
    annotations:
    placement: # affinity
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
            - matchExpressions:
                - key: role
                  operator: In
                  values:
                    - rgw-node
      # tolerations:
      # - key: rgw-node
      #   operator: Exists
      # podAffinity:
      # podAntiAffinity:
      # topologySpreadConstraints:
    resources:
      limits:
        cpu: "500m"
        memory: "1024Mi"
      requests:
        cpu: "500m"
        memory: "1024Mi"
  zone:
    name: zone-cc
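
After the store is created the operator should start an RGW pod on the labeled node; a hedged check:

kubectl -n rook-ceph get pod -l app=rook-ceph-rgw -o wide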

3.3 User account

apiVersion: ceph.rook.io/v1
kind: CephObjectStoreUser
metadata:
  name: cc-user # user name
  namespace: rook-ceph
spec:
  store: ccs3-store # must match the name of the object store CRD
  displayName: cc-display-name # display name
  quotas:
    maxBuckets: 100 # maximum number of buckets
    maxSize: 10G # size quota
    maxObjects: 10000 # maximum number of objects per bucket
  capabilities:
    user: "*"
    bucket: "*"

3.4 Check

# kubectl get CephObjectZone -A
NAMESPACE NAME PHASE
rook-ceph zone-cc Ready
# kubectl get CephObjectStore -A
NAMESPACE NAME PHASE
rook-ceph ccs3-store Connected
# kubectl get cephobjectstoreuser -A
NAMESPACE NAME PHASE
rook-ceph cc-user Ready
# kubectl -n rook-ceph get secret |grep ccs3-store-cc-user
# if this secret is missing, something is wrong with the user created above
rook-ceph-object-user-ccs3-store-cc-user kubernetes.io/rook 3 4m55s
# using the secret found in the previous step
# AccessKey
kubectl -n rook-ceph get secret rook-ceph-object-user-ccs3-store-cc-user -o jsonpath='{.data.AccessKey}' | base64 --decode
# SecretKey
kubectl -n rook-ceph get secret rook-ceph-object-user-ccs3-store-cc-user -o jsonpath='{.data.SecretKey}' | base64 --decode

3.5 Use from an external cluster

For CSI deployment, refer to:

apiVersion: v1
kind: Secret
metadata:
  namespace: minio
  name: csi-s3-secret
stringData:
  accessKeyID: "CXJ2R8NNGQE6A2Y9BQXB"
  secretAccessKey: "oPcVD6XOVtBmhy6WxeAEZAwlRdt44Pv1C7YpyBDL"
  endpoint: http://10.1.1.1:888 # host networking is used, so node IP plus port is enough
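
The StorageClass and PVC for csi-s3 are not shown here, and their parameters depend on which csi-s3 driver you deployed; the PVC side is plain Kubernetes, so here is a minimal sketch matching the names in the output below (the csi-s3 StorageClass is assumed to exist already):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-s3-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: csi-s3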

3.5.1 Check

kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
csi-s3-pvc Bound pvc-e2c83416-0c15-4a6f-88ea-8352f4ab77fc 5Gi RWO csi-s3 4m1s

On the server side a bucket named after the PV already exists, which means this user can create at most 100 PVCs (the maxBuckets quota).

3.6 Manual creation test

3.6.1 Create a bucket

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-bucket
provisioner: rook-ceph.ceph.rook.io/bucket
reclaimPolicy: Delete
parameters:
  objectStoreName: ccs3-store # must match the name of the object store CRD
  objectStoreNamespace: rook-ceph
---
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: ceph-bucket
spec:
  generateBucketName: ceph-bkt
  storageClassName: rook-ceph-bucket

3.6.2 Check

Use -o yaml to check the status; if the ConfigMap was not created automatically, the claim is usually stuck in Pending.

# kubectl get ObjectBucketClaim
NAME AGE
cc-bucket 48s

# bucket name
kubectl -n default get cm cc-bucket -o jsonpath='{.data.BUCKET_NAME}'
# AccessKey
kubectl -n default get secret cc-bucket -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 --decode
# SecretKey
kubectl -n default get secret cc-bucket -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 --decode

3.6.3 Access test
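
A hedged access test with the AWS CLI, using the credentials from the bucket claim above (the endpoint is the host-network RGW address from earlier; adjust namespaces to your environment):

export AWS_ACCESS_KEY_ID=$(kubectl -n default get secret cc-bucket -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 --decode)
export AWS_SECRET_ACCESS_KEY=$(kubectl -n default get secret cc-bucket -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 --decode)
BUCKET=$(kubectl -n default get cm cc-bucket -o jsonpath='{.data.BUCKET_NAME}')
aws --endpoint-url http://10.1.1.1:888 s3 ls s3://$BUCKET
aws --endpoint-url http://10.1.1.1:888 s3 cp /etc/hostname s3://$BUCKET/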

4 NFS

Pay attention to the node labels.

4.1 Create a pool

apiVersion: ceph.rook.io/v1
kind: CephNFS
metadata:
  name: cc-nfs
  namespace: rook-ceph
spec:
  server:
    active: 1 # multiple instances do not provide high availability
    placement: # affinity
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
            - matchExpressions:
                - key: role
                  operator: In
                  values:
                    - nfs-node
      topologySpreadConstraints:
      tolerations:
        - key: nfs-node
          operator: Exists
      podAffinity:
      podAntiAffinity:
    annotations:
      cc-annotation: something
    labels:
      cc-label: something
    resources:
      limits:
        cpu: "2"
        memory: "4Gi"
      requests:
        cpu: "2"
        memory: "4Gi"
    priorityClassName: ""
    logLevel: NIV_INFO # log level
---
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: cc-nfs
  namespace: rook-ceph # namespace:cluster
spec:
  name: .nfs
  failureDomain: osd
  replicated:
    size: 1
    requireSafeReplicaSize: false
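
A hedged check that the ganesha server came up and the NFS cluster is registered:

kubectl -n rook-ceph get pod -l app=rook-ceph-nfs
kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph nfs cluster ls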

4.2 Expose externally

apiVersion: v1
kind: Service
metadata:
  name: rook-ceph-nfs-my-nfs-load-balancer
  namespace: rook-ceph
spec:
  ports:
    - name: nfs
      port: 2049
  externalIPs:
    - 10.1.1.2 # node IP
  type: ClusterIP
  selector:
    app: rook-ceph-nfs
    ceph_nfs: cc-nfs
    instance: a

4.3 Enable the Ceph orchestrator

ceph mgr module enable rook
ceph mgr module enable nfs
ceph orch set backend rook
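
A hedged verification that both modules are loaded and the rook backend is active:

ceph mgr module ls | grep -E 'rook|nfs'
ceph orch status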

4.4 Manually create the export directory

# ceph nfs export create cephfs cc-nfs /test ccfs
{
  "bind": "/test",
  "fs": "ccfs",
  "path": "/",
  "cluster": "cc-nfs",
  "mode": "RW"
}
# ceph nfs export ls cc-nfs
[
  "/test"
]

4.4.1 Disable the Ceph orchestrator

ceph orch set backend ""
ceph mgr module disable rook

4.5 Check

Only NFS protocol version 4 over TCP is supported.

ceph nfs export info cc-nfs /test
{
  "export_id": 1,
  "path": "/",
  "cluster_id": "cc-nfs",
  "pseudo": "/test",
  "access_type": "RW",
  "squash": "none",
  "security_label": true,
  "protocols": [
    4
  ],
  "transports": [
    "TCP"
  ],
  "fsal": {
    "name": "CEPH",
    "user_id": "nfs.cc-nfs.1",
    "fs_name": "ccfs"
  },
  "clients": []
}

4.6 Use from an external cluster

For CSI deployment, refer to:

......
spec:
  serviceAccountName: nfs-client-provisioner
  containers:
    - name: nfs-client-provisioner
      image: willdockerhub/nfs-subdir-external-provisioner:v4.0.2
      volumeMounts:
        - name: nfs-client-root
          mountPath: /persistentvolumes
      env:
        - name: PROVISIONER_NAME
          value: k8s-sigs.io/nfs-subdir-external-provisioner
        - name: NFS_SERVER
          value: 10.122.148.142 # the address exposed above
        - name: NFS_PATH
          value: /test
  volumes:
    - name: nfs-client-root
      nfs:
        server: 10.122.148.142 # the address exposed above
        path: /test # the export path created above
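
The snippet above only shows the provisioner Deployment; a hedged sketch of the StorageClass and PVC behind the nfs-client/nfs-pvc objects referenced below (archiveOnDelete is a standard nfs-subdir-external-provisioner parameter):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-client
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
parameters:
  archiveOnDelete: "false"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs-client
  resources:
    requests:
      storage: 1Mi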

4.6.1 Check

4.6.1.1 pvc
kubectl get pvc nfs-pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
nfs-pvc Bound pvc-1d2b8446-974e-4fff-9e85-387fecd7c564 1Mi RWX nfs-client 5s
4.6.1.2 Server side
# cd /tmp/test2/
# ll
total 1
drwxrwxrwx 2 nobody nobody 0 Sep 6 13:22 default-nfs-pvc-pvc-1d2b8446-974e-4fff-9e85-387fecd7c564
drwxr-xr-x 4 nobody nobody 158 Sep 5 14:36 volumes

4.7 Manual mount test

# mount -t nfs 10.122.148.142:/test /tmp/test/
# df -h |grep 142
10.122.148.142:/test 190G 0 190G 0% /tmp/test

III. Summary

This article gave a quick walkthrough of deploying and managing a Ceph cluster with Rook, how to consume it as block storage, file storage, object storage, and NFS, and how an external cluster can use this cluster's storage.

1 An overall look at the pod mounts

# kubectl get pod
NAME READY STATUS RESTARTS AGE
cephfs-pod 1/1 Running 0 22h
nfs-pod 1/1 Running 0 3m6s
rbd-pod 1/1 Running 0 23h
s3-pod 1/1 Running 0 12m