For deploying GlusterFS itself, see: glusterfs

Kubernetes talks to heketi through a StorageClass, and heketi in turn manages the GlusterFS cluster.

1 heketi

1.1 Installation

1.1.1 Configure the yum repo and install

cat /etc/yum.repos.d/glusterfs.repo
[glusterfs]
name=glusterfs
baseurl=https://buildlogs.centos.org/centos/7/storage/x86_64/gluster-9/
enabled=1
gpgcheck=0
yum install -y heketi heketi-client

1.1.2 Configuration file

Usually only the SSH user name and the keys need to be changed.

cat /etc/heketi/heketi.json

{
  "_port_comment": "Heketi Server Port Number",
  "port": "8080",                  # externally exposed port

  "_use_auth": "Enable JWT authorization. Please enable for deployment",
  "use_auth": true,

  "_jwt": "Private keys for access",
  "jwt": {
    "_admin": "Admin has access to all APIs",
    "admin": {
      "key": "admin"               # admin key
    },
    "_user": "User only has access to /volumes endpoint",
    "user": {
      "key": "user"
    }
  },

  "_glusterfs_comment": "GlusterFS Configuration",
  "glusterfs": {
    "_executor_comment": [
      "Execute plugin. Possible choices: mock, ssh",
      "mock: This setting is used for testing and development.",
      "      It will not send commands to any node.",
      "ssh:  This setting will notify Heketi to ssh to the nodes.",
      "      It will need the values in sshexec to be configured.",
      "kubernetes: Communicate with GlusterFS containers over",
      "      Kubernetes exec api."
    ],
    "executor": "ssh",

    "_sshexec_comment": "SSH username and private key file information",
    "sshexec": {
      "keyfile": "/etc/heketi/heketi_key",
      "user": "ops",               # remote SSH user
      "port": "22",                # remote SSH port
      "sudo": true,                # use sudo on the remote host
      "fstab": "/etc/fstab"
    },

    "_db_comment": "Database file name",
    "db": "/var/lib/heketi/heketi.db",

    "_loglevel_comment": [
      "Set log level. Choices are:",
      "  none, critical, error, warning, info, debug",
      "Default is warning"
    ],
    "loglevel": "info"             # log level
  }
}
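Note that the inline # annotations above are explanatory only; heketi requires strict JSON, so they must not appear in the real file. A quick sanity check is to run the config through a JSON parser. A sketch using python3's stdlib json.tool (the /tmp path and the minimal sample content are illustrative, not the real config):

```shell
# Write a minimal heketi-style config (sample only) and check it parses
cat > /tmp/heketi-check.json <<'EOF'
{
  "port": "8080",
  "use_auth": true,
  "glusterfs": { "executor": "ssh", "loglevel": "info" }
}
EOF
# json.tool exits non-zero on any syntax error (e.g. stray # comments)
python3 -m json.tool /tmp/heketi-check.json > /dev/null && echo "valid JSON"
```

Running the same check against /etc/heketi/heketi.json before starting the service catches syntax errors early.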

1.1.3 Set up passwordless SSH

Note: the -p value below must match the port configured under sshexec in heketi.json.

cd /etc/heketi/
ssh-keygen -t rsa -q -f /etc/heketi/heketi_key -N ""
ssh-copy-id -i /etc/heketi/heketi_key.pub -p 9999 ops@10.1.1.1
ssh-copy-id -i /etc/heketi/heketi_key.pub -p 9999 ops@10.1.1.2
ssh-copy-id -i /etc/heketi/heketi_key.pub -p 9999 ops@10.1.1.3
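To sanity-check a generated keypair before copying it around, ssh-keygen can print its type and fingerprint. A sketch; a temp directory stands in for /etc/heketi so it runs unprivileged:

```shell
# Generate a throwaway keypair the same way as above and inspect it
dir=$(mktemp -d)
ssh-keygen -t rsa -q -f "$dir/heketi_key" -N ""
# -l prints bit length, fingerprint and key type (expect RSA)
ssh-keygen -l -f "$dir/heketi_key.pub"
rm -rf "$dir"
```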

1.1.4 Start the service

systemctl enable heketi
systemctl start heketi

1.1.5 Check and test

systemctl status heketi
curl 10.1.1.1:8080/hello
Hello from Heketi

1.2 Add the GlusterFS cluster

1.2.1 Add a dedicated disk for heketi

The device must not be formatted: heketi expects a raw block device and manages it (via LVM) itself.

dd if=/dev/zero of=gfs-data.img bs=1M count=10240
losetup -d /dev/loop10   # detach any stale mapping first; the error on a fresh host is harmless
losetup /dev/loop10 gfs-data.img

Check the virtual disk just created:

lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
loop10 7:10 0 10G 0 loop
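One caveat with this approach: losetup mappings do not survive a reboot, so the loop device must be re-attached at boot or the bricks on it will fail to come up. A minimal sketch, assuming the image file lives in /root and an rc.local-style boot hook is in use:

```shell
# /etc/rc.local (or an equivalent boot-time hook); the /root path is an
# assumption -- point it at wherever gfs-data.img was actually created
losetup /dev/loop10 /root/gfs-data.img
```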

1.2.2 Configure the GlusterFS cluster topology

  • The device paths must match exactly what lsblk showed above
  • Hostnames work as well; IP addresses are used here for simplicity
cat /etc/heketi/topology.json

{
  "clusters": [
    {
      "nodes": [
        {
          "node": {
            "hostnames": {
              "manage": [
                "10.1.1.1"
              ],
              "storage": [
                "10.1.1.1"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/loop10"
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "10.1.1.2"
              ],
              "storage": [
                "10.1.1.2"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/loop10"
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "10.1.1.3"
              ],
              "storage": [
                "10.1.1.3"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/loop10"
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "10.1.1.4"
              ],
              "storage": [
                "10.1.1.4"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/loop10"
          ]
        }
      ]
    }
  ]
}
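Before handing the file to heketi it can be inspected locally. A sketch that prints each node's manage address and devices; it writes a minimal one-node topology in the same shape to /tmp so it runs anywhere (swap in /etc/heketi/topology.json to check the real file):

```shell
# Minimal one-node topology in the same shape as the file above (sample only)
cat > /tmp/topology-check.json <<'EOF'
{"clusters": [{"nodes": [{"node": {"hostnames": {"manage": ["10.1.1.1"],
  "storage": ["10.1.1.1"]}, "zone": 1}, "devices": ["/dev/loop10"]}]}]}
EOF
# Walk clusters -> nodes and print each manage address plus its device list
python3 - /tmp/topology-check.json <<'EOF'
import json, sys
topo = json.load(open(sys.argv[1]))
for cluster in topo["clusters"]:
    for node in cluster["nodes"]:
        print(node["node"]["hostnames"]["manage"][0], *node["devices"])
EOF
```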

1.2.3 Load the topology into heketi

heketi-cli --user admin --secret admin topology load --json=/etc/heketi/topology.json
Found node 10.1.1.1 on cluster bd307c566431464e6d91578ff88724f8
Adding device /dev/loop10 ... OK
Found node 10.1.1.2 on cluster bd307c566431464e6d91578ff88724f8
Adding device /dev/loop10 ... OK
Found node 10.1.1.3 on cluster bd307c566431464e6d91578ff88724f8
Adding device /dev/loop10 ... OK
Found node 10.1.1.4 on cluster bd307c566431464e6d91578ff88724f8
Adding device /dev/loop10 ... OK

1.2.4 Create a test volume

heketi-cli --user admin --secret admin volume create --size=3

Name: vol_399d09a026f7e9faa4c112fa01c01f97
Size: 3
Volume Id: 399d09a026f7e9faa4c112fa01c01f97
Cluster Id: bd307c566431464e6d91578ff88724f8
Mount: 10.1.1.1:vol_399d09a026f7e9faa4c112fa01c01f97
Mount Options: backup-volfile-servers=10.1.1.2,10.1.1.3,10.1.1.4
Block: false
Free Size: 0
Reserved Size: 0
Block Hosting Restriction: (none)
Block Volumes: []
Durability Type: replicate
Distribute Count: 1
Replica Count: 3

1.2.5 Verify

1.2.5.1 glusterfs
gluster volume info

Volume Name: vol_399d09a026f7e9faa4c112fa01c01f97
Type: Replicate
Volume ID: 26c8ae49-00e4-4f1b-9388-e2e3ec5eabf9
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 10.1.1.3:/var/lib/heketi/mounts/vg_61f345a18c1093cac5abfc6a373a8716/brick_0826c144799419a265dae9759214bfa2/brick
Brick2: 10.1.1.1:/var/lib/heketi/mounts/vg_e5b0b51220b519ed7bfb553ac1073a1f/brick_163d78579056f4e38ca6e8a0f8467f03/brick
Brick3: 10.1.1.4:/var/lib/heketi/mounts/vg_034008384665cab8fb2fdb1477893424/brick_7266a0b5b52f60e8c363a2e17f3a6915/brick
Options Reconfigured:
user.heketi.id: 399d09a026f7e9faa4c112fa01c01f97
cluster.granular-entry-heal: on
storage.fips-mode-rchecksum: on
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off

If the volume shows up here, everything upstream worked; if it does not, check the heketi logs. The usual cause is that passwordless SSH was not set up correctly.

1.2.6 Delete a volume

heketi-cli --user admin --secret admin volume delete <volume-id>
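The volume id comes from heketi-cli volume list, which prints one line per volume in the form Id:<id> Cluster:<id> Name:<name> (format assumed here from the ids above; verify against your version). A sketch of pulling the id out with awk, run against a captured sample line:

```shell
# Sample line in heketi-cli volume list style (exact format is an assumption)
line='Id:399d09a026f7e9faa4c112fa01c01f97 Cluster:bd307c566431464e6d91578ff88724f8 Name:vol_399d09a026f7e9faa4c112fa01c01f97'
# Split on ':' and spaces; field 2 is then the volume id
vol_id=$(printf '%s\n' "$line" | awk -F'[: ]' '{print $2}')
echo "$vol_id"
```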

2 Kubernetes

2.1 Install the glusterfs client

Best run on all nodes (any node that may schedule a pod using a GlusterFS volume needs it):

yum install glusterfs-fuse -y

2.2 StorageClass

2.2.1 Look up the heketi Cluster Id

heketi-cli --user admin --secret admin topology info
Cluster Id: b122b2fa65f1e02648d0bc434deefc5e

File: true
Block: true

Volumes: ...........................

2.2.2 StorageClass manifest

cat gfs.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gluster-heketi-storageclass
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://10.1.1.1:8080"                  # heketi server address
  clusterid: "b122b2fa65f1e02648d0bc434deefc5e"    # the Cluster Id found above
  restauthenabled: "true"
  restuser: "admin"
  secretNamespace: "gfs"
  secretName: "heketi-secret"
  gidMin: "40000"
  gidMax: "50000"
  volumetype: "replicate:2"
---
apiVersion: v1
kind: Secret
metadata:
  name: heketi-secret
data:
  key: YWRtaW4=    # the admin key from heketi.json, base64-encoded
type: kubernetes.io/glusterfs
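The key value in the Secret is simply the heketi admin key run through base64 (an encoding, not encryption):

```shell
# base64-encode the admin key for the Secret's data.key field;
# printf avoids the trailing newline that plain echo would include
printf '%s' admin | base64   # -> YWRtaW4=
```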

Apply it:

kubectl apply -f gfs.yaml -n gfs

2.3 pvc

cat pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: gfs-pvc
spec:
  storageClassName: gluster-heketi-storageclass
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1G

kubectl apply -f pvc.yaml -n gfs

2.3.1 Verify

kubectl get pvc -n gfs
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
gfs-pvc Bound pvc-e3e5cfb6-25b1-429c-8e85-2bf93a5ae61f 1Gi RWX gluster-heketi-storageclass 4d17h

2.4 pod

cat pod.yaml
kind: Pod
apiVersion: v1
metadata:
  name: gfs-pod
  namespace: gfs
spec:
  containers:
    - name: gfs-pod
      image: nginx
      volumeMounts:
        - name: pvc
          mountPath: "/mnt"
  volumes:
    - name: pvc
      persistentVolumeClaim:
        claimName: gfs-pvc

2.5 Verify

2.5.1 pod

kubectl get pod -n gfs
NAME READY STATUS RESTARTS AGE
gfs-pod 1/1 Running 0 4d17h

2.5.2 Glusterfs

gluster volume info

Volume Name: vol_3373ccb6e612ae94f39d56a8b55c5068
Type: Replicate
Volume ID: 751b7641-443f-4207-875a-4d42f13abfef
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
............

3 Static binding

3.1 Create a volume in glusterfs

gluster volume create kube_vol 10.1.1.1:/data/gfs2_data 10.1.1.2:/data/gfs2_data 10.1.1.3:/data/gfs2_data

Without a replica count this creates a plain distributed volume across the three bricks; add "replica 3" after the volume name if replicated storage is wanted.

3.1.1 Start the volume

gluster volume start kube_vol

3.2 Configure Kubernetes access to the glusterfs cluster

cat endpoints.yaml

apiVersion: v1
kind: Endpoints
metadata:
  name: glusterfs-cluster
subsets:
  - addresses:
      - ip: 10.1.1.1
    ports:
      - port: 20
  - addresses:
      - ip: 10.1.1.2
    ports:
      - port: 20
  - addresses:
      - ip: 10.1.1.3
    ports:
      - port: 20
  - addresses:
      - ip: 10.1.1.4
    ports:
      - port: 20
---
apiVersion: v1
kind: Service
metadata:
  name: glusterfs-cluster
spec:
  ports:
    - port: 20

The port number is required by the API but not used for the glusterfs mount; any value from 1 to 65535 works. The Service exists only so that the Endpoints object persists.

3.3 Create the pv

apiVersion: v1
kind: PersistentVolume
metadata:
  name: gfs-pv
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 1Gi
  glusterfs:
    endpoints: glusterfs-cluster
    path: kube_vol
    readOnly: true

3.4 Create a pvc bound to the pv

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gfs-pvc2
spec:
  volumeName: gfs-pv
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi

3.5 Verify

kubectl get pvc gfs-pvc2 -n gfs
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
gfs-pvc2 Bound gfs-pv 1Gi RWO 19m

4 Troubleshooting

4.1 mount: unknown filesystem type 'glusterfs'

Install the glusterfs client:

yum install glusterfs-fuse -y

4.2 Topology load fails when adding the cluster

The disks must be raw, unformatted block devices; heketi rejects a device that already carries a filesystem or partition signature.

4.3 Pod cannot mount the pvc

Check that the GlusterFS volume is healthy; it may simply not be started. Verify with gluster volume status.