1 Preparation

1.1 Create a virtual disk

Do not paste and run these commands in bulk; execute them one step at a time.

dd if=/dev/zero of=local_data.img bs=1M count=4096        # create a 4 GiB image file filled with zeros
du -h local_data.img                                      # confirm the file size
mkfs.ext4 local_data.img                                  # format the image with an ext4 filesystem
mkdir -p /data/local_data
mount -o loop -t ext4 local_data.img /data/local_data/    # loop-mount the image to verify it is usable
df -h
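
The loop device number assigned to local_data.img depends on what is already attached on the node; the rest of this walkthrough assumes it ends up as /dev/loop4, so it is worth confirming before moving on:

losetup -a | grep local_data.img   # shows which /dev/loopN is backing the image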

2 Static provisioner

2.1 Check the virtual disk just created

lsblk |grep local_data
loop4 7:4 0 4G 0 loop /data/local_data

2.2 Configure the static provisioner's discovery directory

mkdir /mnt/disks               # discovery directory the provisioner will scan
ln -s /dev/loop4 /mnt/disks    # expose the loop device inside the discovery directory
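
As a quick sanity check (the loop device name may differ on your node), confirm the symlink the provisioner is meant to discover is in place:

ls -l /mnt/disks   # expect a symlink such as loop4 -> /dev/loop4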

2.3 Deploy the static provisioner

cat csi.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: local-provisioner-config
  namespace: local-csi
data:
  storageClassMap: |
    # fast-disks is the name of the StorageClass the discovered PVs are registered under
    fast-disks:
      # discovery directory
      hostDir: /mnt/disks
      mountDir: /mnt/disks
      blockCleanerCommand:
      - "/scripts/shred.sh"
      - "2"
      volumeMode: Block # PVs created from this directory will be raw block volumes
      fsType: ext4
      namePattern: "*"
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: local-volume-provisioner
  namespace: local-csi
  labels:
    app: local-volume-provisioner
spec:
  selector:
    matchLabels:
      app: local-volume-provisioner
  template:
    metadata:
      labels:
        app: local-volume-provisioner
    spec:
      serviceAccountName: local-storage-admin
      containers:
      - image: "dyrnq/local-volume-provisioner:v2.4.0"
        imagePullPolicy: "IfNotPresent"
        name: provisioner
        securityContext:
          privileged: true
        env:
        - name: MY_NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        volumeMounts:
        - mountPath: /etc/provisioner/config
          name: provisioner-config
          readOnly: true
        - mountPath: /mnt/disks # discovery directory of the static provisioner
          name: disks
          mountPropagation: "HostToContainer"
      volumes:
      - name: provisioner-config
        configMap:
          name: local-provisioner-config
      - name: disks
        hostPath:
          path: /mnt/disks # discovery directory of the static provisioner

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: local-storage-admin
  namespace: local-csi

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: local-storage-provisioner-pv-binding
subjects:
- kind: ServiceAccount
  name: local-storage-admin
  namespace: local-csi
roleRef:
  kind: ClusterRole
  name: system:persistent-volume-provisioner
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: local-storage-provisioner-node-clusterrole
rules:
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: local-storage-provisioner-node-binding
subjects:
- kind: ServiceAccount
  name: local-storage-admin
  namespace: local-csi
roleRef:
  kind: ClusterRole
  name: local-storage-provisioner-node-clusterrole
  apiGroup: rbac.authorization.k8s.io
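
All of the resources above target the local-csi namespace. If it does not exist yet (this walkthrough assumes it has not been created), create it before applying the manifest:

kubectl create namespace local-csi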
kubectl apply -f csi.yaml -n local-csi
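
To confirm the provisioner is up, check that its DaemonSet pod is running (pod names will differ):

kubectl get daemonset,pods -n local-csi -l app=local-volume-provisioner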

3 Create a StorageClass

cat sc.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-sc
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: Immediate
kubectl apply -f sc.yaml -n local-csi
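
Verify that the class was registered:

kubectl get sc local-sc

Note that the upstream local-volume documentation generally recommends volumeBindingMode: WaitForFirstConsumer for local PVs, so that scheduling takes the PV's node affinity into account; Immediate is good enough for a single-node walkthrough like this one.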

4 Verification

4.1 Check the PVs

The static provisioner automatically creates a PV for every device file it finds in the discovery directory; see the inspection command after the listing below.

kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS AGE
local-pv-xxx0b1 4Gi RWO Delete Bound local-sc 21h
local-pv-xxx79f 4Gi RWO Delete Bound local-sc 21h
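
To see how such a PV is wired up, dump one of them (substituting a real PV name for the placeholder below) and look for the local.path field, which should point at the symlink under the /mnt/disks discovery directory, along with the nodeAffinity that pins the volume to the node owning that disk:

kubectl get pv local-pv-xxx0b1 -o yaml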

4.2 Create a PVC

cat pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-pvc
spec:
  accessModes:
  - ReadWriteOnce
  volumeMode: Block
  storageClassName: local-sc
  resources:
    requests:
      storage: 1Gi
kubectl apply -f pvc.yaml -n local-csi

Check the PVC. Note that it binds to the entire 4Gi PV even though only 1Gi was requested: a local PV is consumed as a whole and cannot be subdivided.

kubectl get pvc -n local-csi
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
local-pvc Bound local-pv-xxx0b1 4Gi RWO local-sc 19h

4.3 Create a StatefulSet

cat statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: local-test
spec:
  serviceName: "local-service"
  replicas: 1
  selector:
    matchLabels:
      app: local-test
  template:
    metadata:
      labels:
        app: local-test
    spec:
      containers:
      - name: test-container
        image: nginx
        volumeDevices:           # raw block device, exposed at devicePath instead of being mounted
        - name: local-vol
          devicePath: /mnt/loop4
  volumeClaimTemplates:
  - metadata:
      name: local-vol
    spec:
      accessModes:
      - ReadWriteOnce
      volumeMode: Block
      storageClassName: "local-sc"
      resources:
        requests:
          storage: 4Gi
kubectl apply -f statefulset.yaml -n local-csi

Check that the pod is running:

kubectl get pod -n local-csi
NAME READY STATUS RESTARTS AGE
local-test-0 1/1 Running 0 19h
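
Because the claim uses volumeMode: Block, the volume appears inside the container as a raw block device at the devicePath rather than as a mounted filesystem. A quick check, assuming the pod name local-test-0 from the listing above:

kubectl exec -n local-csi local-test-0 -- ls -l /mnt/loop4   # the entry should start with 'b' (a block device node)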