Having a hard time rolling out ceph-csi, looking for help #4396

## Intro

Hey there, I am having a hard time rolling out ceph-csi on my cluster and am looking for some help to unblock.

Infra:
## ceph-csi installation

I am using CephFS rather than RBD, and this is what my StorageClass looks like:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-cephfs
  namespace: ceph-csi-cephfs
provisioner: cephfs.csi.ceph.com
parameters:
  clusterID: cb917f3e-bd73-11ee-881d-6371034f44bd
  fsName: kubernetes-fs
  pool: pool-kubernetes
  csi.storage.k8s.io/provisioner-secret-name: csi-cephfs-secret
  csi.storage.k8s.io/provisioner-secret-namespace: ceph-csi-cephfs
  csi.storage.k8s.io/controller-expand-secret-name: csi-cephfs-secret
  csi.storage.k8s.io/controller-expand-secret-namespace: ceph-csi-cephfs
  csi.storage.k8s.io/node-stage-secret-name: csi-cephfs-secret
  csi.storage.k8s.io/node-stage-secret-namespace: ceph-csi-cephfs
reclaimPolicy: Delete
allowVolumeExpansion: true
```
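(An aside from me, not part of the original post: StorageClass objects are cluster-scoped, so the `metadata.namespace` field above is ignored by Kubernetes. A quick way to confirm the class registered against the expected provisioner:)

```shell
# PROVISIONER column should show cephfs.csi.ceph.com
kubectl get storageclass csi-cephfs
```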
This is what my ceph-csi-config ConfigMap looks like:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ceph-csi-config
  namespace: ceph-csi-cephfs
  labels:
    app.kubernetes.io/managed-by: Helm
  annotations:
    meta.helm.sh/release-name: ceph-csi-cephfs
    meta.helm.sh/release-namespace: ceph-csi-cephfs
data:
  config.json: |-
    [
      {
        "clusterID": "cb917f3e-bd73-11ee-881d-6371034f44bd",
        "monitors": [
          "192.168.50.1:6789",
          "192.168.50.2:6789",
          "192.168.50.3:6789"
        ]
      }
    ]
```
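(Not from the original post, but a useful sanity check here: the `clusterID` in `config.json` must match the Ceph cluster's FSID, and the monitor list must match the mon map. On a Ceph node, something like:)

```shell
# The FSID printed here should equal the clusterID used in the
# StorageClass and the ConfigMap above
ceph fsid

# The monitor addresses listed here should match "monitors" in config.json
ceph mon dump
```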
## Things to note

This is the Ceph auth entry for the client I am using:

```
[client.kubernetes]
	key = redacted
	caps mds = "allow *"
	caps mgr = "allow *"
	caps mon = "allow *"
	caps osd = "allow *"
```
## Error

I am trying to test ceph-csi by creating the PVC below; however, the PVC stays in a perpetual Pending state.

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-pvc
  namespace: ceph-csi-cephfs
spec:
  storageClassName: csi-cephfs
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
```

I also noticed that a
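(A note from me rather than the original post: for a PVC stuck in Pending, the usual first steps are to describe the PVC and check the provisioner sidecar logs. A sketch, assuming the Helm release name `ceph-csi-cephfs` used above; the deployment and container names may differ in your install:)

```shell
# Events on the PVC usually name the failing CSI call
kubectl describe pvc test-pvc -n ceph-csi-cephfs

# The provisioner sidecar logs show why CreateVolume is failing
kubectl logs -n ceph-csi-cephfs deploy/ceph-csi-cephfs-provisioner \
  -c csi-provisioner --tail=50
```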
Replies: 1 comment
I found what the problem was: in my secrets config I was using `client.kubernetes` as the username (since that is what I see in the Ceph dashboard), and the `client.` prefix was not necessary.
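For anyone hitting the same thing, here is a minimal sketch of the corrected secret, following the key names used in the ceph-csi CephFS examples (the key values are placeholders):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: csi-cephfs-secret
  namespace: ceph-csi-cephfs
stringData:
  # Plain user name, WITHOUT the "client." prefix
  adminID: kubernetes   # used for provisioning/expand operations
  adminKey: <key from `ceph auth get-key client.kubernetes`>
  userID: kubernetes    # used for statically provisioned volumes
  userKey: <key from `ceph auth get-key client.kubernetes`>
```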