Hello, I encountered a problem using geesefs csi-s3 when a volume is mounted by CronJobs that start frequently on a periodic schedule.
After a short time, messages like the following begin to flood the logs:
/var/log/messages:
May 27 15:11:15 node-worker-hsilpy9f kubelet[1319]: E0527 15:11:15.934992 1319 reconciler_common.go:166] "operationExecutor.UnmountVolume failed (controllerAttachDetachEnabled true) for volume \"production\" (UniqueName: \"kubernetes.io/csi/ru.yandex.s3.csi^production\") pod \"6823e06c-cc27-4785-a5fe-a07de3f28930\" (UID: \"6823e06c-cc27-4785-a5fe-a07de3f28930\") : UnmountVolume.NewUnmounter failed for volume \"production\" (UniqueName: \"kubernetes.io/csi/ru.yandex.s3.csi^production\") pod \"6823e06c-cc27-4785-a5fe-a07de3f28930\" (UID: \"6823e06c-cc27-4785-a5fe-a07de3f28930\") : kubernetes.io/csi: unmounter failed to load volume data file [/var/lib/kubelet/pods/6823e06c-cc27-4785-a5fe-a07de3f28930/volumes/kubernetes.io~csi/production/mount]: kubernetes.io/csi: failed to open volume data file [/var/lib/kubelet/pods/6823e06c-cc27-4785-a5fe-a07de3f28930/volumes/kubernetes.io~csi/production/vol_data.json]: open /var/lib/kubelet/pods/6823e06c-cc27-4785-a5fe-a07de3f28930/volumes/kubernetes.io~csi/production/vol_data.json: no such file or directory" err="UnmountVolume.NewUnmounter failed for volume \"production\" (UniqueName: \"kubernetes.io/csi/ru.yandex.s3.csi^production\") pod \"6823e06c-cc27-4785-a5fe-a07de3f28930\" (UID: \"6823e06c-cc27-4785-a5fe-a07de3f28930\") : kubernetes.io/csi: unmounter failed to load volume data file [/var/lib/kubelet/pods/6823e06c-cc27-4785-a5fe-a07de3f28930/volumes/kubernetes.io~csi/production/mount]: kubernetes.io/csi: failed to open volume data file [/var/lib/kubelet/pods/6823e06c-cc27-4785-a5fe-a07de3f28930/volumes/kubernetes.io~csi/production/vol_data.json]: open /var/lib/kubelet/pods/6823e06c-cc27-4785-a5fe-a07de3f28930/volumes/kubernetes.io~csi/production/vol_data.json: no such file or directory"
DaemonSet csi-s3 logs:
E0522 12:12:23.705357 1 utils.go:101] GRPC error: rpc error: code = Internal desc = Unmount failed: exit status 1 Unmounting arguments: /var/lib/kubelet/pods/6823e06c-cc27-4785-a5fe-a07de3f28930/volumes/kubernetes.io~csi/production/mount Output: umount: can't unmount /var/lib/kubelet/pods/6823e06c-cc27-4785-a5fe-a07de3f28930/volumes/kubernetes.io~csi/production/mount: No such file or directory
This may be related to another issue, where it is reported that the problem may lie in the CSI driver's NodeUnpublish implementation.
In my opinion, this issue is almost entirely caused by problematic CSI driver implementations.
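For context, the CSI spec requires NodeUnpublishVolume to be idempotent: if the target path is already gone or no longer mounted, the driver should return success instead of an Internal error, otherwise kubelet keeps retrying the unmount indefinitely. Below is a minimal sketch of what such handling could look like; the nodeServer type and package layout are assumptions for illustration, not the actual csi-s3 code.

```go
package driver

import (
	"context"
	"os"

	"github.com/container-storage-interface/spec/lib/go/csi"
	"golang.org/x/sys/unix"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// nodeServer is a placeholder for the driver's node service implementation.
type nodeServer struct{}

// NodeUnpublishVolume treats "target path missing" and "not mounted" as
// success, as required by the CSI spec's idempotency rules.
func (ns *nodeServer) NodeUnpublishVolume(ctx context.Context, req *csi.NodeUnpublishVolumeRequest) (*csi.NodeUnpublishVolumeResponse, error) {
	target := req.GetTargetPath()
	if target == "" {
		return nil, status.Error(codes.InvalidArgument, "target path missing in request")
	}

	// If kubelet has already removed the pod volume directory, there is
	// nothing left to unmount.
	if _, err := os.Stat(target); os.IsNotExist(err) {
		return &csi.NodeUnpublishVolumeResponse{}, nil
	}

	// Unmount the path; EINVAL (not a mount point) and ENOENT (path gone)
	// are also treated as success rather than surfaced as Internal errors.
	if err := unix.Unmount(target, 0); err != nil && err != unix.EINVAL && err != unix.ENOENT {
		return nil, status.Errorf(codes.Internal, "failed to unmount %s: %v", target, err)
	}

	return &csi.NodeUnpublishVolumeResponse{}, nil
}
```

With handling like this, a repeated NodeUnpublishVolume call for an already-cleaned-up CronJob pod would return OK and the kubelet reconciler would stop logging the errors shown above.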
Kubernetes version: 1.26.8
csi-s3 version: 0.40.1