Pod stuck in Terminating status and won't unbind pvc #204
Comments
I assume that by "disable one node" you mean you forced the node to shut down, or something along those lines? In that case, you might want to look into https://github.com/piraeusdatastore/piraeus-ha-controller. May I also recommend using the operator instead of deploying and managing LINSTOR manually? It comes with the ha-controller deployed out of the box.
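A minimal install sketch for the ha-controller via Helm; the repository URL and chart name below are assumptions based on the project's published charts, so verify them against the piraeus-ha-controller README before running:

```
# Assumed Helm repo URL and chart name -- verify against the piraeus-ha-controller README
helm repo add piraeus-charts https://piraeus.io/helm-charts/
helm repo update
helm install piraeus-ha-controller piraeus-charts/piraeus-ha-controller
```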
Yes, that's right. Thanks for the advice, I'll give it a try!
Hello! In a normal deployment, without replication, everything works. But when I want to use replication, after deploying the example deployment the volumes are not used and stay in the "Unused" status. Also, when I delete this example, the mounted volumes should be deleted automatically, but this does not happen; instead I run into an error: 1000: State change failed: (-2) Need access to UpToDate data.
Before deploying LINSTOR through the operator, is any preparation needed on the worker nodes?
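On the worker-node preparation question: one requirement worth checking (general LINSTOR/DRBD background, not something stated in this thread) is that the DRBD 9 kernel module can be loaded on every node that should hold replicas; the operator can also inject it for you, but a quick manual check looks roughly like this:

```
# Check whether the DRBD kernel module is present and loadable on a worker node
lsmod | grep drbd      # lists drbd modules if already loaded
sudo modprobe drbd     # attempt to load it; fails if no module is built for this kernel
cat /proc/drbd         # prints the loaded DRBD version (should be 9.x for replication)
```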
Additionally: after manually deleting this directory from /var/lib/kubelet/pods/, the error disappeared and the PVC went to the "InUse" status.
This was a one-time issue and it hasn't happened again, but the problem with the "Unused" status remains.
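For anyone hitting the same thing: when a resource sits in "Unused" or DRBD complains about "Need access to UpToDate data", a reasonable first step is to look at the state from the LINSTOR and DRBD side. A rough sketch of the usual commands:

```
# Ask LINSTOR which nodes hold the resource and in what state (run via the linstor client)
linstor resource list
linstor volume list

# On a node that holds a replica, inspect DRBD's view directly (connection, disk state, role)
drbdadm status
```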
Hi all!
I deployed LINSTOR in HA mode on k3s (3 master (controller) nodes), i.e. the controller is installed on the master nodes.
What it looks like (k3s, linstor):
```
Warning  FailedAttachVolume  118s  attachdetach-controller  Multi-Attach error for volume "pvc-3af7a2db-aff2-4983-b69d-2e191695c328"  Volume is already used by pod(s) demo-pod-0-5b87665bc8-7xnpz
```
Is it possible to solve this somehow?
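As a rough manual workaround sketch (this is essentially what the ha-controller mentioned above automates): check which node still holds the VolumeAttachment and, if that node is really gone, force-delete the stuck pod so the volume can be detached. The pod and volume names below are copied from the event above.

```
# Find the VolumeAttachment that still pins the volume to the unreachable node
kubectl get volumeattachments | grep pvc-3af7a2db-aff2-4983-b69d-2e191695c328

# Confirm where the stuck pod was scheduled
kubectl get pod demo-pod-0-5b87665bc8-7xnpz -o wide

# If the old node is gone for good, force-remove the pod so attach/detach can proceed.
# Use with care: only do this once you are sure the old node no longer writes to the volume.
kubectl delete pod demo-pod-0-5b87665bc8-7xnpz --grace-period=0 --force
```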