
VM hard-locks after adding vSphere Docker volume, sometimes lists unwanted Docker volumes #2075

Open
kmott opened this issue Mar 13, 2018 · 9 comments


kmott commented Mar 13, 2018

Description:

I am trying a test run of the vDVS plugin and am running into a few problems. The main issue is that our cluster VMs running the Docker daemon seem to hard-lock after a little while. When I reboot them and re-add the Docker vSphere volumes, docker volume ls shows many additional volumes I never created (note that the only 4 I added via the docker volume create --driver=vsphere volname@datastore command are at the bottom, the minio volumes).

Environment Details:

  • Debian 8.8, 3.16.0-4-amd64 #1 SMP Debian 3.16.43-2 (2017-04-30) x86_64 GNU/Linux
  • Docker 17.12.0-ce
  • docker.io/vmware/vsphere-storage-for-docker:latest for vDVS
  • ESXi-6.0.0-20160302001
  • VMWare_bootbank_esx-vmdkops-service_0.21.c420818-0.0.1.vib
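
The same details can be collected with commands along these lines (the grep pattern is just illustrative):

uname -a                                   # kernel version
docker --version                           # Docker version
docker plugin ls                           # vDVS plugin and tag
esxcli software vib list | grep vmdkops    # on the ESXi host: installed VIB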

Steps to Reproduce:

  1. Install the VIB on the ESXi host(s) (steps 1-6 are sketched as commands after this list)
  2. Install the vsphere:latest Docker plugin on VM1 and VM2 (with open-vm-tools already installed)
  3. VM1: docker volume create --driver=vsphere minio1@CMP1-DevOps-Docker1
  4. VM1: docker volume create --driver=vsphere minio1@CMP1-DevOps-Docker2
  5. VM2: docker volume create --driver=vsphere minio2@CMP1-DevOps-Docker1
  6. VM2: docker volume create --driver=vsphere minio2@CMP1-DevOps-Docker2
  7. Start up a minio container in erasure mode using the 4 newly created volumes. Add some small content.
  8. After running for a while (usually overnight), VM1 and/or VM2 will hard-lock and/or become incredibly slow to respond. Once they hard-lock and are rebooted, the odd volumes show up in docker volume ls output.
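
As commands, steps 1-6 look roughly like this (the VIB path and the --alias flag follow the usual vDVS install instructions; treat exact filenames as placeholders):

# On each ESXi host: install the vDVS VIB
esxcli software vib install -v /tmp/VMWare_bootbank_esx-vmdkops-service_0.21.c420818-0.0.1.vib

# On VM1 and VM2: install the managed plugin under the vsphere alias
docker plugin install --grant-all-permissions --alias vsphere vmware/vsphere-storage-for-docker:latest

# VM1:
docker volume create --driver=vsphere minio1@CMP1-DevOps-Docker1
docker volume create --driver=vsphere minio1@CMP1-DevOps-Docker2

# VM2:
docker volume create --driver=vsphere minio2@CMP1-DevOps-Docker1
docker volume create --driver=vsphere minio2@CMP1-DevOps-Docker2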

Expected Result:

The VMs should not lock up, and docker volume ls should list only the volumes explicitly added via docker volume create.

Actual Result:

The VMs hard-lock, and docker volume ls lists unwanted volumes.

Triage:

I am working on getting the VIB log files from each of the hosts, as well as the VM log file vsphere-storage-for-docker.log. Here's a snippet from docker volume ls:

root@cluster1-docker1:~# docker volume ls 
DRIVER              VOLUME NAME 
vsphere:latest      014eb783bd8848dec377655837715a02b91af6e9a3696a33272e217bbf89af49@EQL1-DevOpsVol-01 
vsphere:latest      02730e557c8501248e54ee6c1958d245ddd267cb0022825f2096c2c91662782e@EQL1-DevOpsVol-01 
vsphere:latest      06209d83e9052f981161f8a3f62115f26f048c6cf59fcebbcfcc76f8231fea36@EQL1-DevOpsVol-01 
vsphere:latest      06582106ca06dfeba73607f60f570662ba21032631a8cac793c534d99ceb40db@EQL1-DevOpsVol-01 
vsphere:latest      073a893a1b6775001c4edc759023899e2eaf6308b862029a194cd5bab7b3b55e@EQL1-DevOpsVol-01 
vsphere:latest      09db4f450b10560ba1899d207f271663f3bcde6a71abcdd4ecfbdd2730f3c1ab@EQL1-DevOpsVol-01 
vsphere:latest      0aac251ff88d2b07320edb1c48e8f525a89736e782b1f03dc65fe5da9aac5390@EQL1-DevOpsVol-01 
vsphere:latest      0abc25479a2187ac26651aacd7cee1b3a51fb6bdfcd5166b22cd1f9c549d5e8d@EQL1-DevOpsVol-01 
vsphere:latest      0db48ecdb84fa4120f9877eaeb037d47aab203782be190d7dcecae8628da565d@EQL1-DevOpsVol-01 
vsphere:latest      0f0830a7e371b62646cd6bacb08f355ac151c3be256974e97fe60bd756556999@EQL1-DevOpsVol-01 
vsphere:latest      12388ba7e24651769f7fc7859a0a2658a0488fd633a312c8f156c8f4f390305f@EQL1-DevOpsVol-01 
vsphere:latest      17ff68c78d67a194b7c6720ff667015cd4fa55ea16dbae7728d0decf425cd509@EQL1-DevOpsVol-01 
vsphere:latest      1b1052b553d65560e8c59409e6c297a5f622d0d6bfb6af1175a221e2f22b8230@EQL1-DevOpsVol-01 
vsphere:latest      1c45723bdd2d71508c93828d923c98acd3d463a9699ac1c9673b194d28767e43@EQL1-DevOpsVol-01 
vsphere:latest      1d0509a4b5cbc8b910ca3b9e34fd451e1166b0cb54dae63c186bc958774f7d94@CMP1-WorkstationVol-01
vsphere:latest      1e02f4eb28dfa49cc324d8b7f63b4f05625759b9c62a801cbc7594a6c15f7b35@EQL1-DevOpsVol-01 
vsphere:latest      1e6441cc5bf10a72e085df974197d43920e0ac34da0faff98771ab2df56ff4ba@EQL1-DevOpsVol-01 
vsphere:latest      1ec4e4a0c7fe070b3d2c96e4d1e12712024a6ef72d89815261651c56527666c3@CMP1-ServerVol-03
[ ... snip ... ]
vsphere:latest      minio1@CMP1-DevOps-Docker1 
vsphere:latest      minio1@CMP1-DevOps-Docker2 
vsphere:latest      minio2@CMP1-DevOps-Docker1 
vsphere:latest      minio2@CMP1-DevOps-Docker2
[ ... snip ... ]
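
Since the unwanted entries all have 64-hex-character names, they can be isolated from the listing with something like (assuming GNU grep on the Docker host):

docker volume ls -q | grep -E '^[0-9a-f]{64}@'     # list the anonymous-looking volumes
docker volume ls -q | grep -cE '^[0-9a-f]{64}@'    # count them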

kmott commented Mar 13, 2018

Here are the logs from one of the VMs that eventually hard-locked:

vsphere-storage-for-docker.log.gz


kmott commented Mar 14, 2018

I have the VIB log files from each of our ESXi hosts as well, but they contain a lot of information unrelated to the problem at hand, so I'd prefer to send them via direct message rather than posting here. Is that possible?

shuklanirdesh82 assigned govint and unassigned tusharnt Mar 14, 2018

govint commented Mar 15, 2018

@kmott, are EQL1-DevOpsVol-01 and CMP1-WorkstationVol-01 datastores on the ESX host (ls /vmfs/volumes)? These extra volumes appear to be ones Docker is creating on its own. Could you check whether Minio creates these volumes via Docker?

Can you also post the output of cat /proc/mounts? If these volumes are all mounted and hence in use, that may explain why the VM slows down over time (assuming that many volumes are in use at once).


kmott commented Mar 15, 2018

@govint, the datastores are legitimate and exist on each of our 4 ESXi hosts, but the long-hash volumes that Docker is creating on them are not ones I created.

Minio doesn't create the volumes (as far as I know); it just uses whatever path(s) you specify when starting the container (and if those paths point to Docker volumes that are not yet initialized, minio initializes them for you).
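
For reference, the containers are started along these lines (a hypothetical invocation, not the actual Nomad job spec; hostnames, ports, and credentials are placeholders):

# On VM1 (VM2 is the same, with the minio2 volumes):
docker run -d --name minio \
  -p 9000:9000 \
  -e MINIO_ACCESS_KEY=placeholder \
  -e MINIO_SECRET_KEY=placeholder \
  -v minio1@CMP1-DevOps-Docker1:/data1 \
  -v minio1@CMP1-DevOps-Docker2:/data2 \
  minio/minio server http://vm1/data1 http://vm1/data2 http://vm2/data1 http://vm2/data2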

Here's the cat /proc/mounts output:

root@cluster1-docker1:~# cat /proc/mounts
rootfs / rootfs rw 0 0
sysfs /sys sysfs rw,nosuid,nodev,noexec,relatime 0 0
proc /proc proc rw,nosuid,nodev,noexec,relatime 0 0
udev /dev devtmpfs rw,relatime,size=10240k,nr_inodes=4125124,mode=755 0 0
devpts /dev/pts devpts rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000 0 0
tmpfs /run tmpfs rw,nosuid,relatime,size=6603564k,mode=755 0 0
/dev/sda1 / ext4 rw,relatime,errors=remount-ro,data=ordered 0 0
securityfs /sys/kernel/security securityfs rw,nosuid,nodev,noexec,relatime 0 0
tmpfs /dev/shm tmpfs rw,nosuid,nodev 0 0
tmpfs /run/lock tmpfs rw,nosuid,nodev,noexec,relatime,size=5120k 0 0
tmpfs /sys/fs/cgroup tmpfs ro,nosuid,nodev,noexec,mode=755 0 0
cgroup /sys/fs/cgroup/systemd cgroup rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/lib/systemd/systemd-cgroups-agent,name=systemd 0 0
pstore /sys/fs/pstore pstore rw,nosuid,nodev,noexec,relatime 0 0
cgroup /sys/fs/cgroup/cpuset cgroup rw,nosuid,nodev,noexec,relatime,cpuset 0 0
cgroup /sys/fs/cgroup/cpu,cpuacct cgroup rw,nosuid,nodev,noexec,relatime,cpu,cpuacct 0 0
cgroup /sys/fs/cgroup/devices cgroup rw,nosuid,nodev,noexec,relatime,devices 0 0
cgroup /sys/fs/cgroup/freezer cgroup rw,nosuid,nodev,noexec,relatime,freezer 0 0
cgroup /sys/fs/cgroup/net_cls,net_prio cgroup rw,nosuid,nodev,noexec,relatime,net_cls,net_prio 0 0
cgroup /sys/fs/cgroup/blkio cgroup rw,nosuid,nodev,noexec,relatime,blkio 0 0
cgroup /sys/fs/cgroup/perf_event cgroup rw,nosuid,nodev,noexec,relatime,perf_event 0 0
systemd-1 /proc/sys/fs/binfmt_misc autofs rw,relatime,fd=23,pgrp=1,timeout=300,minproto=5,maxproto=5,direct 0 0
mqueue /dev/mqueue mqueue rw,relatime 0 0
debugfs /sys/kernel/debug debugfs rw,relatime 0 0
hugetlbfs /dev/hugepages hugetlbfs rw,relatime 0 0
/dev/sdc1 /data/consul ext4 rw,relatime,data=ordered 0 0
/dev/sdd1 /var/lib/docker/volumes/default ext4 rw,relatime,data=ordered 0 0
/dev/sdb1 /data/nomad ext4 rw,relatime,data=ordered 0 0
rpc_pipefs /run/rpc_pipefs rpc_pipefs rw,relatime 0 0
binfmt_misc /proc/sys/fs/binfmt_misc binfmt_misc rw,relatime 0 0
/dev/sda1 /var/lib/docker/plugins ext4 rw,relatime,errors=remount-ro,data=ordered 0 0
/dev/sda1 /var/lib/docker/aufs ext4 rw,relatime,errors=remount-ro,data=ordered 0 0
tmpfs /data/nomad/alloc/4a421d6e-d5aa-cc43-9518-88157e9dadbd/traefik-loadbalancer/secrets tmpfs rw,noexec,relatime,size=1024k 0 0
proc /run/docker/netns/default proc rw,nosuid,nodev,noexec,relatime 0 0
none /var/lib/docker/aufs/mnt/528aebed5fbed47d4b86667f30c55bfeb6556964071874706f5450bb835f3796 aufs rw,relatime,si=70c8e26d17d1fc77,dio,dirperm1 0 0
/dev/sda1 /var/lib/docker/containers/70db665016a27a1e6bae159946270f55dff629ade79f2e4c312d1bebb5c908e6/mounts ext4 rw,relatime,errors=remount-ro,data=ordered 0 0
shm /var/lib/docker/containers/70db665016a27a1e6bae159946270f55dff629ade79f2e4c312d1bebb5c908e6/mounts/shm tmpfs rw,nosuid,nodev,noexec,relatime,size=65536k 0 0
/dev/sda1 /var/lib/docker/plugins/d78002c4bba0460f35eaa97c2b6df4c0cb4f9d42c90b3b2eef781e2d28cc7bf7/propagated-mount ext4 rw,relatime,errors=remount-ro,data=ordered 0 0
/dev/sda1 /var/lib/docker/plugins/d78002c4bba0460f35eaa97c2b6df4c0cb4f9d42c90b3b2eef781e2d28cc7bf7/rootfs/mnt/vmdk ext4 rw,relatime,errors=remount-ro,data=ordered 0 0
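
Judging by the propagated-mount entry at the bottom of this listing, any vDVS volume that is actually attached and mounted should appear under the plugin's mnt/vmdk path, so a quick way to answer the in-use question is:

grep 'mnt/vmdk' /proc/mounts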

Interestingly, if I do not restart the Minio job in Nomad (i.e., just leave it dead and reboot both Docker nodes), I don't see any additional stability issues. Note that docker volume ls still shows the defunct volumes in this case, but none of the minio volumes are mounted or in use by a container.


govint commented Apr 2, 2018

From the vsphere plugin logs, these volumes with long names are actually anonymous volumes that Docker creates when it's asked to create a volume on the fly. One example is below; we will need to check the Docker logs to see why these volumes are being created.

2018-03-09 15:36:08.835927686 -0800 PST [ERROR] Failed to get volume meta-data error="Volume 634bc83d5c9346d11d58b9d9c493a5402f1b18d7b1fe1cc322c0383b8e801fa7 not found (file: /vmfs/volumes/EQL1-DevOpsVol01/dockvols/_DEFAULT/634bc83d5c9346d11d58b9d9c493a5402f1b18d7b1fe1cc322c0383b8e801fa7.vmdk)" name=634bc83d5c9346d11d58b9d9c493a5402f1b18d7b1fe1cc322c0383b8e801fa7
2018-03-09 15:36:10.095907092 -0800 PST [INFO] Attaching volume and creating filesystem name=940b0c940e4fbbed53898247d69b6274fd6782aead68d12338fd2ec1402dd9a8 fstype=ext4

2018-03-09 15:36:11.116030901 -0800 PST [INFO] Device file found. device="/dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:5:0"
2018-03-09 15:36:11.951960645 -0800 PST [INFO] Attaching volume and creating filesystem name=634bc83d5c9346d11d58b9d9c493a5402f1b18d7b1fe1cc322c0383b8e801fa7 fstype=ext4

2018-03-09 15:36:15.32395768 -0800 PST [INFO] Volume and filesystem created fstype=ext4 name=634bc83d5c9346d11d58b9d9c493a5402f1b18d7b1fe1cc322c0383b8e801fa7
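
For context, this is the kind of request that produces such names: when a container is run with a volume driver but without a named volume (for example, an image whose Dockerfile declares a VOLUME), Docker generates a 64-hex-character name and asks the driver to create it. A minimal sketch (some-image is a placeholder for any image declaring VOLUME /data):

# some-image's Dockerfile contains: VOLUME /data
docker run --volume-driver=vsphere some-image
# Docker creates an anonymous volume through the vsphere plugin, which then
# shows up as e.g. 634bc83d...@EQL1-DevOpsVol-01 in docker volume ls.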

This doesn't seem to be an issue with the plugin as far as these volumes go. For the lock-up, please post the dmesg Linux kernel logs when it happens. You could also try increasing the VM's memory in case that's the issue for the workload Minio generates.


kmott commented Apr 9, 2018

@govint, the VMs have 32GB RAM each. I will get the dmesg and Docker output from the lock-up; I restarted all of the services this morning, so I should be able to post something tomorrow.


kmott commented Apr 10, 2018

Note that I am not able to get the dmesg output: the system is hard-locked, and nothing is dumped to kern.log. The console is also unresponsive during this time; I have to hard-reset the VM to get anything back.

Attached are some Docker logs from around the time of the lock-up (the system locked up just after Apr 9 @ 20:42:19 PST, local time on the box).

dockerd.log.gz
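
Since nothing survives into kern.log, one way to catch kernel messages from a VM that hard-locks is to stream them to another host with netconsole before the hang (the addresses, ports, interface, and MAC below are placeholders):

# On a second machine, listen for the UDP log stream (netcat flags vary by variant):
nc -u -l -p 6666

# On the affected VM:
modprobe netconsole netconsole=6665@10.0.0.5/eth0,6666@10.0.0.6/00:11:22:33:44:55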


govint commented Apr 10, 2018

@kmott thanks, I'll check these and get back to you.


kmott commented May 4, 2018

@govint Did you have a chance to check things out?
