goroutine leak in getter when bufferPool goroutine exits early #98

Open
scosgrave opened this issue May 13, 2016 · 1 comment

@scosgrave

We use your s3gof3r library where I work for downloading files from S3, and we recently noticed what looked like a goroutine leak. Our monitoring showed an increase in memory usage that correlated exactly with a growing goroutine count. We added a goroutine that periodically prints the stack traces of all goroutines, and we saw many traces like this:

goroutine 19603 [chan receive, 205 minutes]:
github.com/XXX/XXX/vendor/github.com/rlmcpherson/s3gof3r.(*getter).retryGetChunk(0xc8269ab760, 0xc820c79080)
        /var/jenkins_home/workspace/XXX Server/go/src/github.com/XXX/XXX/vendor/github.com/rlmcpherson/s3gof3r/getter.go:157 +0x8f
github.com/XXX/XXX/vendor/github.com/rlmcpherson/s3gof3r.(*getter).worker(0xc8269ab760)
        /var/jenkins_home/workspace/XXX Server/go/src/github.com/XXX/XXX/vendor/github.com/rlmcpherson/s3gof3r/getter.go:151 +0x75
created by github.com/XXX/XXX/vendor/github.com/rlmcpherson/s3gof3r.newGetter
        /var/jenkins_home/workspace/XXX Server/go/src/github.com/XXX/XXX/vendor/github.com/rlmcpherson/s3gof3r/getter.go:95 +0x97a
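
For reference, the periodic stack-dump goroutine we added was essentially the following (a minimal sketch; the helper name is ours, and it relies only on runtime.Stack from the standard library):

package main

import (
	"log"
	"runtime"
	"time"
)

// dumpGoroutines logs the stack traces of all goroutines at a fixed
// interval; leaked goroutines show up as entries blocked for a long
// time, like the "chan receive, 205 minutes" trace above.
func dumpGoroutines(interval time.Duration) {
	buf := make([]byte, 1<<20) // 1 MiB; runtime.Stack truncates if the buffer is too small
	for range time.Tick(interval) {
		n := runtime.Stack(buf, true) // true = capture all goroutines, not just this one
		log.Printf("=== goroutine dump ===\n%s", buf[:n])
	}
}

func main() {
	go dumpGoroutines(time.Minute)
	// ... rest of the application ...
	select {}
}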

I'm not 100% sure how the code gets into this state. My guess is that getter.Read() is returning and I'm calling getter.Close() before initChunks() is done. getter.Close() closes the g.sp.quit channel, which causes the goroutine in bufferPool() to exit before initChunks() has finished, so workers blocked waiting for a buffer from the pool are never served. I think that if you closed the bp.get channel at the end of the goroutine in bufferPool(), and checked for a closed channel in retryGetChunk(), the worker goroutines could exit instead of waiting forever in retryGetChunk().
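
To make that concrete, here is a self-contained sketch of the shutdown pattern I'm proposing (bufferPool and worker below are simplified stand-ins for s3gof3r's bufferPool() and getter.retryGetChunk(), not the library's actual code):

package main

import (
	"fmt"
	"time"
)

// bufferPool stands in for s3gof3r's pool: a goroutine hands out
// buffers on get until quit is closed.
type bufferPool struct {
	get  chan []byte
	quit chan struct{}
}

func newBufferPool() *bufferPool {
	bp := &bufferPool{get: make(chan []byte), quit: make(chan struct{})}
	go func() {
		// Proposed fix: close get on exit so any goroutine blocked
		// receiving a buffer is unblocked with ok == false.
		defer close(bp.get)
		for {
			select {
			case bp.get <- make([]byte, 1024):
			case <-bp.quit:
				return
			}
		}
	}()
	return bp
}

// worker stands in for getter.retryGetChunk: the comma-ok receive lets
// it detect the closed pool and return instead of blocking forever.
func worker(bp *bufferPool, done chan<- struct{}) {
	defer close(done)
	for {
		b, ok := <-bp.get
		if !ok {
			return // pool shut down; exit rather than leak
		}
		_ = b // ... fetch a chunk into b ...
	}
}

func main() {
	bp := newBufferPool()
	done := make(chan struct{})
	go worker(bp, done)
	time.Sleep(10 * time.Millisecond)
	close(bp.quit) // what getter.Close() effectively does via g.sp.quit
	<-done         // worker exits cleanly instead of leaking
	fmt.Println("worker exited")
}

The key point is that a receive from a closed channel returns immediately with the zero value and ok == false, so retryGetChunk() could distinguish "pool shut down" from a normal buffer handoff and return instead of blocking.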

@jmheidly

We are facing the same issue with the bufferPool: our server's memory grows every time PutWriter is used and is never reclaimed.
