Default value for AWS_S3_MAX_MEMORY_SIZE is dangerous for large files #936

Closed

swiatekm-side opened this issue Sep 14, 2020 · 1 comment

@swiatekm-side

AWS_S3_MAX_MEMORY_SIZE sets the maximum size for the SpooledTemporaryFile that S3Boto3StorageFile uses for temporary storage. The default of 0 means "never roll over; keep the contents only in memory", which causes issues with large files. In particular, even if we only want to read the first X bytes of the file, the current implementation (`self.obj.download_fileobj(self._file)`) will still download the whole object and keep it in memory.
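To make the failure mode concrete, here is a minimal sketch (not from the codebase) of how SpooledTemporaryFile's rollover threshold behaves; the sizes and the private `_rolled` attribute are used purely for illustration:

```python
import tempfile

# max_size=0 (the AWS_S3_MAX_MEMORY_SIZE default): the buffer never
# rolls over to disk, so the entire file is held in memory.
in_memory = tempfile.SpooledTemporaryFile(max_size=0)
in_memory.write(b"x" * (2 * 1024 * 1024))  # 2 MiB, all of it in RAM
print(in_memory._rolled)  # False -- still backed by an in-memory buffer

# With a positive max_size, contents spill to an on-disk temporary
# file as soon as the threshold is exceeded.
spooled = tempfile.SpooledTemporaryFile(max_size=1024 * 1024)  # 1 MiB threshold
spooled.write(b"x" * (2 * 1024 * 1024))
print(spooled._rolled)  # True -- rolled over to a real temp file
```

With a 0 threshold, downloading a multi-gigabyte object via download_fileobj means a multi-gigabyte in-memory buffer, even if the caller only ever reads a handful of bytes.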

I think this is an unpleasantly surprising default: users don't expect their files to be kept wholly in memory whenever their contents are accessed, irrespective of their size. The current default behaviour of "always keep the buffer in memory" should be opt-in rather than opt-out.
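In the meantime, a workaround sketch for anyone affected: set the threshold explicitly in Django settings so buffers spill to disk past some size (the 10 MiB value below is arbitrary):

```python
# settings.py
# Roll S3 file buffers over to an on-disk temporary file once they
# exceed ~10 MiB instead of keeping them in memory indefinitely.
AWS_S3_MAX_MEMORY_SIZE = 10 * 1024 * 1024  # bytes; any positive value works
```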

Let me know if I'm missing something here. Given that I've used storages for a while and have never noticed this, I assume the typical use cases don't care; I also don't think changing the default would negatively impact them.

@jschneier (Owner)

Closed in #1359
