The basic idea is that you can drop a list of files or a directory. It scans recursively for every file, computes a SHA for each one, and sends each file through a pipeline (see drop-processor). If the code decides the file “matches”, it can upload it, but the upload is designed to use AWS S3 presigned URLs (to avoid file size limits). So there is a server-side resolver that you can ask (via an ident) whether a specific SHA already exists. It responds with an exists flag, along with a presigned URL if it doesn’t, so the upload can short-circuit when that SHA is already in the store.
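As a rough illustration of that short-circuit on the client side (the names `maybe-upload!`, `resolve-sha`, and `upload!` are hypothetical, not taken from the gist):

```clojure
;; Hypothetical sketch of the check-then-upload step. `resolve-sha` asks the
;; server about a SHA and returns {:exists? bool :upload-url string-or-nil};
;; `upload!` PUTs the file bytes to a presigned URL.
(defn maybe-upload!
  [resolve-sha upload! file sha]
  (let [{:keys [exists? upload-url]} (resolve-sha sha)]
    (if exists?
      ;; SHA already in the content-addressed store: skip the upload entirely.
      {:uploaded? false :sha sha}
      (do
        (upload! upload-url file)
        {:uploaded? true :sha sha}))))
```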
Here are the basics for file processing:
https://gist.github.com/awkay/3cf6d550986d4e25feefdebb8ff00671
The server-side resolver looks basically like this:
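A minimal Pathom 2 sketch of that kind of resolver (the `:file/*` keys and the `sha-in-store?` / `presigned-put-url` helpers are assumptions for illustration, not the actual implementation):

```clojure
(ns example.sha-resolver
  (:require [com.wsscode.pathom.connect :as pc]))

;; Hypothetical helpers: `sha-in-store?` would check the content-addressed
;; store (e.g. an S3 HEAD or a database lookup), and `presigned-put-url`
;; would ask the AWS SDK for a presigned PUT URL for that SHA.
(declare sha-in-store? presigned-put-url)

(pc/defresolver sha-status [env {:file/keys [sha]}]
  {::pc/input  #{:file/sha}
   ::pc/output [:file/exists? :file/upload-url]}
  (if (sha-in-store? sha)
    ;; Already stored: the client can short-circuit the upload.
    {:file/exists? true}
    ;; Unknown SHA: hand back a presigned URL so the client can PUT directly to S3.
    {:file/exists?    false
     :file/upload-url (presigned-put-url sha)}))
```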