Replies: 1 comment
It's better to patch Mastodon/Fediverse software to use hardlinks for duplicated files. At the very least, they should use copy_file_range or S3 COPY to duplicate any file, so that these files share the same blocks in the underlying storage.
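A minimal sketch of the copy_file_range idea on Linux (the function name dedup_copy is illustrative, not part of Mastodon or JuiceFS; it falls back to a plain copy where the syscall is unavailable):

```python
import os
import shutil

def dedup_copy(src_path, dst_path):
    """Copy src to dst via copy_file_range, which lets the kernel/filesystem
    perform a server-side copy and, on supporting filesystems, share the
    underlying blocks instead of duplicating data. Illustrative sketch."""
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        remaining = os.fstat(src.fileno()).st_size
        try:
            while remaining > 0:
                # copy_file_range advances both file offsets itself
                copied = os.copy_file_range(src.fileno(), dst.fileno(), remaining)
                if copied == 0:
                    break
                remaining -= copied
        except (OSError, AttributeError):
            # Fall back to an ordinary userspace copy if the syscall
            # is unsupported (non-Linux, old kernel, or cross-filesystem)
            src.seek(0)
            dst.seek(0)
            dst.truncate()
            shutil.copyfileobj(src, dst)
```

With an object store backend, the analogous operation is S3's server-side CopyObject, which avoids re-uploading the bytes at all.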
Hello.
I run a Mastodon instance, and I would find great value in hosting my Mastodon on JuiceFS... but my use case is for multiple Mastodon instances to use the same JuiceFS S3 backend.
These files average about 1 KB to 15 MB... and if we have 5-100 instances on a single JuiceFS volume, that's 5-100 copies of the same file, so dedupe would be majorly important since they're all sharing the same files.
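The cross-instance dedupe being asked for can be sketched with content-addressed hardlinking: hash each file and replace byte-identical copies with hardlinks to one canonical file (the helper dedupe_hardlink is hypothetical, not JuiceFS or Mastodon code):

```python
import hashlib
import os

def dedupe_hardlink(paths):
    """Replace byte-identical files with hardlinks to a single canonical
    copy, keyed by the SHA-256 of their content. Illustrative sketch:
    a real tool would stream large files and handle races/permissions."""
    seen = {}  # content digest -> path of the canonical copy
    for path in paths:
        with open(path, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        if digest in seen:
            # Duplicate content: drop this copy and link to the canonical one
            os.remove(path)
            os.link(seen[digest], path)
        else:
            seen[digest] = path
```

After running this over N instances' media directories, identical attachments occupy the storage of a single copy, which is exactly the saving described above.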
Also, I'd love to see JuiceFS work as a kind of CDN: if we had multiple backend S3 buckets, we could replicate to multiple locations.
Anyway, just an interesting use case.