Serve cached compressed content #13
```go
func NewIMC(size int) (*IMC, error)
func (cache *IMC) Get(id string) (Value, error)
func (cache *IMC) Add(id string, value Value) error
```

The value is the gzipped content and the key is the static file path. It would also use cache busting, appending the SHA256 hash of the file to the path. For the cache algorithm, it would be a simple least recently used (LRU) cache.
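A minimal sketch of that key construction, assuming the key is the file path plus the hex-encoded SHA256 of the file's content (the helper name `hashedPath` and the `?v=` suffix format are illustrative, not part of the proposal):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// hashedPath builds a cache-busting key from the static file path plus
// the SHA256 hash of the file's content, so a changed file gets a new key.
// The "?v=" suffix format is an illustrative assumption.
func hashedPath(path string, content []byte) string {
	sum := sha256.Sum256(content)
	return fmt.Sprintf("%s?v=%s", path, hex.EncodeToString(sum[:8]))
}

func main() {
	fmt.Println(hashedPath("/static/app.js", []byte("console.log('hi')")))
}
```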
The scope of this cache is per-server, because the fixed size of the cache should depend on the machine (if there are multiple apps and the cache is per-app, the effective total size is multiplied). If implementing from scratch, the cache would consist of a list of keys (ordered by access time) and a key-value map; see the sketch below. There are also libraries such as https://github.com/hashicorp/golang-lru (simple) and https://github.com/dgraph-io/ristretto (more complex, but seemingly better performance).
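A minimal sketch of that from-scratch design against the API above (the `[]byte` value type and the entry-count size bound are simplifying assumptions; a real cache would likely bound total bytes):

```go
package cache

import (
	"container/list"
	"errors"
	"sync"
)

// IMC sketches the from-scratch design: a doubly linked list ordered by
// access time plus a key -> list element map. Size is bounded by entry
// count here for brevity.
type IMC struct {
	mu    sync.Mutex
	max   int
	order *list.List               // front = most recently used
	items map[string]*list.Element // key -> element in order
}

type entry struct {
	key   string
	value []byte // gzipped content
}

func NewIMC(size int) (*IMC, error) {
	if size <= 0 {
		return nil, errors.New("cache size must be positive")
	}
	return &IMC{
		max:   size,
		order: list.New(),
		items: make(map[string]*list.Element),
	}, nil
}

func (cache *IMC) Get(id string) ([]byte, error) {
	cache.mu.Lock()
	defer cache.mu.Unlock()
	el, ok := cache.items[id]
	if !ok {
		return nil, errors.New("cache miss")
	}
	cache.order.MoveToFront(el) // mark as most recently used
	return el.Value.(*entry).value, nil
}

func (cache *IMC) Add(id string, value []byte) error {
	cache.mu.Lock()
	defer cache.mu.Unlock()
	if el, ok := cache.items[id]; ok {
		el.Value.(*entry).value = value
		cache.order.MoveToFront(el)
		return nil
	}
	cache.items[id] = cache.order.PushFront(&entry{key: id, value: value})
	if cache.order.Len() > cache.max {
		oldest := cache.order.Back() // evict the least recently used
		cache.order.Remove(oldest)
		delete(cache.items, oldest.Value.(*entry).key)
	}
	return nil
}
```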
I think ristretto is a good choice as it is in-memory and memory-bounded. The value type of the cache is bytes.Buffer. The content cache will be added to the Handler struct (line 31).
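A sketch of how ristretto could back the content cache, with the compressed byte size as the entry cost (the config values and key are illustrative, roughly following the ristretto README):

```go
package main

import (
	"bytes"
	"fmt"

	"github.com/dgraph-io/ristretto"
)

func main() {
	// Illustrative config: MaxCost bounds total memory when each
	// entry's cost is its byte size.
	cache, err := ristretto.NewCache(&ristretto.Config{
		NumCounters: 1e7,     // keys to track access frequency for
		MaxCost:     1 << 28, // ~256 MiB budget for compressed content
		BufferItems: 64,      // keys per Get buffer
	})
	if err != nil {
		panic(err)
	}

	var buf bytes.Buffer
	buf.WriteString("...gzipped bytes...")
	// Cost = compressed size keeps the cache memory bounded.
	cache.Set("/static/app.js", buf, int64(buf.Len()))
	cache.Wait() // Set is asynchronous; wait so Get below can see it

	if v, found := cache.Get("/static/app.js"); found {
		b := v.(bytes.Buffer)
		fmt.Println("cached bytes:", b.Len())
	}
}
```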
The placement of the cache sounds good to me. I'd like to consider the cache logic more:
*typo: it will be io.WriteCloser and directly use gzip.Writer or brotli.Writer
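For reference, a minimal sketch of compressing into an in-memory buffer through the io.WriteCloser interface (stdlib gzip shown; a brotli writer such as github.com/andybalholm/brotli exposes the same Write/Close shape):

```go
package main

import (
	"bytes"
	"compress/gzip"
	"fmt"
	"io"
)

// compress writes src through a gzip.Writer (an io.WriteCloser) into an
// in-memory buffer; Close must be called to flush the final gzip frame.
func compress(src []byte) (*bytes.Buffer, error) {
	var buf bytes.Buffer
	var w io.WriteCloser = gzip.NewWriter(&buf)
	if _, err := w.Write(src); err != nil {
		return nil, err
	}
	if err := w.Close(); err != nil {
		return nil, err
	}
	return &buf, nil
}

func main() {
	buf, err := compress([]byte("hello hello hello"))
	if err != nil {
		panic(err)
	}
	fmt.Println("compressed size:", buf.Len())
}
```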
Ristretto supports concurrent Get & Set, but how about the initial cache content? Suppose there are 100 requests for the same uncached file: we'd like to compress the file only once, and the 100 requests should wait for the compression to complete before serving the same compressed content.
Does it mean it would never be compressed, or be compressed from scratch every time the file is requested?
The user should be able to configure this in the usual way (command-line flags or environment variables). You may want to try setting up the managed-sites mode for a general deployment setup.
What about this: make a map of mutexes with the hash as the key; on a cache miss, lock the mutex for that file, compress, add the result to the cache, then unlock.
If there are many requests for the same file, the first request will trigger compression and insertion into the cache; after unlocking, the other requests will see the compressed file in the cache, so they will not need to compress again. If a file exceeds the max cache size, it would never be compressed: it is most likely a media file that has already undergone compression according to its file format.
Sounds good to me!
If we have an additional map of mutexes, we need to be careful to ensure each mutex has the same lifetime as the actual value. That is, when the value is evicted from the cache (maybe due to the LRU policy), the mutex is also deleted from the map.
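A sketch of the proposed mutex map with that lifetime concern marked (all names here are hypothetical):

```go
package main

import "sync"

// lockMap sketches the proposed map of per-file mutexes. It assumes the
// content cache itself (e.g. ristretto) is safe for concurrent Get/Set,
// so these mutexes only serialize compression of the same file.
type lockMap struct {
	mu    sync.Mutex
	locks map[string]*sync.Mutex
}

func newLockMap() *lockMap {
	return &lockMap{locks: make(map[string]*sync.Mutex)}
}

// lockFor returns the mutex for a file hash, creating it on first use.
// The caller locks it, re-checks the cache, compresses on a true miss,
// stores the result, then unlocks.
func (m *lockMap) lockFor(hash string) *sync.Mutex {
	m.mu.Lock()
	defer m.mu.Unlock()
	l, ok := m.locks[hash]
	if !ok {
		l = &sync.Mutex{}
		m.locks[hash] = l
	}
	return l
}

// remove must mirror cache eviction (the lifetime concern above): when
// the LRU policy drops a value, its mutex should be dropped too.
func (m *lockMap) remove(hash string) {
	m.mu.Lock()
	defer m.mu.Unlock()
	delete(m.locks, hash)
}
```

An alternative that avoids managing mutex lifetimes by hand is golang.org/x/sync/singleflight, whose Group.Do deduplicates concurrent calls for the same key.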
content cache
Should I implement compression and the content cache as middlewares instead?
That sounds good to me. However, please note that the cache and the logic that handles caching should be separated, so the cache middleware would add the compressed content to the cache if needed (e.g. according to the threshold).
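A sketch of that separation (all names hypothetical): the cache middleware only consults and fills the store, while compression lives in its own middleware.

```go
package main

import "net/http"

// Cache stands in for the actual store (e.g. ristretto) so the storage
// and the caching logic stay separated.
type Cache interface {
	Get(key string) ([]byte, bool)
	Set(key string, value []byte)
}

// cacheMiddleware only consults and fills the cache; compression itself
// lives in a separate middleware.
func cacheMiddleware(c Cache, next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if body, ok := c.Get(r.URL.Path); ok {
			w.Header().Set("Content-Encoding", "gzip")
			w.Write(body) // serve cached compressed content directly
			return
		}
		// On a miss, the inner compression middleware produces the
		// compressed response; a real implementation would capture it
		// with a wrapped ResponseWriter and call c.Set (subject to the
		// size threshold) before returning.
		next.ServeHTTP(w, r)
	})
}
```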
Above: done 1, 2, 4; took 3 hours.
If the file size exceeds the threshold (about >1 MB), we may assume it is an already-compressed asset (e.g. PNG/ZIP).
TODO: the expected work order is compress -> cache, so we need to pay attention to the order of applying the middleware.
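Reusing the hypothetical names from the middleware sketch above (plus an assumed `compressMiddleware` and `contentCache`), the wrap order that realizes compress -> cache might look like:

```go
// Hypothetical wiring: the cache middleware wraps the compression
// middleware, so on a miss the response is compressed first and the
// cache then stores the already-compressed bytes.
handler := cacheMiddleware(contentCache,
	compressMiddleware(
		http.FileServer(http.Dir("./static"))))
http.ListenAndServe(":8080", handler)
```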
TODO: review https://github.com/bxcodec/httpcache, which uses http.Transport and http.RoundTripper. Conclusion: code the cache while referring to the code of the last two caches.
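For context, a minimal sketch of the transport/RoundTripper caching shape (this is the general pattern, not bxcodec/httpcache's actual API; the type and field names are illustrative):

```go
package main

import (
	"bytes"
	"io"
	"net/http"
	"sync"
)

// cachingTransport is an illustrative http.RoundTripper that caches
// response bodies by URL.
type cachingTransport struct {
	mu    sync.Mutex
	inner http.RoundTripper
	store map[string][]byte
}

func (t *cachingTransport) RoundTrip(req *http.Request) (*http.Response, error) {
	key := req.URL.String()

	t.mu.Lock()
	body, ok := t.store[key]
	t.mu.Unlock()
	if ok {
		// Serve a synthetic response from the cached bytes.
		return &http.Response{
			StatusCode: http.StatusOK,
			Body:       io.NopCloser(bytes.NewReader(body)),
			Request:    req,
		}, nil
	}

	resp, err := t.inner.RoundTrip(req)
	if err != nil {
		return nil, err
	}
	b, err := io.ReadAll(resp.Body)
	resp.Body.Close()
	if err != nil {
		return nil, err
	}

	t.mu.Lock()
	t.store[key] = b
	t.mu.Unlock()

	// Rewind the body so the caller can still read it.
	resp.Body = io.NopCloser(bytes.NewReader(b))
	return resp, nil
}
```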
Serve gzipped/brotli compressed content, with a fixed-size in-memory cache.