High performance in-memory cache


Otter is one of the most powerful caching libraries for Go, built on research in caching and concurrent data structures. Otter also draws on the experience of designing caching libraries in other languages (for example, caffeine).


✨ Features

  • Simple API: Just set the parameters you want in the builder and enjoy
  • Autoconfiguration: Otter is automatically configured based on the parallelism of your application
  • Generics: You can safely use any comparable types as keys and any types as values
  • TTL: Expired values will be automatically deleted from the cache
  • Cost-based eviction: Otter supports eviction based on the cost of each entry
  • Deletion listener: You can pass a callback function in the builder that will be called when an entry is deleted from the cache (see the sketch after this list)
  • Stats: You can collect various usage statistics
  • Excellent throughput: Otter can handle a huge number of requests
  • Great hit ratio: The new S3-FIFO algorithm is used, which shows excellent results
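
A minimal sketch of the deletion listener and stats features from the list above. The DeletionListener builder method, the otter.DeletionCause type, and the Stats() accessor are assumptions here, extrapolated from the builder API used in the examples below; check the Go Reference for the exact names and signatures.

package main

import (
    "fmt"
    "time"

    "github.com/maypok86/otter"
)

func main() {
    // NOTE: DeletionListener, otter.DeletionCause and cache.Stats() are
    // assumptions based on the builder API used elsewhere in this README;
    // consult the Go Reference for the exact signatures.
    cache, err := otter.MustBuilder[string, string](10_000).
        CollectStats().
        DeletionListener(func(key string, value string, cause otter.DeletionCause) {
            fmt.Printf("entry deleted: %s (%v)\n", key, cause)
        }).
        WithTTL(time.Hour).
        Build()
    if err != nil {
        panic(err)
    }

    cache.Set("key", "value")

    // explicit deletion triggers the listener (it may run asynchronously)
    cache.Delete("key")
    time.Sleep(100 * time.Millisecond)

    // print the collected usage statistics
    fmt.Println(cache.Stats())

    cache.Close()
}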

🗃 Related works

Otter's design is based on a number of papers on caching and concurrent data structures.

📚 Usage

📋 Requirements

  • Go 1.19+

🛠️ Installation

go get -u github.com/maypok86/otter

✏️ Examples

Otter uses a builder pattern that allows you to conveniently create a cache instance with different parameters.

Cache with constant TTL

package main

import (
    "fmt"
    "time"

    "github.com/maypok86/otter"
)

func main() {
    // create a cache with capacity equal to 10000 elements
    cache, err := otter.MustBuilder[string, string](10_000).
        CollectStats().
        Cost(func(key string, value string) uint32 {
            return 1
        }).
        WithTTL(time.Hour).
        Build()
    if err != nil {
        panic(err)
    }

    // set item with ttl (1 hour) 
    cache.Set("key", "value")

    // get value from cache
    value, ok := cache.Get("key")
    if !ok {
        panic("not found key")
    }
    fmt.Println(value)

    // delete item from cache
    cache.Delete("key")

    // delete data and stop goroutines
    cache.Close()
}

Cache with variable TTL

package main

import (
    "fmt"
    "time"

    "github.com/maypok86/otter"
)

func main() {
    // create a cache with capacity equal to 10000 elements
    cache, err := otter.MustBuilder[string, string](10_000).
        CollectStats().
        Cost(func(key string, value string) uint32 {
            return 1
        }).
        WithVariableTTL().
        Build()
    if err != nil {
        panic(err)
    }

    // set item with ttl (1 hour)
    cache.Set("key1", "value1", time.Hour)
    // set item with ttl (1 minute)
    cache.Set("key2", "value2", time.Minute)

    // get value from cache
    value, ok := cache.Get("key1")
    if !ok {
        panic("not found key")
    }
    fmt.Println(value)

    // delete item from cache
    cache.Delete("key1")

    // delete data and stop goroutines
    cache.Close()
}

📊 Performance

The benchmark code can be found here.

🚀 Throughput

Throughput benchmarks are a Go port of the caffeine benchmarks. This microbenchmark compares the throughput of caches on a zipf distribution, which exposes various inefficiencies in the implementations.
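
For illustration only (this is not the benchmark suite linked below), a zipf-keyed read benchmark in Go could look roughly like the following sketch; the key-space size, zipf parameters, and read-only workload are illustrative assumptions.

package bench

import (
    "math/rand"
    "testing"
    "time"

    "github.com/maypok86/otter"
)

func BenchmarkGetZipf(b *testing.B) {
    cache, err := otter.MustBuilder[uint64, uint64](10_000).
        WithTTL(time.Hour).
        Build()
    if err != nil {
        b.Fatal(err)
    }
    defer cache.Close()

    // pre-generate zipf-distributed keys so that key generation
    // does not dominate the measurement
    zipf := rand.NewZipf(rand.New(rand.NewSource(1)), 1.01, 1, 1<<20)
    keys := make([]uint64, 1<<16)
    for i := range keys {
        keys[i] = zipf.Uint64()
        cache.Set(keys[i], keys[i])
    }

    b.ResetTimer()
    b.RunParallel(func(pb *testing.PB) {
        i := 0
        for pb.Next() {
            cache.Get(keys[i&(len(keys)-1)])
            i++
        }
    })
}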

You can find results here.

🎯 Hit ratio

The hit ratio simulator tests caches on various traces:

  1. Synthetic (zipf distribution)
  2. Traditional (widely known and used in various projects and papers)
  3. Modern (recently collected from the production of the largest companies in the world)

You can find results here.

💾 Memory consumption

The memory overhead benchmark shows how much additional memory the cache will require at different capacities.

You can find results here.

👏 Contribute

Contributions are welcome as always. Before submitting a new PR, please open an issue first so community members can discuss it. For more information, please see the contribution guidelines.

Additionally, you might find existing open issues that can help with improvements.

This project follows a standard code of conduct so that you can understand what actions will and will not be tolerated.

📄 License

This project is Apache 2.0 licensed, as found in the LICENSE file.