To run an example, you must first enter that example's directory. For instance, to run the map example, switch to the map directory and then run it with Python:
```bash
git clone https://github.com/zilliztech/gpt-cache
cd gpt-cache
cd examples/map
python map_manager.py
```
If an example uses a model or a heavyweight third-party library (e.g. faiss, towhee), the first run may take some time because it needs to download the model runtime environment, model data, and other dependencies. Subsequent runs will be significantly faster.
How to use a map to cache data.
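The pattern in map_manager.py looks roughly like the sketch below. This is a minimal sketch assuming the later GPTCache-style interface (cache.init, cache.set_openai_key, and the openai adapter); the import paths in this repository's example may differ, so treat map_manager.py as the authoritative reference.

```python
from gptcache import cache
from gptcache.adapter import openai

# Initialize the cache with its default in-memory map data manager.
# Question/answer pairs are stored by key, so a repeated question is
# answered from the cache instead of calling the LLM again.
cache.init()
cache.set_openai_key()

answer = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "what is github?"}],
)
print(answer)
```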
How to use SQLite to store the scalar data and Faiss to query the vector data.
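A rough sketch of that setup, again assuming the GPTCache-style manager API (the CacheBase/VectorBase names, the dimension parameter, and the mock embedding are assumptions; see the corresponding example in the repo for the exact code):

```python
import numpy as np

from gptcache import cache
from gptcache.manager import get_data_manager, CacheBase, VectorBase
from gptcache.similarity_evaluation.distance import SearchDistanceEvaluation

# Hypothetical stand-in embedding: the real example may use a proper model
# (see the Towhee example below); a fixed-size random vector keeps this sketch self-contained.
DIM = 8

def mock_embedding(prompt, **kwargs):
    return np.random.random(DIM).astype("float32")

cache.init(
    embedding_func=mock_embedding,
    # SQLite stores the scalar data (questions, answers); Faiss indexes the vectors.
    data_manager=get_data_manager(CacheBase("sqlite"),
                                  VectorBase("faiss", dimension=DIM)),
    similarity_evaluation=SearchDistanceEvaluation(),
)
```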
Building on the above example, use Towhee for the embedding operation.
Note: the default embedding model only supports English. For Chinese, you can use the uer/albert-base-chinese-cluecorpussmall model; for other languages, use a corresponding model.
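Concretely, the embedding step amounts to plugging a Towhee-backed embedding function into the cache. A minimal sketch assuming GPTCache's Towhee embedding wrapper (the `model` keyword for selecting the Chinese model, and the `to_embeddings`/`dimension` attributes, are assumptions; check the example for the exact signature):

```python
from gptcache import cache
from gptcache.embedding import Towhee
from gptcache.manager import get_data_manager, CacheBase, VectorBase
from gptcache.similarity_evaluation.distance import SearchDistanceEvaluation

# Default model is English-only; for Chinese something like
# Towhee(model="uer/albert-base-chinese-cluecorpussmall") would be used instead
# (keyword name assumed).
towhee = Towhee()

cache.init(
    embedding_func=towhee.to_embeddings,
    data_manager=get_data_manager(CacheBase("sqlite"),
                                  VectorBase("faiss", dimension=towhee.dimension)),
    similarity_evaluation=SearchDistanceEvaluation(),
)
```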
How to use SQLite to store the scalar data and Milvus or Zilliz Cloud to store the vector data.
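Switching the vector store from Faiss to Milvus (or Zilliz Cloud) only changes the VectorBase configuration; everything else stays the same. A sketch under the same assumptions as above (host/port values are placeholders; Zilliz Cloud additionally needs its own connection endpoint and credentials):

```python
from gptcache import cache
from gptcache.embedding import Towhee
from gptcache.manager import get_data_manager, CacheBase, VectorBase
from gptcache.similarity_evaluation.distance import SearchDistanceEvaluation

towhee = Towhee()

cache.init(
    embedding_func=towhee.to_embeddings,
    data_manager=get_data_manager(
        CacheBase("sqlite"),
        # Point host/port at a local Milvus, or at a Zilliz Cloud instance with its credentials.
        VectorBase("milvus", host="localhost", port="19530",
                   dimension=towhee.dimension),
    ),
    similarity_evaluation=SearchDistanceEvaluation(),
)
```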
The benchmark script for SQLite + Faiss + Towhee
Test data source: some information is randomly scraped from web pages (origin), and then ChatGPT produces corresponding similar data (similar).
- threshold: the answer evaluation threshold. A smaller value requires higher consistency with the cached content, which lowers the cache hit rate but also reduces wrong hits; a larger value is more tolerant, raising the cache hit rate but also producing more wrong hits (a sketch after this list shows how these counters are tallied).
- positive: an effective cache hit, i.e. searching with similar returns the same result as origin
- negative: a cache hit with a wrong result, i.e. searching with similar returns a different result from origin
- fail count: cache miss
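The counters in the results table follow directly from the definitions above. Below is a hypothetical sketch of how the tally could be computed (function and variable names are made up, and the assumption that a smaller search distance means a closer match is inferred from pair_evaluation being distance-based; the benchmark script is the authoritative implementation):

```python
def tally(results, threshold):
    """results: (search_distance, cached_answer, origin_answer) per `similar` query."""
    positive = negative = fail = 0
    for distance, cached_answer, origin_answer in results:
        if distance > threshold:
            fail += 1          # nothing close enough in the cache: miss
        elif cached_answer == origin_answer:
            positive += 1      # hit, and the cached answer matches origin
        else:
            negative += 1      # hit, but the cached answer is wrong
    return positive, negative, fail
```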
data file: mock_data.json; similarity evaluation func: pair_evaluation (search distance)
| threshold | average time | positive | negative | fail count |
|---|---|---|---|---|
| 20 | 0.04s | 455 | 27 | 517 |
| 50 | 0.09s | 871 | 86 | 42 |
| 100 | 0.12s | 905 | 93 | 1 |