diff --git a/detailed_guides/PERMANENT-MEMORY.md b/detailed_guides/PERMANENT-MEMORY.md
index f7786d6b..6bb3af99 100644
--- a/detailed_guides/PERMANENT-MEMORY.md
+++ b/detailed_guides/PERMANENT-MEMORY.md
@@ -1,5 +1,7 @@
-# Permanent Memory and Conversations
-Permanent memory has now been implemented into the bot, using the OpenAI Ada embeddings endpoint, and Pinecone.
+# Permanent Memory and Conversations
+We are migrating to [Qdrant](https://qdrant.tech/) as our vector database backend and moving away from Pinecone. Qdrant is an excellent vector database, and the best one we've tested and used so far.
+
+Permanent memory has now been implemented into the bot, using the OpenAI Ada embeddings endpoint, and Pinecone.
 
 Pinecone is a vector database. The OpenAI Ada embeddings endpoint turns pieces of text into embeddings. The way that this feature works is by embedding the user prompts and the GPT responses, storing them in a pinecone index, and then retrieving the most relevant bits of conversation whenever a new user prompt is given in a conversation.
 
@@ -22,4 +24,4 @@ To manually create an index instead of the bot automatically doing it, go to the
 Then, name the index `conversation-embeddings`, set the dimensions to `1536`, and set the metric to `DotProduct`:
 
-
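
The guide above describes the embed-store-retrieve loop in prose. The following sketch shows what that loop could look like in Python, assuming the legacy pre-1.0 `openai` package and the `pinecone-client` 2.x API; it is an illustration rather than the bot's actual implementation. The index name `conversation-embeddings` and the Ada embedding model come from the guide, while the API key placeholders, the `environment` value, the `embed`/`store_exchange`/`retrieve_relevant` helper names, and the prompt/response text format are all hypothetical.

```python
import openai
import pinecone

# Placeholder credentials and environment; replace with your own values.
openai.api_key = "OPENAI_API_KEY"
pinecone.init(api_key="PINECONE_API_KEY", environment="us-east1-gcp")
index = pinecone.Index("conversation-embeddings")


def embed(text: str) -> list[float]:
    """Turn a piece of text into a 1536-dimensional Ada embedding."""
    response = openai.Embedding.create(input=text, model="text-embedding-ada-002")
    return response["data"][0]["embedding"]


def store_exchange(exchange_id: str, user_prompt: str, gpt_response: str) -> None:
    """Embed a user prompt / GPT response pair and upsert it into the index."""
    text = f"User: {user_prompt}\nBot: {gpt_response}"
    index.upsert(vectors=[(exchange_id, embed(text), {"text": text})])


def retrieve_relevant(new_prompt: str, top_k: int = 5) -> list[str]:
    """Fetch the most relevant stored bits of conversation for a new prompt."""
    results = index.query(vector=embed(new_prompt), top_k=top_k, include_metadata=True)
    return [match.metadata["text"] for match in results.matches]
```

The retrieved snippets would then be prepended to the new prompt before it is sent to the model, which is how older parts of the conversation stay available beyond the context window.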
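
The second hunk describes creating the index manually in the Pinecone console (`conversation-embeddings`, 1536 dimensions, DotProduct metric). A rough programmatic equivalent is sketched below, together with what a comparable Qdrant collection might look like given the migration mentioned at the top of the guide. The Pinecone `environment` value and the local Qdrant URL are assumptions, not values from the guide.

```python
import pinecone
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, VectorParams

# Pinecone: mirror the manual console settings from the guide.
pinecone.init(api_key="PINECONE_API_KEY", environment="us-east1-gcp")
if "conversation-embeddings" not in pinecone.list_indexes():
    pinecone.create_index("conversation-embeddings", dimension=1536, metric="dotproduct")

# Qdrant: a roughly equivalent collection, assuming a locally running Qdrant instance.
client = QdrantClient(url="http://localhost:6333")
client.recreate_collection(
    collection_name="conversation-embeddings",
    vectors_config=VectorParams(size=1536, distance=Distance.DOT),
)
```

In both cases the vector size must match the 1536-dimensional output of the Ada embeddings endpoint, and the dot-product metric matches the `DotProduct` setting the guide specifies for the console.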