From a7f055c7b9c6c6e912212cbcd47b4c030cb6b70b Mon Sep 17 00:00:00 2001 From: fm1320 Date: Sun, 29 Dec 2024 23:43:20 +0000 Subject: [PATCH] add integrations and githubchat in docs --- docs/source/get_started/integrations.rst | 62 ++++-------- docs/source/tutorials/index.rst | 3 + docs/source/tutorials/rag_with_memory.rst | 117 ++++++++++++++++++++++ 3 files changed, 137 insertions(+), 45 deletions(-) create mode 100644 docs/source/tutorials/rag_with_memory.rst diff --git a/docs/source/get_started/integrations.rst b/docs/source/get_started/integrations.rst index be368091..44f8313c 100644 --- a/docs/source/get_started/integrations.rst +++ b/docs/source/get_started/integrations.rst @@ -23,18 +23,6 @@ Model Providers Anthropic -
- - Mistral AI - Mistral AI - -
-
- - Amazon Bedrock - Amazon Bedrock - -
Groq @@ -61,22 +49,10 @@ Vector Databases LanceDB
-
- - Pinecone - Pinecone - -
-
- - Milvus - Milvus - -
-Embedding Models -------------- +Embedding and Reranking Models +------------------------------ .. raw:: html    OpenAI Embeddings +
+ + Cohere Rerank + Cohere Rerank + +
.. raw:: html @@ -132,26 +114,16 @@ Embedding Models } -Quick Start ----------- - -To use any of these integrations, first install AdalFlow with the appropriate extras: - -.. code-block:: bash - - # For model providers - pip install "adalflow[openai,anthropic,mistral,bedrock,groq]" - - # For vector databases - pip install "adalflow[qdrant,lancedb]" - -See the :ref:`installation guide ` for more details. - Usage Examples ------------ -Check out our tutorials for detailed examples of using these integrations: +Have a look at our comprehensive :ref:`tutorials ` featuring all of these integrations, including: + +- Model Clients and LLM Integration +- Vector Databases and RAG +- Embeddings and Reranking +- Agent Development +- Evaluation and Optimization +- Logging and Tracing -- :ref:`Model Clients ` -- :ref:`Vector Databases ` -- :ref:`Embeddings ` +Each tutorial provides practical examples and best practices for building production-ready LLM applications. diff --git a/docs/source/tutorials/index.rst b/docs/source/tutorials/index.rst index 4985f92b..754d0217 100644 --- a/docs/source/tutorials/index.rst +++ b/docs/source/tutorials/index.rst @@ -166,6 +166,8 @@ Putting it all together - Description * - :doc:`rag_playbook` - Comprehensive RAG playbook according to the sota research and the best practices in the industry. + * - :doc:`rag_with_memory` + - Building RAG systems with conversation memory for enhanced context retention and follow-up handling. .. toctree:: @@ -182,6 +184,7 @@ Putting it all together text_splitter db rag_playbook + rag_with_memory diff --git a/docs/source/tutorials/rag_with_memory.rst b/docs/source/tutorials/rag_with_memory.rst new file mode 100644 index 00000000..f1bfb822 --- /dev/null +++ b/docs/source/tutorials/rag_with_memory.rst @@ -0,0 +1,117 @@ +.. 
_tutorials-rag_with_memory:
+
+RAG with Memory
+===============
+
+This guide demonstrates how to implement a RAG system with conversation memory using AdalFlow, based on our `github_chat <https://github.com/SylphAI-Inc/github_chat>`_ reference implementation.
+
+Overview
+--------
+
+The github_chat project is a practical RAG implementation that allows you to chat with GitHub repositories while maintaining conversation context. It demonstrates:
+
+- Code-aware responses using RAG
+- Memory management for conversation context
+- Support for multiple programming languages
+- Both web and command-line interfaces
+
+Architecture
+------------
+
+The system is built from several key components:
+
+Data Pipeline
+^^^^^^^^^^^^^
+
+.. code-block:: text
+
+    Input Documents → Text Splitter → Embedder → Vector Database
+
+The data pipeline processes repository content through:
+
+1. Document reading and preprocessing
+2. Text splitting for optimal chunk sizes
+3. Embedding generation
+4. Storage in the vector database
+
+RAG System
+^^^^^^^^^^
+
+.. code-block:: text
+
+    User Query → RAG Component → [FAISS Retriever, Generator, Memory]
+                                              ↓
+                                          Response
+
+The RAG system includes:
+
+- FAISS-based retrieval for efficient similarity search
+- LLM-based response generation
+- A memory component for conversation history
+
+Memory Management
+-----------------
+
+The memory system maintains conversation context through:
+
+1. Dialog turn tracking
+2. Context preservation
+3. Dynamic memory updates
+
+This enables:
+
+- Follow-up questions
+- References to previous context
+- More coherent conversations
+
+Quick Start
+-----------
+
+1. Install the project:
+
+.. code-block:: bash
+
+    git clone https://github.com/SylphAI-Inc/github_chat
+    cd github_chat
+    poetry install
+
+2. Set up your OpenAI API key:
+
+.. code-block:: bash
+
+    mkdir -p .streamlit
+    echo 'OPENAI_API_KEY = "your-key-here"' > .streamlit/secrets.toml
+
+3. Run the application:
+
+.. code-block:: bash
+
+    # Web interface
+    poetry run streamlit run app.py
+
+    # Repository analysis
+    poetry run streamlit run app_repo.py
+
+Example Usage
+-------------
+
+Here are some example queries you can try:
+
+.. code-block:: text
+
+    "What does the RAG class do?"
+    "Can you explain how the memory system works?"
+    "Show me the implementation of text splitting"
+    "How is the conversation context maintained?"
+
+Implementation Details
+----------------------
+
+The system uses AdalFlow's components:
+
+- :class:`core.embedder.Embedder` for document embedding
+- :class:`core.retriever.Retriever` for similarity search
+- :class:`core.generator.Generator` for response generation
+- Custom memory management for conversation tracking
+
+For detailed implementation examples, check out the `github_chat repository <https://github.com/SylphAI-Inc/github_chat>`_.
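The memory flow described in the tutorial (dialog turn tracking feeding both conversation history and retrieved context into the prompt) can be sketched in plain Python. This is a framework-agnostic illustration, not AdalFlow's or github_chat's actual API: the names `DialogTurn`, `ConversationMemory`, and `build_prompt` are invented for this sketch, and the retrieved chunks are hard-coded where a real system would call a FAISS retriever.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DialogTurn:
    """One user/assistant exchange."""
    user_query: str
    assistant_response: str

@dataclass
class ConversationMemory:
    """Tracks dialog turns and renders recent ones into prompt context."""
    turns: List[DialogTurn] = field(default_factory=list)
    max_turns: int = 5  # keep the prompt bounded as the conversation grows

    def add_turn(self, query: str, response: str) -> None:
        self.turns.append(DialogTurn(query, response))

    def as_context(self) -> str:
        recent = self.turns[-self.max_turns:]
        return "\n".join(
            f"User: {t.user_query}\nAssistant: {t.assistant_response}"
            for t in recent
        )

def build_prompt(memory: ConversationMemory,
                 retrieved_chunks: List[str],
                 query: str) -> str:
    """Combine conversation history, retrieved context, and the new query."""
    return (
        "Conversation so far:\n" + memory.as_context() + "\n\n"
        "Retrieved context:\n" + "\n".join(retrieved_chunks) + "\n\n"
        "User question: " + query
    )

memory = ConversationMemory()
memory.add_turn(
    "What does the RAG class do?",
    "It wires the retriever, generator, and memory together.",
)
# A follow-up question can now reference "it" because the prior turn
# is carried into the prompt alongside freshly retrieved chunks.
prompt = build_prompt(memory, ["class RAG: ..."], "How does it track memory?")
print(prompt)
```

Because `as_context` only renders the last `max_turns` exchanges, the prompt stays bounded while still letting follow-up questions resolve references to earlier turns.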