- 1.1 LLMs and RAG
- 1.2 Preparing the environment
- 1.3 Retrieval and the basics of search
- 1.4 OpenAI API
- 1.5 Simple RAG with OpenAI
- 1.6 Text search with Elasticsearch
- 2.1 Getting an environment with a GPU
- 2.2 Open-source models from the Hugging Face Hub
- 2.3 Running LLMs on a CPU with Ollama
- 2.4 Creating a simple UI with Streamlit
- 3.1 Vector search
- 3.2 Creating and indexing embeddings
- 3.3 Vector search with Elasticsearch
- 4.1 Monitoring
- 4.2 Computing metrics to monitor the quality of LLM answers
- 4.3 Tracking chat history and user feedback
- 4.4 Creating dashboards with Grafana for visualization
- 5.1 Ingesting data with Mage
- 6.1 Best practices
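The core flow that ties modules 1 and 3 together is: retrieve relevant documents, build a prompt from them, and send that prompt to an LLM. A minimal sketch of that flow is below; the keyword-overlap scorer is a toy stand-in for Elasticsearch, and the document texts, IDs, and the commented-out OpenAI call are illustrative assumptions, not course code.

```python
import re

# Toy document collection standing in for an indexed knowledge base.
docs = [
    {"id": "faq-1", "text": "Use pip install to set up the project dependencies."},
    {"id": "faq-2", "text": "Elasticsearch indexes documents for fast text search."},
    {"id": "faq-3", "text": "Streamlit lets you build a simple chat UI in Python."},
]

def tokenize(text):
    """Lowercase and split on word characters, dropping punctuation."""
    return set(re.findall(r"\w+", text.lower()))

def search(query, docs, top_k=1):
    """Rank documents by word overlap with the query (toy retriever,
    standing in for a real search backend such as Elasticsearch)."""
    q_words = tokenize(query)
    scored = [(len(q_words & tokenize(d["text"])), d) for d in docs]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [d for score, d in scored[:top_k] if score > 0]

def build_prompt(query, retrieved):
    """Stuff the retrieved documents into the prompt as context."""
    context = "\n".join(d["text"] for d in retrieved)
    return (
        "Answer the question using only this context.\n\n"
        f"CONTEXT:\n{context}\n\n"
        f"QUESTION: {query}"
    )

query = "How do I set up text search with Elasticsearch?"
retrieved = search(query, docs)
prompt = build_prompt(query, retrieved)

# In the course, this prompt would then go to an LLM, e.g. via the
# OpenAI client:
# client.chat.completions.create(
#     model="gpt-4o-mini",  # model name is an assumption
#     messages=[{"role": "user", "content": prompt}],
# )
```

The later modules swap out each piece of this sketch: vector search replaces the keyword scorer, open-source models replace the OpenAI call, and monitoring wraps around the whole loop.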