Workshop to create a RAG (Retrieval Augmented Generation) application using LLMs.
This workshop is built with Python 🐍 (Jupyter Notebook) and InterSystems IRIS.
The main purpose is to walk you through the steps needed to create a RAG application using an LLM and a vector database.
You can find more in-depth information at https://learning.intersystems.com.
- Git
- Docker (if you are using Windows, make sure you set your Docker installation to use "Linux containers").
- Docker Compose
- Visual Studio Code + InterSystems ObjectScript VSCode Extension
Build and run the environment we will use during the workshop:

Clone the repository:

```bash
git clone https://github.com/intersystems-ib/workshop-llm
cd workshop-llm
```

Build the image:

```bash
docker compose build
```

Run the containers:

```bash
docker compose up -d
```
After running the containers, you should be able to access:
- InterSystems IRIS Management Portal. You can log in using `superuser` / `SYS`
- Jupyter Notebook
You have some medicine leaflets (in Spanish) in ./data.
This example is about creating a RAG Q&A application that can answer questions about those medicine leaflets.
Open Jupyter Notebook; there you can find:
- QA-PDF-LLM.ipynb - RAG example using MistralAI LLM
- QA-PDF-Local.ipynb - RAG example using a local LLM
You can run the project step by step or execute it all at once, as you prefer.
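The notebooks contain the full, working pipelines. As a rough orientation, a minimal sketch of the overall RAG flow is shown below. This is an assumption of how it can look with LangChain, the langchain-mistralai package, and the langchain-iris vector store integration; the file name, model, connection string, and parameter values are illustrative and may differ from the workshop code.

```python
# Minimal RAG sketch (illustrative only; assumes langchain, langchain-mistralai and
# langchain-iris are installed; names and connection details are NOT the workshop's exact code).
from langchain_community.document_loaders import PyPDFLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain_mistralai import MistralAIEmbeddings, ChatMistralAI
from langchain_iris import IRISVector

# 1. Load a leaflet PDF and split it into overlapping chunks
docs = PyPDFLoader("./data/example-leaflet.pdf").load()  # hypothetical file name
chunks = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100).split_documents(docs)

# 2. Embed the chunks and store the vectors in InterSystems IRIS
embeddings = MistralAIEmbeddings()  # reads MISTRAL_API_KEY from the environment
vectorstore = IRISVector.from_documents(
    documents=chunks,
    embedding=embeddings,
    collection_name="leaflets",
    connection_string="iris://superuser:SYS@localhost:1972/USER",  # hypothetical connection
)

# 3. Retrieve the most relevant chunks for a question and let the LLM answer from them
question = "¿Cuáles son los efectos secundarios?"
context = "\n\n".join(d.page_content for d in vectorstore.similarity_search(question, k=4))
llm = ChatMistralAI(model="mistral-small-latest")
answer = llm.invoke(f"Answer using only this context:\n{context}\n\nQuestion: {question}")
print(answer.content)
```

The local-LLM notebook follows the same retrieve-then-generate pattern; only the embedding and chat models change.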
This example is about a company called Holefoods that sells food with a hole in it :)
Using the company's sales data model, the goal is to create an assistant that can translate natural language questions into valid SQL that answers them; a minimal sketch of the idea follows the notebook list below.
In Jupyter Notebook, you will find:
- QA-SQL-LLM.ipynb - text-to-SQL example using an OpenAI LLM.
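The basic idea is to give the LLM a description of the Holefoods schema and ask it to return only SQL, which you can then run against IRIS. The sketch below is an assumption of how that can look with the OpenAI Python client; the schema description, table names, prompts, and model are illustrative and not the notebook's exact code.

```python
# Minimal text-to-SQL sketch (illustrative only; schema, table names and model are assumptions).
import os
from openai import OpenAI

SCHEMA = """
Table Holefoods.SalesTransaction(DateOfSale DATE, Product VARCHAR, Region VARCHAR, AmountOfSale NUMERIC)
"""  # hypothetical schema description passed to the model

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

def question_to_sql(question: str) -> str:
    """Ask the LLM to translate a natural-language question into a single SQL statement."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "You translate questions into InterSystems IRIS SQL. "
                        f"Use only this schema and return only the SQL statement:\n{SCHEMA}"},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content.strip()

sql = question_to_sql("What were total sales by region last year?")
print(sql)  # review the generated SQL before executing it against IRIS
```

In the notebook, the generated statement is then executed against the IRIS sales tables; always validate LLM-generated SQL before running it.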