To start with, we will introduce the HuggingFace Transformers library and show how to use it to generate text from a prompt. We will also explore some of the important concepts behind LLMs, such as embeddings.
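To build intuition for the embedding concept before diving in, here is a minimal sketch using only the standard library. The tiny hand-written vectors are purely illustrative (a real LLM learns an embedding matrix with thousands of dimensions), but they show the key idea: related tokens end up closer together in vector space.

```python
import math

# Hypothetical toy embedding table. In a real LLM this is a learned
# matrix with tens of thousands of rows (one per token) and hundreds
# or thousands of dimensions; the values below are made up.
EMBEDDINGS = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.7, 0.9],
    "apple": [0.1, 0.9, 0.2],
}

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: closer to 1.0 means more similar."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Related words score higher than unrelated ones.
print(cosine_similarity(EMBEDDINGS["king"], EMBEDDINGS["queen"]))
print(cosine_similarity(EMBEDDINGS["king"], EMBEDDINGS["apple"]))
```

This same similarity measure is what vector databases use to retrieve relevant documents, which is why embeddings matter for the rest of this project.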
If you already have Conda set up, run the following commands in the root directory to set up the environment:
make -f Makefile
pip install -r requirements.txt
Install Ollama. Follow the instructions to install Ollama locally, then run the Llama 2 7B model with the command below. It will serve Llama 2 on port 11434.
ollama run llama2
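Once the model is being served, you can talk to it over HTTP. The sketch below, using only the standard library, targets Ollama's `/api/generate` endpoint on the default port; the helper names (`build_payload`, `generate`) are our own, not part of any library.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default port

def build_payload(prompt, model="llama2"):
    """Build the JSON body for Ollama's /api/generate endpoint.

    stream=False asks Ollama for one complete JSON response
    instead of a stream of partial tokens.
    """
    return {"model": model, "prompt": prompt, "stream": False}

def generate(prompt, model="llama2"):
    """Send the prompt to the local Ollama server and return its reply text."""
    data = json.dumps(build_payload(prompt, model)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (requires the Ollama server from the step above to be running):
# print(generate("Why is the sky blue?"))
```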
- Indexing and Vector Storage library
- Serves the Ollama Llama2 7B model
- Vector Database
Create a database of PDFs (we used some chapters from Artificial Intelligence: A Modern Approach) and save them in a directory named 'data' at the root of this project. This serves as a private knowledge base for the LLM.
Place the data directory at the root of the project folder, then run main.ipynb, found in the src directory.
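To see what happens to those documents conceptually, here is a minimal, hypothetical sketch of the indexing and retrieval steps: documents are split into overlapping chunks, and the chunks most relevant to a question are retrieved and handed to the LLM. The real pipeline embeds chunks and stores them in a vector database; this sketch substitutes a naive word-overlap score so it runs standalone.

```python
def chunk_text(text, chunk_size=500, overlap=50):
    """Split a document into overlapping character chunks.

    Overlap keeps sentences that straddle a boundary visible in
    both neighbouring chunks, which helps retrieval quality.
    """
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
    return chunks

def score(chunk, query):
    """Naive stand-in for embedding similarity: count query words in the chunk."""
    lowered = chunk.lower()
    return sum(1 for word in query.lower().split() if word in lowered)

def retrieve(chunks, query, k=2):
    """Return the k chunks that best match the query."""
    return sorted(chunks, key=lambda c: score(c, query), reverse=True)[:k]
```

The retrieved chunks would then be prepended to the prompt sent to Llama 2, grounding its answer in the private knowledge base.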