llm-builder-update v0.3 (#93)
Covering new features:

* different RAG modes
* graph quality configuration
* webpages as sources
* more LLM models supported (not in public deployment, but for customers)
* better graph visualization (lexical + entity graph) + local filtering
* lots more details
jexp authored Jul 12, 2024
1 parent 879df35 commit 67aceb2
Showing 3 changed files with 201 additions and 48 deletions.
123 changes: 100 additions & 23 deletions modules/genai-ecosystem/pages/llm-graph-builder-deployment.adoc
@@ -11,13 +11,11 @@ include::_graphacademy_llm.adoc[]

== Prerequisites

You will need to have a Neo4j Database V5.18 or later with https://neo4j.com/docs/apoc/current/installation/[APOC installed^] to use this Knowledge Graph Builder.
You can use any https://neo4j.com/aura/[Neo4j Aura database^] (including the free tier database). Neo4j Aura automatically includes APOC and runs on the latest Neo4j version, making it a great choice to get started quickly.

You can also use the free trial in https://sandbox.neo4j.com[Neo4j Sandbox^], which also includes Graph Data Science.
If you want to use https://neo4j.com/product/developer-tools/#desktop[Neo4j Desktop^] instead, you need to set `NEO4J_URI=bolt://host.docker.internal` to allow the Docker container to access the database running on your computer.

== Docker-compose

@@ -46,6 +44,49 @@ You can then run Docker Compose to build and start all components:
docker-compose up --build
```
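Before running Docker Compose, configure your database connection and at least one model key in the `.env` file. A minimal sketch (all values are placeholders; the variable names are taken from the ENV reference below):

[source,env]
----
NEO4J_URI=neo4j://database:7687
NEO4J_USERNAME=neo4j
NEO4J_PASSWORD=password
OPENAI_API_KEY=sk-...
DIFFBOT_API_KEY=<your-diffbot-key>
----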

=== Configuring LLM Models

The following LLM models can be configured (the ones marked as default are supported out of the box):

* OpenAI GPT 3.5 and 4o (default)
* VertexAI (Gemini 1.0) (default)
* VertexAI (Gemini 1.5)
* Diffbot
* Bedrock models
* Anthropic
* OpenAI API compatible models like Ollama, Groq, Fireworks

To achieve that you need to set a number of environment variables:

In your `.env` file, add the following lines. You can of course also add other model configurations from these providers or any OpenAI API compatible provider.

[source,env]
----
LLM_MODEL_CONFIG_azure_ai_gpt_35="gpt-35,https://<deployment>.openai.azure.com/,<api-key>,<version>"
LLM_MODEL_CONFIG_anthropic_claude_35_sonnet="claude-3-5-sonnet-20240620,<api-key>"
LLM_MODEL_CONFIG_fireworks_llama_v3_70b="accounts/fireworks/models/llama-v3-70b-instruct,<api-key>"
LLM_MODEL_CONFIG_bedrock_claude_35_sonnet="anthropic.claude-3-sonnet-20240229-v1:0,<api-key>,<region>"
LLM_MODEL_CONFIG_ollama_llama3="llama3,http://host.docker.internal:11434"
LLM_MODEL_CONFIG_fireworks_qwen_72b="accounts/fireworks/models/qwen2-72b-instruct,<api-key>"
# Optional Frontend config
LLM_MODELS="diffbot,gpt-3.5,gpt-4o,azure_ai_gpt_35,azure_ai_gpt_4o,groq_llama3_70b,anthropic_claude_35_sonnet,fireworks_llama_v3_70b,bedrock_claude_35_sonnet,ollama_llama3,fireworks_qwen_72b"
----
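Note that the `<model_id>` suffix of each `LLM_MODEL_CONFIG_` variable (e.g. `ollama_llama3`) corresponds to an entry in the `LLM_MODELS` list, so the model shows up for selection on the frontend.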

In your `docker-compose.yml` you need to pass the variables through:

[source,yaml]
----
- LLM_MODEL_CONFIG_anthropic_claude_35_sonnet=${LLM_MODEL_CONFIG_anthropic_claude_35_sonnet-}
- LLM_MODEL_CONFIG_fireworks_llama_v3_70b=${LLM_MODEL_CONFIG_fireworks_llama_v3_70b-}
- LLM_MODEL_CONFIG_azure_ai_gpt_4o=${LLM_MODEL_CONFIG_azure_ai_gpt_4o-}
- LLM_MODEL_CONFIG_azure_ai_gpt_35=${LLM_MODEL_CONFIG_azure_ai_gpt_35-}
- LLM_MODEL_CONFIG_groq_llama3_70b=${LLM_MODEL_CONFIG_groq_llama3_70b-}
- LLM_MODEL_CONFIG_bedrock_claude_35_sonnet=${LLM_MODEL_CONFIG_bedrock_claude_35_sonnet-}
- LLM_MODEL_CONFIG_fireworks_qwen_72b=${LLM_MODEL_CONFIG_fireworks_qwen_72b-}
- LLM_MODEL_CONFIG_ollama_llama3=${LLM_MODEL_CONFIG_ollama_llama3-}
----
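For illustration only, here is a minimal sketch (a hypothetical helper, not the project's actual code) of how such a comma-separated `LLM_MODEL_CONFIG_<model_id>` value could be split into its parts:

[source,python]
----
import os

def parse_llm_config(model_id: str) -> dict:
    """Split a LLM_MODEL_CONFIG_<model_id> value into its comma-separated parts.

    Depending on the provider, the value holds a model name plus a base URL,
    API key, and/or version (see the .env examples above), so the meaning of
    each position varies by provider.
    """
    raw = os.environ.get(f"LLM_MODEL_CONFIG_{model_id}", "")
    parts = [p.strip() for p in raw.split(",")]
    parts += [""] * (4 - len(parts))  # pad missing trailing fields
    model, url_or_key, api_key, version = parts[:4]
    return {"model": model, "url_or_key": url_or_key,
            "api_key": api_key, "version": version}

# e.g. parse_llm_config("ollama_llama3")
# -> {'model': 'llama3', 'url_or_key': 'http://host.docker.internal:11434', ...}
----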

=== Additional configs

By default, the input sources will be: Local files, YouTube, Wikipedia and AWS S3.
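If you want to offer more sources on the frontend, you can override `REACT_APP_SOURCES` (see the Front-End Configuration table below). A sketch — the source identifiers beyond the documented `local,youtube,wiki,s3` are assumptions based on the sources described on the features page:

[source,env]
----
REACT_APP_SOURCES="local,youtube,wiki,s3,gcs,web"
----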
@@ -92,32 +133,68 @@ uvicorn score:app --reload

== ENV

=== Processing Configuration

[options="header",cols="m,a,m,a"]
|===
| Env Variable Name | Mandatory/Optional | Default Value | Description
| IS_EMBEDDING | Optional | true | Flag to enable text embedding for chunks
| ENTITY_EMBEDDING | Optional | false | Flag to enable entity embedding (id and description)
| KNN_MIN_SCORE | Optional | 0.94 | Minimum score for KNN algorithm for connecting similar Chunks
| NUMBER_OF_CHUNKS_TO_COMBINE | Optional | 6 | Number of chunks to combine when extracting entities
| UPDATE_GRAPH_CHUNKS_PROCESSED | Optional | 20 | Number of chunks processed before writing to the database and updating progress
| ENV | Optional | DEV | Deployment environment for the app (e.g. `DEV`)
| TIME_PER_CHUNK | Optional | 4 | Time per chunk for processing
| CHUNK_SIZE | Optional | 5242880 | Size in bytes of each chunk for processing
|===

=== Front-End Configuration

[options="header",cols="m,a,m,a"]
|===
| Env Variable Name | Mandatory/Optional | Default Value | Description
| BACKEND_API_URL | Optional | http://localhost:8000 | URL for backend API
| REACT_APP_SOURCES | Optional | local,youtube,wiki,s3 | List of input sources that will be available
| BLOOM_URL | Optional | https://workspace-preview.neo4j.io/workspace/explore?connectURL={CONNECT_URL}&search=Show+me+a+graph | URL for Bloom visualization
|===


=== GCP Cloud Integration

[options="header",cols="m,a,m,a"]
|===
| Env Variable Name | Mandatory/Optional | Default Value | Description
| GEMINI_ENABLED | Optional | False | Flag to enable Gemini
| GCP_LOG_METRICS_ENABLED | Optional | False | Flag to enable Google Cloud logs
| GOOGLE_CLIENT_ID | Optional | | Client ID for Google authentication for GCS upload
| GCS_FILE_CACHE | Optional | False | If set to True, will save the files to process into GCS. If set to False, will save the files locally
|===

=== LLM Model Configuration

[options="header",cols="m,a,m,a"]
|===
| Env Variable Name | Mandatory/Optional | Default Value | Description
| LLM_MODELS | Optional | diffbot,gpt-3.5,gpt-4o | Models available for selection on the frontend, used for entities extraction and Q&A Chatbot (other models: `gemini-1.0-pro, gemini-1.5-pro`)
| OPENAI_API_KEY | Optional | sk-... | API key for OpenAI (if enabled)
| DIFFBOT_API_KEY | Optional | | API key for Diffbot (if enabled)
| EMBEDDING_MODEL | Optional | all-MiniLM-L6-v2 | Model for generating the text embedding (all-MiniLM-L6-v2, openai, vertexai)
| GROQ_API_KEY | Optional | | API key for Groq
| GEMINI_ENABLED | Optional | False | Flag to enable Gemini
| LLM_MODEL_CONFIG_<model_id>="<model>,<base-url>,<api-key>,<version>" | Optional | | Configuration for additional LLM models
|===


=== LangChain and Neo4j Configuration

[options="header",cols="m,a,m,a"]
|===
| Env Variable Name | Mandatory/Optional | Default Value | Description
| NEO4J_URI | Optional | neo4j://database:7687 | URI for Neo4j database for the backend to connect to
| NEO4J_USERNAME | Optional | neo4j | Username for Neo4j database for the backend to connect to
| NEO4J_PASSWORD | Optional | password | Password for Neo4j database for the backend to connect to
| LANGCHAIN_API_KEY | Optional | | API key for LangSmith
| LANGCHAIN_PROJECT | Optional | | Project for LangSmith
| LANGCHAIN_TRACING_V2 | Optional | false | Flag to enable LangSmith tracing
| LANGCHAIN_ENDPOINT | Optional | https://api.smith.langchain.com | Endpoint for LangSmith API
|===
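For example, to enable LangSmith tracing you could add the following to your `.env` (the API key and project name are placeholders):

[source,env]
----
LANGCHAIN_TRACING_V2=true
LANGCHAIN_API_KEY=<your-langsmith-api-key>
LANGCHAIN_PROJECT=llm-graph-builder
LANGCHAIN_ENDPOINT=https://api.smith.langchain.com
----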
92 changes: 81 additions & 11 deletions modules/genai-ecosystem/pages/llm-graph-builder-features.adoc
@@ -12,7 +12,9 @@ include::_graphacademy_llm.adoc[]
== Sources

=== Local file upload

You can drag & drop files into the first input zone on the left. The application will store the uploaded sources as Document nodes in the graph using LangChain Loaders (PDFLoader and Unstructured Loader).

|===
| File Type | Supported Extensions

@@ -22,35 +24,103 @@
| Text | .html, .txt, .md
|===

=== Web Links

The second input zone handles web links.

* YouTube transcripts
* Wikipedia pages
* Web Pages

For YouTube videos, the application will parse and store the transcripts as Document nodes in the graph using YouTube parsers.

For Wikipedia links we use the Wikipedia Loader. For example, you can provide `https://en.wikipedia.org/wiki/Neo4j` and it will load the Neo4j Wikipedia page.

For web pages, we use the Unstructured Loader. For example, you can provide articles from `https://theguardian.com/` and it will load the article content.

== Cloud Storage

=== AWS S3

This AWS S3 integration allows you to connect to an S3 bucket and load the files from there. You will need to provide your AWS credentials and the bucket name.

=== Google Cloud Storage

This Google Cloud Storage integration allows you to connect to a GCS bucket and load the files from there. You will have to provide your GCS bucket name and optionally a folder, and follow an authentication flow to give the application access to the bucket.

== LLM Models

The application uses ML models to transform PDFs, web pages, and YouTube video transcripts into a knowledge graph of entities and their relationships. ENV variables can be set to configure, enable, or disable specific models.

The following models are configured (but only the first three are available in the publicly hosted version):

* OpenAI GPT 3.5 and 4o
* VertexAI (Gemini 1.0)
* Diffbot
* Bedrock
* Anthropic
* OpenAI API compatible models like Ollama, Groq, Fireworks

The selected LLM model will be used both for processing newly uploaded files and for powering the chatbot. Please note that the models have different capabilities, so they will not work equally well, especially for extraction.

== Graph Enhancements

=== Graph Schema

image::llm-graph-builder-taxonomy.png[width=600, align=center]
If you want to use a pre-defined or your own graph schema, you can do so in the Graph Enhancements popup. This popup is also shown the first time you construct a graph, and the state of the schema configuration is listed below the connection information.

You can either:

* select a pre-defined schema from the dropdown on top,
* use your own by entering the node labels and relationships (see the example after this list),
* fetch the existing schema from an existing Neo4j database (`Use Existing Schema`),
* or copy/paste a text or schema description (also works with RDF ontologies or Cypher/GQL schema) and ask the LLM to analyze it and come up with a suggested schema (`Get Schema From Text`).
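For example, a manually entered schema might look like this (hypothetical labels and relationship types, just to illustrate the idea):

----
Node labels: Person, Company, Technology
Relationship types: WORKS_AT, USES, PARTNERED_WITH
----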

=== Delete Disconnected Nodes

When extracting entities, it can happen that after the extraction a number of nodes are only connected to text chunks but not to other entities, which results in disconnected entities in the entity graph.

While they can hold relevant information for question answering, they might affect your downstream usage. In this view you can select which of the entities that are only connected to text chunks should be deleted.

////
=== Merging Duplicate Entities
While the prompt instructs the LLM to extract a unique identifier for each entity, across chunks and documents the same entity can end up with different spellings, creating duplicates in the graph.
Here we use a mixture of entity embedding, edit distance and substring containment to generate a list of potentially duplicate entities that can be merged.
You can select which sets of entities should be merged and exclude certain entities from the merge.
////

== Chatbot

=== How it works
When the user asks a question, we use the configured RAG mode to answer it with the data from the graph of extracted documents. Depending on the mode, the question is turned into an embedding, a graph query, or a more advanced RAG approach.

We also summarize the chat history and use it as an element to enrich the context.

=== Features

- *Select RAG Mode:* You can select vector-only or GraphRAG (vector+graph) modes.
- *Clear chat:* Will delete the current session's chat history.
- *Expand view:* Will open the chatbot interface in a fullscreen mode.
- *Details:* Will open a Retrieval information pop-up showing details on how the RAG agent collected and used sources (documents), chunks, and entities. Also provides information on the model used and the token consumption.
- *Copy:* Will copy the content of the response to the clipboard.
- *Text-To-Speech:* Will read out loud the content of the response.

=== GraphRAG

For GraphRAG we use the Neo4j Vector Index (and a fulltext index for hybrid search) with a Retrieval Query to find the most relevant chunks and the entities connected to them, and then follow the entity relationships up to a depth of 2 hops.
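A minimal sketch of this pattern with LangChain's `Neo4jVector` — the index names, the `HAS_ENTITY` relationship, and the retrieval query are assumptions for illustration, not the application's actual code:

[source,python]
----
from langchain_community.vectorstores import Neo4jVector
from langchain_openai import OpenAIEmbeddings

# After the hybrid search finds the top chunks (bound as `node` with `score`),
# expand to the entities they mention and the 1-2 hop entity neighborhood.
retrieval_query = """
MATCH (node)-[:HAS_ENTITY]->(e)
OPTIONAL MATCH (e)-[*1..2]-(neighbor)
RETURN node.text AS text, score,
       {entities: collect(DISTINCT e.id),
        related: collect(DISTINCT neighbor.id)} AS metadata
"""

store = Neo4jVector.from_existing_index(
    OpenAIEmbeddings(),
    url="neo4j://localhost:7687", username="neo4j", password="password",
    index_name="vector",           # assumed chunk vector index name
    keyword_index_name="keyword",  # assumed fulltext index for hybrid search
    search_type="hybrid",
    retrieval_query=retrieval_query,
)
docs = store.similarity_search("Who uses Neo4j?", k=5)
----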

=== Vector Only RAG

For vector-only RAG we only use the vector and fulltext index (hybrid) search results and don't include additional information from the entity graph.
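The same `Neo4jVector` setup without the `retrieval_query` expansion (again with assumed index names) gives this vector-only behavior:

[source,python]
----
from langchain_community.vectorstores import Neo4jVector
from langchain_openai import OpenAIEmbeddings

store = Neo4jVector.from_existing_index(
    OpenAIEmbeddings(),
    url="neo4j://localhost:7687", username="neo4j", password="password",
    index_name="vector",
    keyword_index_name="keyword",
    search_type="hybrid",   # vector + fulltext, no entity-graph expansion
)
docs_and_scores = store.similarity_search_with_score("Who uses Neo4j?", k=5)
----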

=== Answer Generation

The various inputs and determined sources (the question, vector results, entities with name and description, relationship pairs, chat history) are all sent to the selected LLM model as context information in a custom prompt, asking it to provide and format a response to the question based on the elements and context provided.

Of course, there is more magic to the prompt such as formatting, asking to cite sources, not speculating if the answer is not known, etc. The full prompt and instructions can be found in the https://github.com/neo4j-labs/llm-graph-builder[GitHub repository^].