
Commit

Update Google Page - move integrations out, add Aura to LLM Graph Builder
jexp committed May 6, 2024
1 parent 64ddbf8 commit 2b0459b
Showing 3 changed files with 62 additions and 32 deletions.
43 changes: 11 additions & 32 deletions modules/genai-ecosystem/pages/google-cloud-demo.adoc
@@ -9,38 +9,6 @@ include::_graphacademy_llm.adoc[]
:page-product: google-cloud-demo
//:imagesdir: https://dev.assets.neo4j.com/wp-content/uploads/2024/

== Deploying GenAI Applications and APIs with Vertex AI Reasoning Engine

GenAI developers familiar with orchestration tools and architectures often face challenges when deploying their work to production.
Google's https://cloud.google.com/vertex-ai/generative-ai/docs/reasoning-engine/overview[Vertex AI Reasoning Engine Runtime^](preview) now offers an easy way to deploy, scale, and monitor GenAI applications and APIs without in-depth knowledge of containers or cloud configurations.

Compatible with various orchestration frameworks, including xref:langchain.adoc[LangChain], this solution allows developers to use the https://cloud.google.com/vertex-ai/docs/python-sdk/use-vertex-ai-python-sdk[Vertex AI Python SDK^] for setup, testing, and deployment.

It works like this:

- Create a Python class for your GenAI app that stores environment info and static data in the constructor.
- Initialize your orchestration framework in a `set_up` method at startup.
- Process user queries with a `query` method, returning text responses.
- Deploy your GenAI API with `llm_extension.ReasoningEngine.create`, passing a class instance and its requirements.

Our new integrations with Google Cloud, combined with our extensive LangChain integrations, allow you to seamlessly incorporate Neo4j knowledge graphs into your GenAI stack.
You can use LangChain and other orchestration tools to deploy RAG architectures, like GraphRAG, with Reasoning Engine Runtime.

Below is an example of GraphRAG on a Company News Knowledge Graph using LangChain, Neo4j, and Gemini.

image::https://dev.assets.neo4j.com/wp-content/uploads/2024/reasoning-engine-graphrag.png[]

== Knowledge Graph Generation with Gemini Pro

The xref:llm-graph-builder.adoc[LLM Graph Builder], which extracts entities from unstructured text (PDFs, YouTube transcripts, Wikipedia), can be configured to use Vertex AI both as the embedding model and Gemini as the LLM for extraction.
PDFs can also be loaded from Google Cloud buckets.

It uses the underlying llm-graph-transformer library that we contributed to LangChain.

// TODO image

// The Demo is available https://llm-graph-builder-gemini.neo4jlabs.com[online with Google Gemini on Vertex AI^].

== SEC Filings GenAI Labs

This example consists of two sample applications that show how to use Neo4j with the generative AI capabilities of Google Cloud Vertex AI. We explore how to leverage Google generative AI to build and consume a knowledge graph in Neo4j.
@@ -68,3 +36,14 @@ The Demo is available on GitHub: https://github.com/neo4j-partners/neo4j-generat
++++
<iframe width="640" height="480" src="https://www.youtube.com/embed/UGWVMfo5Pew" frameborder="0" allow="accelerometer; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
++++

== Knowledge Graph Generation with Gemini Pro

The xref:llm-graph-builder.adoc[LLM Graph Builder], which extracts entities from unstructured text (PDFs, YouTube transcripts, Wikipedia), can be configured to use Vertex AI both as the embedding model and Gemini as the LLM for extraction.
PDFs can also be loaded from Google Cloud buckets.

It uses the underlying llm-graph-transformer library that we contributed to LangChain.
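To make the extraction flow concrete, here is a toy, self-contained sketch of the document-to-graph shape this kind of transformer produces. The dataclasses and the rule-based stand-in extractor are illustrative only; in the real library the extraction is done by prompting an LLM such as Gemini.

```python
from dataclasses import dataclass

@dataclass
class Node:
    id: str
    label: str

@dataclass
class Relationship:
    source: str
    type: str
    target: str

def extract_graph(text, extract_fn):
    # in the real library, an LLM (e.g. Gemini) plays the role of extract_fn
    return extract_fn(text)

def toy_extractor(text):
    # rule-based stand-in: "X works at Y." -> (Person)-[WORKS_AT]->(Company)
    person, company = text.rstrip(".").split(" works at ")
    return (
        [Node(person, "Person"), Node(company, "Company")],
        [Relationship(person, "WORKS_AT", company)],
    )

nodes, rels = extract_graph("Alice works at Neo4j.", toy_extractor)
```

The extracted nodes and relationships would then be written to the graph as entity nodes linked back to their source `Document` node.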

// TODO image

// The Demo is available https://llm-graph-builder-gemini.neo4jlabs.com[online with Google Gemini on Vertex AI^].
40 changes: 40 additions & 0 deletions modules/genai-ecosystem/pages/google-cloud-genai-integrations.adoc
@@ -0,0 +1,40 @@
= Google Cloud GenAI Integrations
include::_graphacademy_llm.adoc[]
:slug: google-cloud-genai-integrations
:author: Ben Lackey, Michael Hunger
:category: genai-ecosystem
:tags: rag, demo, retrieval augmented generation, chatbot, google, vertexai, gemini, langchain, reasoning-engine
:neo4j-versions: 5.x
:page-pagination:
:page-product: google-cloud-demo
//:imagesdir: https://dev.assets.neo4j.com/wp-content/uploads/2024/

== Function Calling with Gemini

// TODO

== Querying Neo4j via Vertex AI Extensions

// TODO

== Deploying GenAI Applications and APIs with Vertex AI Reasoning Engine

GenAI developers familiar with orchestration tools and architectures often face challenges when deploying their work to production.
Google's https://cloud.google.com/vertex-ai/generative-ai/docs/reasoning-engine/overview[Vertex AI Reasoning Engine Runtime^](preview) now offers an easy way to deploy, scale, and monitor GenAI applications and APIs without in-depth knowledge of containers or cloud configurations.

Compatible with various orchestration frameworks, including xref:langchain.adoc[LangChain], this solution allows developers to use the https://cloud.google.com/vertex-ai/docs/python-sdk/use-vertex-ai-python-sdk[Vertex AI Python SDK^] for setup, testing, and deployment.

It works like this:

- Create a Python class for your GenAI app that stores environment info and static data in the constructor.
- Initialize your orchestration framework in a `set_up` method at startup.
- Process user queries with a `query` method, returning text responses.
- Deploy your GenAI API with `llm_extension.ReasoningEngine.create`, passing a class instance and its requirements.
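The steps above can be sketched in plain Python. The class and method bodies here are illustrative (only the `set_up`/`query` method names come from the pattern described above); the actual deploy call requires the Vertex AI SDK and a GCP project, so it is shown only as a comment.

```python
class GenAIApp:
    """Minimal sketch of the Reasoning Engine app-class pattern."""

    def __init__(self, project: str, location: str):
        # constructor: store environment info and static data only
        self.project = project
        self.location = location
        self.chain = None

    def set_up(self):
        # called at startup: initialize the orchestration framework here
        # (a real app would build e.g. a LangChain retrieval chain)
        self.chain = lambda question: f"Answered '{question}' for {self.project}"

    def query(self, question: str) -> str:
        # process a user query and return a text response
        return self.chain(question)

app = GenAIApp(project="my-project", location="us-central1")
app.set_up()
print(app.query("What is GraphRAG?"))

# Deployment (requires google-cloud-aiplatform and GCP credentials):
# remote_app = llm_extension.ReasoningEngine.create(
#     GenAIApp(project="my-project", location="us-central1"),
#     requirements=["langchain", "neo4j"],
# )
```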

Our new integrations with Google Cloud, combined with our extensive LangChain integrations, allow you to seamlessly incorporate Neo4j knowledge graphs into your GenAI stack.
You can use LangChain and other orchestration tools to deploy RAG architectures, like GraphRAG, with Reasoning Engine Runtime.

Below is an example of GraphRAG on a Company News Knowledge Graph using LangChain, Neo4j, and Gemini.

image::https://dev.assets.neo4j.com/wp-content/uploads/2024/reasoning-engine-graphrag.png[]

11 changes: 11 additions & 0 deletions modules/genai-ecosystem/pages/llm-graph-builder.adoc
@@ -34,6 +34,17 @@ It uses the https://python.langchain.com/docs/use_cases/graph/constructing[llm-g

image::llm-graph-builder-viz.png[width=600, align=center]

.Creating a new Neo4j Aura instance
[NOTE]
====
* Log in or create an account at https://console.neo4j.io
* Under Instances, create a new AuraDB Free database
* Download the credentials file when prompted
* Wait until the instance is running
* Drop the credentials file on the connect dialog
====
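The credentials file Aura downloads is a simple `key=value` text file. A minimal sketch of reading it in Python follows; the `NEO4J_*` key names shown are the ones Aura commonly writes, so verify them against your downloaded file.

```python
def parse_aura_credentials(text: str) -> dict:
    # parse an Aura credentials file: one KEY=VALUE pair per line,
    # ignoring blank lines and comments
    creds = {}
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            key, value = line.split("=", 1)
            creds[key] = value
    return creds

# sample content mimicking a downloaded credentials file (values are fake)
sample = """NEO4J_URI=neo4j+s://xxxx.databases.neo4j.io
NEO4J_USERNAME=neo4j
NEO4J_PASSWORD=secret"""

creds = parse_aura_credentials(sample)
# a client could then connect with the official driver, e.g.:
# GraphDatabase.driver(creds["NEO4J_URI"],
#                      auth=(creds["NEO4J_USERNAME"], creds["NEO4J_PASSWORD"]))
```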

== How it works

1. Uploaded Sources are stored as `Document` nodes in the graph
