Reasoning Engine
jexp committed Apr 4, 2024
1 parent 2bfd0d4 commit 18ca8c7
29 changes: 22 additions & 7 deletions modules/genai-ecosystem/pages/google-cloud-demo.adoc
@@ -1,23 +1,38 @@
= Google Cloud Demo
include::_graphacademy_llm.adoc[]
:slug: google-cloud-demo
:author: Ben Lackey
:author: Ben Lackey, Michael Hunger
:category: genai-ecosystem
:tags: rag, demo, retrieval augmented generation, chatbot, google, vertexai
:tags: rag, demo, retrieval augmented generation, chatbot, google, vertexai, gemini, langchain, reasoning-engine
:neo4j-versions: 5.x
:page-pagination:
:page-product: google-cloud-demo

////
:imagesdir: https://dev.assets.neo4j.com/wp-content/uploads/2024/

== Deploying GenAI Applications and APIs with Vertex AI Reasoning Engine

GenAI developers familiar with orchestration tools and architectures often face challenges when deploying their work to production.
Google's https://cloud.google.com/vertex-ai/generative-ai/docs/reasoning-engine/overview[Vertex AI Reasoning Engine Runtime^] (preview) now offers an easy way to deploy, scale, and monitor GenAI applications and APIs without in-depth knowledge of containers or cloud configurations.

Compatible with various orchestration frameworks, including xref:langchain.adoc[LangChain], this solution allows developers to use the https://cloud.google.com/vertex-ai/docs/python-sdk/use-vertex-ai-python-sdk[Vertex AI Python SDK^] for setup, testing, and deployment.

It works like this:

- Create a Python class for your GenAI app that stores environment info and static data in its constructor.
- Initialize your orchestration framework in a `set_up` method at startup.
- Process user queries with a `query` method, returning text responses.
- Deploy your GenAI API with `llm_extension.ReasoningEngine.create`, passing the class instance and its package requirements.
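The steps above can be sketched as a minimal Python class. This is a hypothetical skeleton, not the demo's actual code: the chain built in `set_up` is stubbed out, and the project/location/URI values in the deployment snippet are placeholders.

```python
class GraphRAGApp:
    """Hypothetical Reasoning Engine app skeleton following the steps above."""

    def __init__(self, project: str, location: str, neo4j_uri: str):
        # Constructor: store environment info and static data only;
        # defer heavy clients and framework setup to set_up().
        self.project = project
        self.location = location
        self.neo4j_uri = neo4j_uri
        self.ready = False

    def set_up(self) -> None:
        # Called once at startup: initialize the orchestration framework here
        # (e.g. build a LangChain chain, open the Neo4j driver).
        self.ready = True

    def query(self, question: str) -> str:
        # Handle a user query and return a text response.
        # A real app would run the chain; this stub just echoes.
        return f"Answered: {question}"


# Deployment, as described above (requires GCP credentials, so shown
# commented out; argument names are illustrative):
# remote_app = llm_extension.ReasoningEngine.create(
#     GraphRAGApp(project="my-project", location="us-central1",
#                 neo4j_uri="neo4j+s://example.databases.neo4j.io"),
#     requirements=["langchain", "neo4j", "google-cloud-aiplatform"],
# )
```

Keeping the constructor free of live connections matters because the class instance is serialized at deployment time; `set_up` runs in the deployed runtime.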

Our new integrations with Google Cloud, combined with our extensive LangChain integrations, allow you to seamlessly incorporate Neo4j knowledge graphs into your GenAI stack.
You can use LangChain and other orchestration tools to deploy RAG architectures, like GraphRAG, with Reasoning Engine Runtime.

Below you can see an example of GraphRAG on a Company News Knowledge Graph using LangChain, Neo4j, and Gemini.
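The GraphRAG flow itself is simple to state: retrieve relevant facts from the knowledge graph, then let the LLM answer grounded in those facts. Here is a minimal, framework-free sketch; the `retrieve` and `generate` callables are stand-ins for the Neo4j retrieval query and the Gemini call that the actual demo wires together with LangChain.

```python
def graph_rag_answer(question, retrieve, generate):
    """Compose graph retrieval and LLM generation into one grounded answer."""
    facts = retrieve(question)  # stand-in for e.g. Cypher query results from Neo4j
    prompt = (
        "Answer the question using only these facts:\n"
        + "\n".join(f"- {f}" for f in facts)
        + f"\nQuestion: {question}"
    )
    return generate(prompt)  # stand-in for e.g. a Gemini completion call
```

In the real demo, LangChain plays this composing role, with Neo4j providing the retrieval step and Gemini the generation step.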

////
image::reasoning-engine-graphrag.png[]

== Graph Consumption with Gemini Pro
== Knowledge Graph Generation with Gemini Pro

The xref:llm-graph-builder[LLM Graph Builder] that extracts entities from unstructured text (PDFs, YouTube Transcripts, Wikipedia) can be configured to use VertexAI both as embedding model and Gemnini as LLM for the extraction.
The xref:llm-graph-builder.adoc[LLM Graph Builder], which extracts entities from unstructured text (PDFs, YouTube transcripts, Wikipedia), can be configured to use Vertex AI as the embedding model and Gemini as the LLM for the extraction.
PDFs can also be loaded from Google Cloud Storage buckets.

It uses the underlying llm-graph-transformer library that we contributed to LangChain.
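Conceptually, the extraction step maps chunks of text to graph elements: an LLM produces (subject, relation, object) triples, which become nodes and relationships in Neo4j. A toy illustration of that shape follows; the `extract` callable is a stub standing in for the LLM-backed extraction the real library performs.

```python
def text_to_graph(chunks, extract):
    """Accumulate nodes and relationships from per-chunk triple extraction."""
    nodes, rels = set(), set()
    for chunk in chunks:
        # extract() stands in for the LLM call that returns
        # (subject, relation, object) triples for one text chunk.
        for subj, rel, obj in extract(chunk):
            nodes.update([subj, obj])
            rels.add((subj, rel, obj))
    return nodes, rels
```

Running this over all document chunks and merging the results is, in spirit, what the builder does before writing the graph to Neo4j.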
