OPEA’s mission is to offer a validated, enterprise-grade GenAI (Generative Artificial Intelligence) RAG (Retrieval-Augmented Generation) reference implementation. This will simplify GenAI development and deployment, thereby accelerating time-to-market.
The project currently consists of a technical conceptual framework that enables GenAI implementations to meet enterprise-grade requirements. It offers a set of out-of-the-box reference implementations for a wide range of enterprise use cases, along with validation and compliance tools to ensure those implementations meet the requirements outlined in the conceptual framework. This enables new reference implementations to be contributed and validated in an open manner. Partnering with the LF AI & Data Foundation places the project in an ideal position for multi-partner development, evolution, and expansion.
Enterprises face a myriad of challenges in the development and deployment of GenAI. New models, algorithms, and fine-tuning techniques, along with methods for detecting and resolving bias and deploying large solutions at scale, continue to evolve at a rapid pace. One of the biggest challenges enterprises come up against is a lack of standardized software tools and technologies from which to choose. Additionally, enterprises want the flexibility to innovate rapidly, extend functionality to meet their business needs, and ensure the solution is secure and trustworthy. The lack of a framework that encompasses both proprietary and open solutions impedes enterprises from charting their own destiny, resulting in an enormous investment of time and money and eroding any time-to-market advantage. OPEA answers the need for a multi-provider, ecosystem-supported framework that enables the evaluation, selection, customization, and trusted deployment of solutions that businesses can rely on.
The major adoption and deployment cycle of robust, secure, enterprise-grade GenAI solutions across all industries is in its early stages. Enterprise-grade solutions will require collaboration across the open ecosystem. The time is now for the ecosystem to come together and accelerate GenAI deployments across enterprises by offering a standardized set of tools and technologies that supports three key tenets: openness, security, and scalability. This will require the ecosystem to work together to build reference implementations that are performant, trustworthy, and enterprise-ready.
There is no alternative that brings the entire ecosystem together in a vendor-neutral manner and delivers on the promise of openness, security, and scalability. This is our primary motivation for creating the OPEA project.
Like any other open-source project, the community will determine which components are needed by the broader ecosystem. Enterprises can always extend the OPEA project with other multi-vendor proprietary solutions to achieve their business goals.
Open Platform for Enterprise AI
It is pronounced ‘OH-PEA-AY’.
AnyScale, Cloudera, DataStax, Domino Data Lab, HuggingFace, Intel, KX, MariaDB Foundation, MinIO, Qdrant, Red Hat, SAS, VMware by Broadcom, Yellowbrick Data, and Zilliz.
OPEA is to be defined jointly by several community partners, with a call for broad ecosystem contribution, under the well-established LF AI & Data Foundation. As a starting point, Intel has contributed a Technical Conceptual Framework that shows how to construct and optimize curated GenAI pipelines built for secure, turnkey enterprise deployment. At launch, Intel contributed several reference implementations on Intel hardware across Intel® Xeon® 5, Intel® Xeon® 6, and Intel® Gaudi® 2, which you can see in a GitHub repo here. Over time we intend to add to that contribution, including a software infrastructure stack to enable fully containerized AI workload deployments, as well as potentially implementations of those containerized workloads.
The models and modules can be part of an OPEA repository, or be published in a stable, unobstructed repository (e.g., Hugging Face) and cleared for use by an OPEA assessment. They include the components below (a sketch of how such components might compose follows the list):
- GenAI models – Large Language Models (LLMs), Large Vision Models (LVMs), multimodal models, etc.
- Ingest/Data Processing
- Embedding Models/Services
- Indexing/Vector/Graph data stores
- Retrieval/Ranking
- Prompt Engines
- Guardrails
- Memory systems
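The conceptual framework treats these pieces as composable building blocks rather than prescribing specific libraries. As a rough, hypothetical illustration only (not OPEA code), the sketch below wires a toy embedder, vector store, prompt engine, and guardrail into a minimal RAG flow; every name in it (`Document`, `VectorStore`, `build_prompt`, `guardrail`) is a placeholder for a production component such as a real embedding service, vector database, reranker, or LLM endpoint.

```python
# Hypothetical sketch of how the modular pieces above can compose into a RAG
# pipeline. None of these names are OPEA APIs; a real deployment would swap in
# production embedding models, vector databases, rerankers, and an LLM service.
from dataclasses import dataclass
import math


def embed(text: str, dim: int = 64) -> list[float]:
    """Toy embedding: hash tokens into a fixed-size, normalized bag-of-words vector."""
    vec = [0.0] * dim
    for token in text.lower().split():
        vec[hash(token) % dim] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]


@dataclass
class Document:
    text: str
    embedding: list[float]


class VectorStore:
    """Indexing/vector data store: holds embedded chunks produced by ingest."""

    def __init__(self) -> None:
        self.docs: list[Document] = []

    def ingest(self, texts: list[str]) -> None:
        self.docs.extend(Document(t, embed(t)) for t in texts)

    def retrieve(self, query: str, k: int = 2) -> list[str]:
        """Retrieval/ranking: score documents by dot product with the query."""
        q = embed(query)
        scored = sorted(
            self.docs,
            key=lambda d: sum(a * b for a, b in zip(q, d.embedding)),
            reverse=True,
        )
        return [d.text for d in scored[:k]]


def build_prompt(query: str, context: list[str]) -> str:
    """Prompt engine: combine retrieved context with the user query."""
    joined = "\n".join(f"- {c}" for c in context)
    return f"Answer using only this context:\n{joined}\n\nQuestion: {query}"


def guardrail(prompt: str) -> str:
    """Guardrails: a trivial input filter standing in for real policy checks."""
    banned = {"password", "ssn"}
    if any(word in prompt.lower() for word in banned):
        raise ValueError("Prompt rejected by guardrail policy")
    return prompt


if __name__ == "__main__":
    store = VectorStore()
    store.ingest([
        "OPEA stands for Open Platform for Enterprise AI.",
        "The project is hosted under the LF AI & Data Foundation.",
    ])
    question = "What does OPEA stand for?"
    prompt = guardrail(build_prompt(question, store.retrieve(question)))
    print(prompt)  # An LLM serving component would consume this prompt next.
```

In an enterprise deployment, each of these stages would typically run as its own service (for example, a managed vector database and a model-serving endpoint) rather than in a single process, which is where the containerized infrastructure stack mentioned above comes in.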
There are different ways partners can contribute to this project:
- Join the project and contribute assets in terms of use cases, code, test harness, etc.
- Provide technical leadership
- Drive community engagement and evangelism
- Offer program management for various projects
- Become a maintainer, committer, and adopter
- Define and offer use cases for various industry verticals that shape OPEA project
- Build the infrastructure to support OPEA projects
A version of the spec is available in this project’s docs repo.
There is no cost for anyone to join and contribute.
Anyone can join and contribute. You don’t need to be a Linux Foundation member.
Vulnerability reports can be sent to [email protected].