Refactor guardrails microservice (#1116)
Signed-off-by: lvliang-intel <[email protected]>
lvliang-intel authored Jan 8, 2025
1 parent 650be0d · commit 631b570
Showing 99 changed files with 1,306 additions and 2,735 deletions.
28 changes: 8 additions & 20 deletions .github/workflows/docker/compose/guardrails-compose.yaml
@@ -3,39 +3,27 @@

 # this file should be run in the root of the repo
 services:
-  guardrails-tgi:
+  guardrails:
     build:
-      dockerfile: comps/guardrails/llama_guard/langchain/Dockerfile
-    image: ${REGISTRY:-opea}/guardrails-tgi:${TAG:-latest}
-  guardrails-pii-detection:
-    build:
-      dockerfile: comps/guardrails/pii_detection/Dockerfile
-    image: ${REGISTRY:-opea}/guardrails-pii-detection:${TAG:-latest}
+      dockerfile: comps/guardrails/src/guardrails/Dockerfile
+    image: ${REGISTRY:-opea}/guardrails:${TAG:-latest}
   guardrails-bias-detection:
     build:
-      dockerfile: comps/guardrails/bias_detection/Dockerfile
+      dockerfile: comps/guardrails/src/bias_detection/Dockerfile
     image: ${REGISTRY:-opea}/guardrails-bias-detection:${TAG:-latest}
-  guardrails-toxicity-detection:
-    build:
-      dockerfile: comps/guardrails/toxicity_detection/Dockerfile
-    image: ${REGISTRY:-opea}/guardrails-toxicity-detection:${TAG:-latest}
-  guardrails-wildguard:
-    build:
-      dockerfile: comps/guardrails/wildguard/langchain/Dockerfile
-    image: ${REGISTRY:-opea}/guardrails-wildguard:${TAG:-latest}
   guardrails-pii-predictionguard:
     build:
-      dockerfile: comps/guardrails/pii_detection/predictionguard/Dockerfile
+      dockerfile: comps/guardrails/src/pii_detection/Dockerfile
     image: ${REGISTRY:-opea}/guardrails-pii-predictionguard:${TAG:-latest}
   guardrails-toxicity-predictionguard:
     build:
-      dockerfile: comps/guardrails/toxicity_detection/predictionguard/Dockerfile
+      dockerfile: comps/guardrails/src/toxicity_detection/Dockerfile
     image: ${REGISTRY:-opea}/guardrails-toxicity-predictionguard:${TAG:-latest}
   guardrails-factuality-predictionguard:
     build:
-      dockerfile: comps/guardrails/factuality/predictionguard/Dockerfile
+      dockerfile: comps/guardrails/src/factuality_alignment/Dockerfile
     image: ${REGISTRY:-opea}/guardrails-factuality-predictionguard:${TAG:-latest}
   guardrails-injection-predictionguard:
     build:
-      dockerfile: comps/guardrails/prompt_injection/predictionguard/Dockerfile
+      dockerfile: comps/guardrails/src/prompt_injection/Dockerfile
     image: ${REGISTRY:-opea}/guardrails-injection-predictionguard:${TAG:-latest}
2 changes: 1 addition & 1 deletion README.md
@@ -28,7 +28,7 @@ pip install -e .

 ## MicroService

-`Microservices` are akin to building blocks, offering the fundamental services for constructing `RAG (Retrieval-Augmented Generation)` applications.
+`Microservices` are akin to building blocks, offering the fundamental services for constructing `RAG (Retrieval-Augmented Generation)` and other Enterprise AI applications.

 Each `Microservice` is designed to perform a specific function or task within the application architecture. By breaking down the system into smaller, self-contained services, `Microservices` promote modularity, flexibility, and scalability.

15 changes: 8 additions & 7 deletions comps/guardrails/README.md
@@ -2,12 +2,13 @@

 The Guardrails service enhances the security of LLM-based applications by offering a suite of microservices designed to ensure trustworthiness, safety, and security.

-| MicroService                                          | Description                                                                                                              |
-| ----------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------ |
-| [Llama Guard](./llama_guard/langchain/README.md)      | Provides guardrails for inputs and outputs to ensure safe interactions using Llama Guard                                  |
-| [WildGuard](./wildguard/langchain/README.md)          | Provides guardrails for inputs and outputs to ensure safe interactions using WildGuard                                    |
-| [PII Detection](./pii_detection/README.md)            | Detects Personally Identifiable Information (PII) and Business Sensitive Information (BSI)                                |
-| [Toxicity Detection](./toxicity_detection/README.md)  | Detects Toxic language (rude, disrespectful, or unreasonable language that is likely to make someone leave a discussion)  |
-| [Bias Detection](./bias_detection/README.md)          | Detects Biased language (framing bias, epistemological bias, and demographic bias)                                        |
+| MicroService                                                    | Description                                                                                                              |
+| --------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------ |
+| [Llama Guard](./src/guardrails/README.md#LlamaGuard)            | Provides guardrails for inputs and outputs to ensure safe interactions using Llama Guard                                  |
+| [WildGuard](./src/guardrails/README.md#WildGuard)               | Provides guardrails for inputs and outputs to ensure safe interactions using WildGuard                                    |
+| [PII Detection](./src/pii_detection/README.md)                  | Detects Personally Identifiable Information (PII) and Business Sensitive Information (BSI)                                |
+| [Toxicity Detection](./src/toxicity_detection/README.md)        | Detects Toxic language (rude, disrespectful, or unreasonable language that is likely to make someone leave a discussion)  |
+| [Bias Detection](./src/bias_detection/README.md)                | Detects Biased language (framing bias, epistemological bias, and demographic bias)                                        |
+| [Prompt Injection Detection](./src/prompt_injection/README.md)  | Detects malicious prompts causing the system running an LLM to execute the attacker’s intentions                          |

 Additional safety-related microservices will be available soon.
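The table above maps each guardrail to its new location under comps/guardrails/src. Regardless of which detector is deployed, the consumption pattern is the same: a small JSON payload posted to the microservice's REST endpoint. The snippet below is a minimal sketch of such a call, assuming the consolidated guardrails service listens on port 9090 (as in the Gaudi compose files later in this commit) and exposes a /v1/guardrails route accepting a JSON body with a text field; confirm the exact route and schema against the component README.

```python
# Minimal sketch of calling the consolidated guardrails microservice.
# Assumptions: the service listens on localhost:9090 and exposes a
# /v1/guardrails route accepting {"text": ...}; verify against the README.
import requests

GUARDRAILS_URL = "http://localhost:9090/v1/guardrails"  # assumed route


def screen_prompt(text: str) -> str:
    """Send a user prompt to the guardrails service and return its verdict."""
    response = requests.post(GUARDRAILS_URL, json={"text": text}, timeout=30)
    response.raise_for_status()
    return response.text


if __name__ == "__main__":
    print(screen_prompt("How do you buy a tiger in the US?"))
```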
31 changes: 0 additions & 31 deletions comps/guardrails/bias_detection/bias_detection.py

This file was deleted.

@@ -5,7 +5,7 @@ version: "3.8"

 services:
   tgi_gaudi_service:
-    image: ghcr.io/huggingface/tgi-gaudi:2.0.5
+    image: ghcr.io/huggingface/tgi-gaudi:2.0.1
     container_name: tgi-service
     ports:
       - "8088:80"
@@ -16,8 +16,8 @@
     shm_size: 1g
     command: --model-id ${LLM_MODEL_ID} --max-input-tokens 1024 --max-total-tokens 2048
   guardrails:
-    image: opea/guardrails-tgi:latest
-    container_name: guardrails-tgi-gaudi-server
+    image: opea/guardrails:latest
+    container_name: guardrails-llamaguard-gaudi-server
     ports:
       - "9090:9090"
     ipc: host
@@ -26,7 +26,7 @@
       http_proxy: ${http_proxy}
       https_proxy: ${https_proxy}
       SAFETY_GUARD_ENDPOINT: ${SAFETY_GUARD_ENDPOINT}
-      LANGCHAIN_API_KEY: ${LANGCHAIN_API_KEY}
+      GUARDRAILS_COMPONENT_NAME: "OPEA_LLAMA_GUARD"
       HUGGINGFACEHUB_API_TOKEN: ${HF_TOKEN}
     restart: unless-stopped

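In the Llama Guard deployment above, the guardrails container does not host the safety model itself: it forwards requests to the TGI Gaudi service referenced by SAFETY_GUARD_ENDPOINT and published on host port 8088. Before exercising the wrapper on port 9090, it can help to probe that backend directly. The sketch below uses TGI's standard /health and /generate routes; the host and port mirror the compose file, and the prompt is just an example.

```python
# Sketch: probe the TGI Gaudi safety-model backend directly.
# Host/port follow the compose mapping above ("8088:80"); /health and
# /generate are standard text-generation-inference routes.
import requests

SAFETY_GUARD_ENDPOINT = "http://localhost:8088"  # as wired in the compose file


def tgi_ready() -> bool:
    """Return True once the TGI service answers its health route."""
    try:
        return requests.get(f"{SAFETY_GUARD_ENDPOINT}/health", timeout=5).status_code == 200
    except requests.RequestException:
        return False


def raw_safety_check(prompt: str) -> dict:
    """Send a prompt straight to the safety model behind the guardrails wrapper."""
    resp = requests.post(
        f"{SAFETY_GUARD_ENDPOINT}/generate",
        json={"inputs": prompt, "parameters": {"max_new_tokens": 32}},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()


if __name__ == "__main__":
    if tgi_ready():
        print(raw_safety_check("How do I pick a lock?"))
```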
@@ -16,8 +16,8 @@ services:
     shm_size: 1g
     command: --model-id ${LLM_MODEL_ID} --max-input-tokens 1024 --max-total-tokens 2048
   guardrails:
-    image: opea/guardrails-tgi:latest
-    container_name: guardrails-tgi-gaudi-server
+    image: opea/guardrails:latest
+    container_name: guardrails-wildguard-gaudi-server
     ports:
       - "9090:9090"
     ipc: host
@@ -26,8 +26,8 @@
       http_proxy: ${http_proxy}
       https_proxy: ${https_proxy}
       SAFETY_GUARD_ENDPOINT: ${SAFETY_GUARD_ENDPOINT}
-      LANGCHAIN_API_KEY: ${LANGCHAIN_API_KEY}
       HUGGINGFACEHUB_API_TOKEN: ${HF_TOKEN}
+      GUARDRAILS_COMPONENT_NAME: "OPEA_WILD_GUARD"
     restart: unless-stopped

 networks:
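The WildGuard deployment differs from the Llama Guard one above only in its GUARDRAILS_COMPONENT_NAME value: the same opea/guardrails image serves either backend and selects the implementation from that variable at startup. The sketch below illustrates the general environment-driven selection pattern with hypothetical class names; it is not the repository's actual loader code.

```python
# Illustrative pattern only: select a guardrail backend from
# GUARDRAILS_COMPONENT_NAME, as the refactored image does conceptually.
# Class names here are hypothetical placeholders, not OPEA APIs.
import os


class LlamaGuardChecker:
    def check(self, text: str) -> str:
        return f"[llama-guard] screening: {text!r}"


class WildGuardChecker:
    def check(self, text: str) -> str:
        return f"[wildguard] screening: {text!r}"


_COMPONENTS = {
    "OPEA_LLAMA_GUARD": LlamaGuardChecker,
    "OPEA_WILD_GUARD": WildGuardChecker,
}


def load_component():
    """Instantiate the checker named by GUARDRAILS_COMPONENT_NAME."""
    name = os.environ.get("GUARDRAILS_COMPONENT_NAME", "OPEA_LLAMA_GUARD")
    try:
        return _COMPONENTS[name]()
    except KeyError:
        raise ValueError(f"Unknown guardrails component: {name}") from None


if __name__ == "__main__":
    checker = load_component()
    print(checker.check("Example user prompt"))
```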
Empty file.
2 changes: 0 additions & 2 deletions comps/guardrails/factuality/predictionguard/__init__.py

This file was deleted.

This file was deleted.

30 changes: 0 additions & 30 deletions comps/guardrails/llama_guard/langchain/Dockerfile

This file was deleted.

115 changes: 0 additions & 115 deletions comps/guardrails/llama_guard/langchain/README.md

This file was deleted.
