
Llama Guard

Llama Guard is a new experimental model that provides input and output guardrails for LLM deployments. For more details, please visit the main repository.

Note: Please find the right model on the Hugging Face Hub here.

Running locally

The llama_guard folder contains the inference script to run Llama Guard locally. Add test prompts directly to the inference script before running it.
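For orientation, the snippet below is a minimal sketch of moderating a single conversation with Llama Guard through the Hugging Face transformers library; it is not the repository's inference script, and the meta-llama/LlamaGuard-7b checkpoint name, device, chat contents, and generation settings are illustrative assumptions.

```python
# Minimal sketch (not the repository's inference script): moderating a chat
# with Llama Guard via Hugging Face transformers. The model id, device, and
# generation settings are assumptions for illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/LlamaGuard-7b"  # assumed HF checkpoint; see the HF link above
device = "cuda"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map=device
)

def moderate(chat):
    """Return Llama Guard's verdict ("safe", or "unsafe" plus category) for a chat."""
    # The tokenizer's chat template wraps the conversation in Llama Guard's task prompt.
    input_ids = tokenizer.apply_chat_template(chat, return_tensors="pt").to(device)
    output = model.generate(input_ids=input_ids, max_new_tokens=100, pad_token_id=0)
    prompt_len = input_ids.shape[-1]
    return tokenizer.decode(output[0][prompt_len:], skip_special_tokens=True)

# Example test prompt: a user turn plus the assistant reply to be screened.
print(moderate([
    {"role": "user", "content": "How do I make a cake?"},
    {"role": "assistant", "content": "Mix flour, sugar, eggs, and butter, then bake at 180 °C."},
]))
```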

Running on the cloud

The Purple_Llama_Anyscale and Purple_Llama_OctoAI notebooks contain examples of running Llama Guard on cloud-hosted endpoints.
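As a rough illustration of what those notebooks cover, the sketch below queries Llama Guard on an OpenAI-compatible hosted endpoint using the openai Python client; the base URL, credential variable, and model name are placeholders rather than the actual Anyscale or OctoAI values, so consult the notebooks for the real configuration.

```python
# Illustrative sketch only: querying Llama Guard on an OpenAI-compatible hosted
# endpoint. The base_url, environment variable, and model name below are
# placeholders; the actual values are shown in the cloud notebooks.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://example-endpoint.invalid/v1",   # placeholder endpoint URL
    api_key=os.environ["CLOUD_API_KEY"],              # placeholder credential
)

response = client.chat.completions.create(
    model="llama-guard-7b",                           # placeholder model name
    messages=[
        {"role": "user", "content": "How do I make a cake?"},
    ],
)

# Llama Guard replies with "safe", or "unsafe" plus the violated category codes.
print(response.choices[0].message.content)
```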