
Commit

Fix prompt format for Llama 2 13B chat example
gongy committed Aug 10, 2023
1 parent 7b76f42 commit 5340581
Showing 1 changed file with 7 additions and 3 deletions.
10 changes: 7 additions & 3 deletions 06_gpu_and_ml/vllm_inference.py

@@ -96,9 +96,13 @@ def __enter__(self):

         # Load the model. Tip: MPT models may require `trust_remote_code=true`.
         self.llm = LLM(MODEL_DIR)
-        self.template = """SYSTEM: You are a helpful assistant.
-USER: {}
-ASSISTANT: """
+        self.template = """<s>[INST] <<SYS>>
+You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.
+
+If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
+<</SYS>>
+
+{} [/INST] """

     @method()
     def generate(self, user_questions):
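Llama 2 chat models are fine-tuned on a specific prompt structure: a system message wrapped in `<<SYS>>`/`<</SYS>>` tags, with the whole turn enclosed in `[INST] ... [/INST]`, which is why the generic `SYSTEM:/USER:/ASSISTANT:` template was replaced. A minimal sketch of how a template like the new one gets filled in at generation time (the short system prompt and the `build_prompt` helper below are illustrative, not code from the example file):

```python
# Illustrative sketch of the Llama 2 chat prompt format used by the new
# template. The system prompt here is shortened; the real template in the
# diff uses the full Llama 2 default system prompt.
template = """<s>[INST] <<SYS>>
You are a helpful assistant.
<</SYS>>

{} [/INST] """


def build_prompt(question: str) -> str:
    # The single {} placeholder receives the user's question.
    return template.format(question)


print(build_prompt("What is the capital of France?"))
```

Each question passed to `generate` would be formatted this way before being handed to the vLLM engine, so the model sees the same `[INST]`/`<<SYS>>` framing it was fine-tuned on.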
