From 74d61d677e826e3fe68fee3f44a5b5180e8496e0 Mon Sep 17 00:00:00 2001
From: Diondra <16376603+diondrapeck@users.noreply.github.com>
Date: Fri, 1 Nov 2024 14:41:00 -0400
Subject: [PATCH] Update groundedness pro asset description (#3556)

---
 .../models/groundedness-pro-evaluator/description.md | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/assets/promptflow/evaluators/models/groundedness-pro-evaluator/description.md b/assets/promptflow/evaluators/models/groundedness-pro-evaluator/description.md
index 831b0478df..87f188b84b 100644
--- a/assets/promptflow/evaluators/models/groundedness-pro-evaluator/description.md
+++ b/assets/promptflow/evaluators/models/groundedness-pro-evaluator/description.md
@@ -1,7 +1,7 @@
 | | |
 | -- | -- |
-| Score range | Integer [1-5]: where 1 is bad and 5 is good |
-| What is this metric? | Uses service-based evaluation to measure how well the model's generated answers align with information from the source data (user-defined context). |
-| How does it work? | The groundedness measure calls Responsible AI service to assess the correspondence between claims in an AI-generated answer and the source context, making sure that these claims are substantiated by the context. Even if the responses from LLM are factually correct, they'll be considered ungrounded if they can't be verified against the provided sources (such as your input source or your database). |
+| Score range | Boolean: [true, false]: where True means that your response is grounded, False means that your response is ungrounded. |
+| What is this metric? | Uses service-based evaluation to measure how well the model's generated answers are grounded in the information from the source data (user-defined context). |
+| How does it work? | The groundedness measure calls Azure AI Evaluation service to assess the correspondence between claims in an AI-generated answer and the source context, making sure that these claims are substantiated by the context. Even if the responses from LLM are factually correct, they'll be considered ungrounded if they can't be verified against the provided sources (such as your input source or your database). |
 | When to use it? | Use the groundedness metric when you need to verify that AI-generated responses align with and are validated by the provided context. It's essential for applications where factual correctness and contextual accuracy are key, like information retrieval, question-answering, and content summarization. This metric ensures that the AI-generated answers are well-supported by the context. |
 | What does it need as input? | Query, Context, Generated Response |
\ No newline at end of file
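The input/output contract the patched table describes (query, context, and generated response in; a boolean grounded/ungrounded verdict out) can be sketched as a toy stand-in. This is a minimal illustration only, not the Azure AI Evaluation service call: the `is_grounded` helper, the `GroundednessResult` type, and the naive substring check are all hypothetical, chosen just to show why a factually correct claim that is absent from the context still counts as ungrounded.

```python
from dataclasses import dataclass


@dataclass
class GroundednessResult:
    """Boolean verdict mirroring the evaluator's score range: True = grounded."""
    grounded: bool
    reason: str


def is_grounded(query: str, context: str, response: str) -> GroundednessResult:
    """Hypothetical stand-in for the service call: treat the response as
    grounded only when every sentence-level claim appears (case-insensitively)
    in the provided context. Truth alone is not enough; a claim that cannot
    be verified against the context is reported as ungrounded."""
    claims = [c.strip() for c in response.split(".") if c.strip()]
    unsupported = [c for c in claims if c.lower() not in context.lower()]
    if unsupported:
        return GroundednessResult(False, f"Unsupported claims: {unsupported}")
    return GroundednessResult(True, "All claims substantiated by the context")


# A true statement that is missing from the context is still ungrounded.
ctx = "The Eiffel Tower is in Paris. It was completed in 1889."
print(is_grounded("Where is the Eiffel Tower?", ctx,
                  "The Eiffel Tower is in Paris").grounded)        # True
print(is_grounded("Where is the Eiffel Tower?", ctx,
                  "The Eiffel Tower is 330 metres tall").grounded)  # False
```

The real evaluator performs claim extraction and verification on the service side; this sketch only mirrors the boolean contract, not the quality of that judgment.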