A list of issues and potential enhancements for the newly added generative text task type
Bug Fixes
- The cache is not being used in the `compute_genai_metrics` method of the `RAITextInsights` class.
- The cache does not appear to be updated from the front-end for the `compute_question_answering_metrics` and `compute_genai_metrics` methods of the `RAITextInsights` class.
- The error tree currently uses the model being evaluated to compute the AI-assisted metrics instead of the evaluation metric.
- Additional text columns (like reference answers) in the input dataset are not working as intended. As a result, metrics requiring reference answers (e.g., equivalence, exact match, BLEU) are also currently not working; a minimal sketch of these reference-based metrics is shown below.
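To make the last item concrete, here is a minimal sketch of the reference-based metrics it mentions, using the Hugging Face `evaluate` package as an assumed dependency for illustration; it is not the `RAITextInsights` implementation, but it shows why these metrics cannot be computed when the reference-answer column is not passed through.

```python
# Minimal sketch (assumes the Hugging Face `evaluate` package);
# illustrates reference-based metrics, not the RAITextInsights internals.
import evaluate

predictions = ["Paris is the capital of France."]
# Reference answers would come from an additional text column in the
# dataset; without that column these metrics cannot be computed.
references = ["The capital of France is Paris."]

exact_match = evaluate.load("exact_match")
bleu = evaluate.load("bleu")

print(exact_match.compute(predictions=predictions, references=references))
# BLEU accepts one or more reference answers per prediction.
print(bleu.compute(predictions=predictions, references=[references]))
```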
Feature enhancements
- The templates with metric definitions for the AI-assisted metrics are currently hard-coded into the metric computation scripts. These could be made configurable in the future (a hypothetical template sketch follows this list).
- Allow users to create new metrics by supplying their own templates with metric definitions.
- Error tree - show the name of the metric used for training the error tree (currently shows mean squared error).
- Error tree - allow the use of other metrics for creating the error tree (a sketch follows this list).
- Implement the front-end UI for the model explainer.
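As a rough illustration of the first two enhancement items, user-supplied templates could be merged over the built-in metric definitions before the prompt for an AI-assisted metric is built. This is a hypothetical sketch; `DEFAULT_TEMPLATES` and `build_metric_prompt` are illustrative names, not existing responsibleai-text APIs.

```python
# Hypothetical sketch of configurable metric-definition templates.
# DEFAULT_TEMPLATES and build_metric_prompt are illustrative names,
# not part of the existing responsibleai-text API.
from string import Template

DEFAULT_TEMPLATES = {
    "coherence": Template(
        "Rate the coherence of the answer on a scale of 1 to 5.\n"
        "Question: $question\nAnswer: $answer\nRating:"
    ),
}

def build_metric_prompt(metric, question, answer, custom_templates=None):
    """Resolve the template for `metric`, preferring a user-supplied one."""
    templates = {**DEFAULT_TEMPLATES, **(custom_templates or {})}
    return templates[metric].substitute(question=question, answer=answer)

# Supplying a new metric name in custom_templates would also cover the
# "create new metrics" enhancement.
print(build_metric_prompt(
    "coherence",
    question="What is the capital of France?",
    answer="Paris is the capital of France.",
))
```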
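For the two error-tree items, a hedged sketch of fitting the surrogate tree on an arbitrary per-instance metric (and reporting the metric's name) instead of squared error, using scikit-learn purely for illustration; this is not the current error-analysis code path.

```python
# Hypothetical sketch: fit the error-tree surrogate on caller-supplied
# per-instance scores (e.g., per-sample BLEU) and report which metric
# was used. Not the current error-analysis code path.
import numpy as np
from sklearn.tree import DecisionTreeRegressor, export_text

def fit_error_tree(features, per_instance_scores, metric_name, max_depth=3):
    """Train a shallow surrogate tree on per-instance metric values."""
    tree = DecisionTreeRegressor(max_depth=max_depth, random_state=0)
    tree.fit(features, per_instance_scores)
    print(f"Error tree trained on metric: {metric_name}")
    return tree

# Toy tabular features derived from the text inputs, plus a per-sample score.
X = np.array([[12, 0], [45, 1], [30, 0], [8, 1]])  # token_count, has_reference
scores = np.array([0.9, 0.2, 0.5, 0.8])
tree = fit_error_tree(X, scores, metric_name="bleu")
print(export_text(tree, feature_names=["token_count", "has_reference"]))
```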