
fix: potential ai hallucination with opportunity refinement ❗️ #689

Merged
ramiAbdou merged 3 commits into main from rami/bug on Dec 21, 2024

Conversation

ramiAbdou
Member

Description ✏️

This PR potentially fixes an LLM hallucination issue when refining an opportunity with AI.
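The diff itself isn't shown in this conversation, but a common way to guard against this kind of hallucination is to tell the model to return `null` rather than guess, and then validate its structured output before persisting anything. The sketch below is illustrative only; the schema and helper names (`RefinedOpportunity`, `refineOpportunity`, `getChatCompletion`) are assumptions, not code from this repository.

```ts
// Hypothetical sketch: validate the LLM's "refine opportunity" reply so a
// hallucinated or malformed response never overwrites real data.
import { z } from 'zod';

const RefinedOpportunity = z.object({
  company: z.string().trim().min(1),
  title: z.string().trim().min(1),
  description: z.string().trim().min(1),
  // The prompt instructs the model to return null instead of guessing a date.
  expiresAt: z.coerce.date().nullable(),
  tags: z.array(z.string()).max(10),
});

type RefinedOpportunity = z.infer<typeof RefinedOpportunity>;

async function refineOpportunity(
  rawText: string,
  getChatCompletion: (prompt: string) => Promise<string>
): Promise<RefinedOpportunity | null> {
  const prompt = [
    'Extract the opportunity below into JSON.',
    'If a field is not explicitly present, use null. Do NOT invent values.',
    '',
    rawText,
  ].join('\n');

  const reply = await getChatCompletion(prompt);

  // Models sometimes wrap JSON in code fences or prose; strip conservatively.
  const json = reply.replace(/```(json)?/g, '').trim();

  let parsed: unknown;

  try {
    parsed = JSON.parse(json);
  } catch {
    return null; // The reply wasn't valid JSON at all.
  }

  // If the reply doesn't match the schema, treat it as a hallucination and
  // bail out instead of persisting bad data.
  const result = RefinedOpportunity.safeParse(parsed);

  return result.success ? result.data : null;
}
```

The caller can then fall back to the opportunity's existing fields whenever `refineOpportunity` returns `null`, so a bad completion degrades to a no-op rather than corrupted data.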

Type of Change 🐞

  • Feature - A non-breaking change which adds functionality.
  • Fix - A non-breaking change which fixes an issue.
  • Refactor - A change that neither fixes a bug nor adds a feature.
  • Documentation - A change only to in-code or markdown documentation.
  • Tests - A change that adds missing unit/integration tests.
  • Chore - A change that is likely none of the above.

Checklist ✅

  • I have done a self-review of my code.
  • I have manually tested my code (if applicable).
  • I have added/updated any relevant documentation (if applicable).

@ramiAbdou ramiAbdou self-assigned this Dec 21, 2024
@ramiAbdou ramiAbdou changed the title from "fix: potential llm hallucination with opportunity refinement ❗️" to "fix: potential ai hallucination with opportunity refinement ❗️" Dec 21, 2024
@ramiAbdou ramiAbdou marked this pull request as ready for review December 21, 2024 07:21
@ramiAbdou ramiAbdou merged commit 72ccc8f into main Dec 21, 2024
1 check passed
@ramiAbdou ramiAbdou deleted the rami/bug branch December 21, 2024 07:30