feat: Add ConversationMemory and enhance AnnotationMessage #238
Merged
svilupp merged 137 commits into main from devin/1732559303-add-conversation-memory-and-annotation on Nov 26, 2024
Conversation
This reverts commit ff4aee0.
Add docs link
* Qwen 72B and open-router Google Flash added.
* Fixed the pricing information.
* Pricing updated for gemini-flash-1.5-8b.
* Removed orgfexp because the model is heavily rate limited.
* Following naming conventions for the qwen aliases.
* Add GoogleOpenAISchema with OpenAI compatibility mode
  - Add GoogleOpenAISchema struct with documentation
  - Implement create_chat method with Google API base URL
  - Implement create_embedding method with Google API base URL
  - Use GOOGLE_API_KEY for authentication
* test: Add tests for GoogleOpenAISchema implementation
* test: Enhance GoogleOpenAISchema tests to improve coverage
* Move GoogleOpenAISchema tests to llm_openai_schema_def.jl
* feat: Add Gemini 1.5 models and aliases to user_preferences.jl
  - Added new models: gemini-1.5-pro-latest, gemini-1.5-flash-8b-latest, gemini-1.5-flash-latest
  - Added corresponding aliases: gem15p, gem15f8, gem15f
  - Set pricing according to Google's pay-as-you-go rates: Input: $0.075/1M tokens, Output: $0.30/1M tokens
* Update Gemini model pricing to reflect correct per-million token rates
* revert: Remove unintended changes to test/llm_google.jl
* revert: restore test/llm_google.jl to original state from main branch
* feat: Add GoogleProvider with proper Bearer token auth and update GoogleOpenAISchema methods
  - Add GoogleProvider struct with proper Bearer token authentication
  - Update create_chat to use GoogleProvider and openai_request
  - Update create_embeddings to use GoogleProvider and openai_request
  - Maintain consistent URL handling up to /v1beta
* feat: Add Gemini models to registry and fix GoogleOpenAISchema tests
  - Add Gemini 1.5 models (Pro, Flash, Flash 8b) with correct pricing
  - Fix GoogleOpenAISchema tests to properly handle GOOGLE_API_KEY
  - Save and restore original API key value during testing
* fix: restore gpt-4-turbo-preview model and ensure correct model ordering
* Add comprehensive logging to GoogleOpenAISchema test mock servers
  - Add detailed request header logging
  - Track authorization header values and expectations
  - Log request body content and responses
  - Improve debugging capabilities for test failures
* fix: Update auth_header in GoogleProvider to use OpenAIProvider implementation
* chore: prepare release v0.63.0
  - Update version to 0.63.0
  - Add warning about token/cost counting limitations in GoogleOpenAISchema
  - Update changelog with correct model aliases (gem15p, gem15f, gem15f8)
  - Remove logging from test/llm_openai_schema_def.jl
---------
Co-authored-by: devin-ai-integration[bot] <158243242+devin-ai-integration[bot]@users.noreply.github.com>
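To make the compatibility mode described in these commits concrete, here is a minimal usage sketch. GoogleOpenAISchema, the GOOGLE_API_KEY environment variable, and the gem15f alias all come from the commit messages above; the exact aigenerate call and its keyword handling are an assumption and may differ from the released API.

```julia
# Sketch only, assuming the GoogleOpenAISchema and "gem15f" alias introduced above.
# GOOGLE_API_KEY is expected in the environment for authentication.
using PromptingTools
const PT = PromptingTools

msg = aigenerate(PT.GoogleOpenAISchema(), "What is the capital of France?"; model = "gem15f")
println(msg.content)
```

Note that, per the v0.63.0 release commit above, token/cost counting is limited for this schema, so usage accounting on the returned message may be incomplete.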
- Implement ConversationMemory struct for efficient message history management
- Add batch-aware truncation and caching capabilities
- Enhance AnnotationMessage with comprehensive filtering tests across providers
- Add tests for edge cases and multiple consecutive annotations
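The first two bullets describe a message-history container with batch-aware truncation and caching. Below is a purely illustrative, self-contained sketch of the truncation idea, using hypothetical names; it is not the PR's implementation and omits the batch-aware caching behaviour entirely.

```julia
# Illustrative sketch only (hypothetical names): a minimal message-history
# container with "last n" truncation that always keeps the first
# (system-style) message, so instructions are never trimmed away.
struct Memory
    messages::Vector{String}
end
Memory() = Memory(String[])

Base.push!(mem::Memory, msg::String) = (push!(mem.messages, msg); mem)

"Return up to `n` messages: the first one plus the most recent ones."
function get_last(mem::Memory, n::Integer)
    msgs = mem.messages
    length(msgs) <= n && return copy(msgs)
    return vcat(msgs[1:1], msgs[end - n + 2:end])
end

mem = Memory()
for m in ["system prompt", "user: hi", "ai: hello", "user: bye", "ai: bye!"]
    push!(mem, m)
end
@assert get_last(mem, 3) == ["system prompt", "user: bye", "ai: bye!"]
```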
…' of https://github.com/svilupp/PromptingTools.jl into devin/1732559303-add-conversation-memory-and-annotation
svilupp force-pushed the devin/1732559303-add-conversation-memory-and-annotation branch from 2086464 to 7b79d8b on November 26, 2024 at 10:32
Codecov Report
Attention: Patch coverage is …
Additional details and impacted files:

@@           Coverage Diff            @@
##             main     #238      +/-   ##
==========================================
+ Coverage   91.98%   92.03%   +0.05%
==========================================
  Files          47       49       +2
  Lines        4640     4771     +131
==========================================
+ Hits         4268     4391     +123
- Misses        372      380       +8

☔ View full report in Codecov by Sentry.
svilupp deleted the devin/1732559303-add-conversation-memory-and-annotation branch on November 26, 2024 at 19:43
Add ConversationMemory and enhance AnnotationMessage
Changes
This PR introduces two major features:
- ConversationMemory Implementation: a ConversationMemory struct for efficient message history management
- Enhanced AnnotationMessage Testing (sketched below)

Testing
- test/memory.jl and related files
- test/annotation_messages_render.jl
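The annotation-rendering tests listed above check that annotation messages never reach the provider payload. Here is a self-contained illustration of that filtering behaviour, using hypothetical stand-in types rather than the package's own types or render methods.

```julia
# Hypothetical stand-ins, not PromptingTools.jl types: the point is only that
# annotation messages are dropped before a conversation is sent to a provider.
abstract type AbstractMsg end
struct UserMsg <: AbstractMsg
    content::String
end
struct AnnotationMsg <: AbstractMsg   # internal note, never sent to an API
    content::String
end

"Drop annotation messages before rendering a conversation for a provider."
strip_annotations(conv::Vector{AbstractMsg}) = filter(m -> !(m isa AnnotationMsg), conv)

conv = AbstractMsg[UserMsg("Hi"), AnnotationMsg("reviewer note"),
                   AnnotationMsg("follow-up"), UserMsg("Bye")]
@assert length(strip_annotations(conv)) == 2   # consecutive annotations are both removed
```

The two back-to-back annotations in the example mirror the "multiple consecutive annotations" edge case called out in the commit notes.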
Implementation Details
Link to Devin run: https://preview.devin.ai/devin/1313c322110e474eb7be51c54071a39c
If you have any feedback, you can leave comments in the PR and I'll address them in the app!