
Add AnnotationMessage implementation #242

Closed

Conversation

devin-ai-integration[bot] (Contributor) commented on Nov 26, 2024

Add ConversationMemory and enhance AnnotationMessage

Changes

This PR introduces two major features:

  1. ConversationMemory Implementation

    • New ConversationMemory struct for efficient message history management
    • Smart truncation with batch-size awareness
    • Preservation of system messages and initial context
    • Run ID-based deduplication for message appending
    • Direct integration with PromptingTools.jl's aigenerate function
  2. Enhanced AnnotationMessage Testing

    • Comprehensive tests for annotation message filtering across all providers
    • Edge case coverage for multiple consecutive annotations
    • Verification of metadata isolation in rendered output
    • Tests for all annotation message fields (extras, tags, comments)
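The ConversationMemory behavior listed above can be sketched roughly as follows. This is an illustrative Python analogue, not the Julia implementation in PromptingTools.jl: the struct fields, the `get_last`/`append` names, and the batch-boundary heuristic are all assumptions made for the sketch.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Message:
    role: str                      # "system", "user", or "assistant"
    content: str
    run_id: Optional[int] = None   # id of the generation run that produced it

@dataclass
class ConversationMemory:
    batch_size: int = 10
    messages: List[Message] = field(default_factory=list)

    def append(self, new_messages: List[Message]) -> None:
        """Run-ID-based deduplication: skip messages from runs already stored."""
        seen = {m.run_id for m in self.messages if m.run_id is not None}
        for m in new_messages:
            if m.run_id is None or m.run_id not in seen:
                self.messages.append(m)

    def get_last(self, n: int) -> List[Message]:
        """Return a recent window, cut at a batch boundary, while always
        preserving system messages and the first user message."""
        system = [m for m in self.messages if m.role == "system"]
        rest = [m for m in self.messages if m.role != "system"]
        if len(rest) <= n:
            return system + rest
        first_user = next((m for m in rest if m.role == "user"), None)
        # Rounding the cut down to a batch boundary makes truncation
        # deterministic, so repeated calls share an identical message
        # prefix (which is what makes provider-side caching effective).
        keep = (n // self.batch_size) * self.batch_size or n
        tail = rest[-keep:]
        head = [first_user] if first_user and first_user not in tail else []
        return system + head + tail
```

With `batch_size=10`, asking for the last 10 of 25 non-system messages returns the system prompt, the first user message, and the last 10 messages, and the cut point only moves in steps of 10.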

Testing

  • Added comprehensive test suite in test/memory.jl and related files
  • Enhanced annotation message render tests in test/annotation_messages_render.jl
  • All tests passing across all providers

Implementation Details

  • ConversationMemory maintains conversation history in fixed-size batches
  • Smart truncation preserves system messages and first user message
  • Annotation messages are properly filtered out during rendering across all providers
  • Efficient caching through deterministic truncation points
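The filtering step described above amounts to dropping annotation entries before the provider payload is built. A minimal Python sketch of the idea (the actual code is Julia, and the dict shape here is an assumption):

```python
def render_messages(messages):
    """Drop annotation entries before building the provider payload.

    Annotation messages carry human/tooling metadata (extras, tags,
    comments) and must never be sent to the LLM provider.
    """
    return [m for m in messages if m.get("type") != "annotation"]
```

Note that a plain filter handles the edge case of multiple consecutive annotations for free, which is one of the cases the new tests cover.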

Link to Devin run: https://preview.devin.ai/devin/1313c322110e474eb7be51c54071a39c

If you have any feedback, you can leave comments in the PR and I'll address them in the app!

- Implement ConversationMemory struct for efficient message history management
- Add batch-aware truncation and caching capabilities
- Enhance AnnotationMessage with comprehensive filtering tests across providers
- Add tests for edge cases and multiple consecutive annotations
- Implement ConversationMemory with batch-aware message truncation
- Add AnnotationMessage type for metadata and documentation
- Add comprehensive test suites for both features
- Ensure proper rendering behavior for annotation messages
- Add AnnotationMessage type for metadata and documentation
- Implement ConversationMemory for efficient message history management
- Add comprehensive test suite for both features
- Update Project.toml dependencies
- Adjust test configurations
- Update test files for memory implementation
- Add AnnotationMessage type for storing metadata with messages
- Implement ConversationMemory for efficient message history management
- Add comprehensive test suite for both features
- Ensure annotation messages are filtered from LLM rendering
- Add AnnotationMessage struct for metadata and documentation
- Implement annotate! utility for single/vector messages
- Add comprehensive tests for construction and rendering
- Ensure proper rendering skipping across all providers
- Add serialization support
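A hypothetical Python analogue of such an annotate! helper, handling both a single message and a vector of messages (the signature, field names, and append-at-end placement are all assumptions; the real Julia utility may differ):

```python
def annotate(messages, content, *, tags=(), comment="", extras=None):
    """Attach an annotation entry to a single message or a list of messages."""
    note = {
        "type": "annotation",
        "content": content,
        "tags": list(tags),
        "comment": comment,
        "extras": dict(extras or {}),
    }
    if isinstance(messages, dict):   # single message -> promote to a vector
        messages = [messages]
    return list(messages) + [note]
```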

Closing to create a more focused PR with just the AnnotationMessage changes
