
BUG: hard coded to use gpt-4o model #51

Closed
michaelneale opened this issue Sep 8, 2024 · 1 comment

Comments

@michaelneale
Collaborator

michaelneale commented Sep 8, 2024

Somehow it is always passing in gpt-4o / gpt-4o-mini, even when calling Anthropic providers.

e.g. in exchange:

    def complete(
        self,
        model: str,
        system: str,
        messages: List[Message],
        tools: List[Tool] = [],
        **kwargs: Dict[str, Any],
    ) -> Tuple[Message, Usage]:
        print("anthropic model", model)

This always shows a GPT-4 model, despite the config being:

anthropic:
  provider: anthropic
  processor: claude-3-5-sonnet-20240620
  accelerator: claude-3-5-sonnet-20240620
  moderator: truncate
  toolkits:
  - name: developer
    requires: {}

Seems like a regression, @baxen @lukealvoeiro, possibly from some refactoring of the provider loading?

edit: this is due to truncate.py having the model hard coded; fix: square/exchange#35
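
For anyone hitting the same thing, the root cause described above (a moderator module defaulting to a hard-coded model instead of the configured one) can be sketched roughly like this. Names such as `ContextTruncate` and `make_truncate` are illustrative, not the actual exchange code:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ContextTruncate:
    """Moderator that truncates context when it grows too large.

    Buggy shape: the model used for truncation defaults to a hard-coded
    "gpt-4o", and that default leaks into completion calls even when the
    user has configured an Anthropic model in their profile.
    """
    # BUG: hard-coded default silently overrides the configured model
    model: str = "gpt-4o"

def make_truncate(configured_model: Optional[str] = None) -> ContextTruncate:
    """Fixed shape: prefer the model from the user's profile config
    (e.g. the `processor` entry in the YAML above), falling back to the
    default only when nothing is configured."""
    if configured_model is not None:
        return ContextTruncate(model=configured_model)
    return ContextTruncate()

# With the fix, the configured Anthropic model flows through:
mod = make_truncate("claude-3-5-sonnet-20240620")
assert mod.model == "claude-3-5-sonnet-20240620"
```

The fix in square/exchange#35 amounts to threading the configured model into the moderator rather than letting a default constant win.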

@michaelneale michaelneale changed the title BUG: goose hard coding gpt-4o models BUG: goose hard coding gpt-4o models and not using config processor/accelerator Sep 8, 2024
@michaelneale michaelneale changed the title BUG: goose hard coding gpt-4o models and not using config processor/accelerator BUG: hard coded to use gpt-4o model Sep 8, 2024
@lukealvoeiro
Contributor

I have an idea about this, will fix tomorrow morning - Monday PST.
