Looks like litellm does a (too) good job of encapsulating the calls to OpenAI, so calling local OpenAI-API-based models requires a proxy to intercept and re-route the requests (a sketch of what I mean is below).

Is this the recommended approach for the time being? Are there any plans to drop the litellm dependency, switch to one that's a little more open, or write your own layer?

It would be nice to use this with swappable models, especially since AC seems to generalize across general instruct models and doesn't require function-calling models.
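For concreteness, here is a minimal sketch of the kind of intercepting proxy I mean. It assumes a local OpenAI-API-compatible server at `http://localhost:8080/v1` and that the tool's OpenAI base URL can be pointed at the proxy (e.g. via the `OPENAI_API_BASE` environment variable); the URLs, ports, and the `reroute` handler are illustrative, not anything this repo ships:

```python
# Minimal sketch of an intercepting proxy (illustrative only).
# Assumptions: a local OpenAI-compatible server at LOCAL_BACKEND, and that
# the calling tool's OpenAI base URL can be pointed at this proxy.
from flask import Flask, Response, request
import requests

LOCAL_BACKEND = "http://localhost:8080/v1"  # assumed local model server

app = Flask(__name__)

@app.route("/v1/<path:path>", methods=["GET", "POST"])
def reroute(path: str):
    # Forward the OpenAI-style request to the local backend unchanged.
    upstream = requests.request(
        method=request.method,
        url=f"{LOCAL_BACKEND}/{path}",
        headers={k: v for k, v in request.headers if k.lower() != "host"},
        data=request.get_data(),
        timeout=600,
    )
    # Drop hop-by-hop/recomputed headers before relaying the response.
    headers = [
        (k, v) for k, v in upstream.headers.items()
        if k.lower() not in ("transfer-encoding", "connection",
                             "content-encoding", "content-length")
    ]
    return Response(upstream.content, upstream.status_code, headers)

if __name__ == "__main__":
    app.run(host="127.0.0.1", port=8000)
```

With that running, something like `export OPENAI_API_BASE=http://127.0.0.1:8000/v1` would re-route the OpenAI traffic through the proxy, assuming the underlying client honors that variable.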
I added AWS Bedrock support in this MR. I'm not sure how good the results would be with local models, since they are somewhat weak at development tasks, but the code seems to support them, i.e. you can try something like `model="huggingface/deepseek-ai/deepseek-coder-33b-instruct"` in configuration.toml.
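For reference, a sketch of what that setting might look like in configuration.toml (the `[config]` section name is an assumption; only the model string comes from the comment above):

```toml
# Hypothetical configuration.toml snippet; the section name is an assumption,
# only the model string is taken from the comment above.
[config]
model = "huggingface/deepseek-ai/deepseek-coder-33b-instruct"
```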