
Issue with Helicone and custom LLMs #313

Open
probablykabari opened this issue Nov 9, 2024 · 2 comments

Comments

@probablykabari

Path: /qstash/integrations/llm

When using a custom LLM provider, the Helicone integration doesn't seem to work. I think this is related to the URL used for the completion request.

For example, when using Groq the gateway URL should be https://groq.helicone.ai/openai/v1, but the URL in the Upstash SDK is set to https://gateway.helicone.ai/v1. As things stand, the LLM request fails.
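
For reference, a minimal sketch of the kind of call that currently fails; the exact custom() options, model name, env vars and callback URL here are placeholders, not the SDK's required shape:

import { Client, custom } from "@upstash/qstash";

const client = new Client({ token: process.env.QSTASH_TOKEN! });

await client.publishJSON({
  api: {
    name: "llm",
    // Groq as a custom provider (placeholder token)
    provider: custom({ token: process.env.GROQ_API_KEY! }),
    // with the current SDK this always routes through https://gateway.helicone.ai/v1
    analytics: { name: "helicone", token: process.env.HELICONE_API_KEY! },
  },
  body: {
    model: "llama-3.1-8b-instant",
    messages: [{ role: "user", content: "hello" }],
  },
  callback: "https://example.com/qstash-callback",
});
// Groq expects https://groq.helicone.ai/openai/v1, so the completion request fails.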

@CahidArda
Contributor

Hi @probablykabari,

We looked into this and have a fix on the way. It will be similar to how it works in the test we wrote:

// Assuming `Client` and `custom` are the SDK's exports, and that llmToken,
// analyticsToken, model and callback are defined elsewhere in the test.
import { Client, custom } from "@upstash/qstash";

const client = new Client({ token: process.env.QSTASH_TOKEN! });

await client.publishJSON({
  api: {
    name: "llm",
    // custom LLM provider authenticated with its own token
    provider: custom({ token: llmToken }),
    analytics: {
      name: "helicone",
      token: analyticsToken,
      // provider-specific Helicone gateway instead of the default one
      baseUrl: "https://groq.helicone.ai/openai",
    },
  },
  body: {
    model,
  },
  callback,
});
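
The key piece is the new baseUrl field under analytics: it lets the caller point the Helicone gateway at a provider-specific host such as groq.helicone.ai instead of the default gateway.helicone.ai URL.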

@probablykabari
Author

Nice! I started on a PR myself but got sidetracked. My approach, though, was to make the interface more similar to a custom provider (i.e. configured with a function).
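
Purely as an illustration of that idea (nothing like this exists in the SDK), the analytics entry could be built by a helper function, mirroring how custom() works for providers; helicone() and its option names below are hypothetical:

// Hypothetical helper, mirroring the provider-style interface; not part of @upstash/qstash.
const helicone = (opts: { token: string; baseUrl?: string }) => ({
  name: "helicone" as const,
  token: opts.token,
  baseUrl: opts.baseUrl ?? "https://gateway.helicone.ai/v1",
});

await client.publishJSON({
  api: {
    name: "llm",
    provider: custom({ token: llmToken }),
    // analytics configured through a function, like a custom provider
    analytics: helicone({ token: analyticsToken, baseUrl: "https://groq.helicone.ai/openai" }),
  },
  body: { model },
  callback,
});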
