How to use OpenAI's o1-preview model? #4037
-
I added the o1-preview model to the .env file, but I got an error when using it:
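For context, the list of OpenAI models shown in LibreChat is typically controlled by the OPENAI_MODELS variable in .env (the variable name follows LibreChat's .env.example; the exact model list here is illustrative):

```
# .env — comma-separated list of models to expose in the UI
OPENAI_MODELS=o1-preview,gpt-4o,gpt-3.5-turbo
```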
-
Hi @danny-avila, again something went wrong. Here's the specific error message we encountered:

Failed to send message. HTTP 404 - { "error": { "message": "This is a chat model and not supported in the v1/completions endpoint. Did you mean to use v1/chat/completions?", "type": "invalid_request_error", "param": "model", "code": null } }

The version I'm using is LibreChat v0.7.5-rc2, deployed on Kubernetes via the Helm chart. It worked with the Docker image on my local machine, though. Any suggestions, or is this a bug we have to wait for a new release to fix? Thank you
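The HTTP 404 above means the request for a chat model reached the legacy /v1/completions endpoint. A minimal sketch of the correct target for o1-preview (illustrative; send the payload with any HTTP client plus an Authorization: Bearer header carrying your API key):

```python
import json

# o1-preview is a chat model, so requests must go to the
# chat completions endpoint, not the legacy completions one.
CHAT_ENDPOINT = "https://api.openai.com/v1/chat/completions"

# Minimal chat-format request body for o1-preview.
payload = {
    "model": "o1-preview",
    "messages": [{"role": "user", "content": "Hello"}],
}

print(CHAT_ENDPOINT)
print(json.dumps(payload))
```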
-
I guess the problem is that the API uses an old version of openai-node. That version does not include the new models, so it tries to use the completions endpoint instead of the chat completions endpoint. Latest: https://github.com/openai/openai-node/blob/master/src/resources/chat/chat.ts
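An illustrative sketch of the failure mode described above (this is not the actual openai-node source; the model list and helper are hypothetical): a client that hard-codes which models count as "chat" models will route any model missing from that list to the legacy completions endpoint, producing exactly the HTTP 404 seen earlier.

```python
# Hypothetical, outdated chat-model list: o1-preview is missing.
OUTDATED_CHAT_MODELS = {"gpt-3.5-turbo", "gpt-4"}

def pick_endpoint(model: str) -> str:
    """Route a model name to an API path based on a stale allowlist."""
    if model in OUTDATED_CHAT_MODELS:
        return "/v1/chat/completions"
    # Unknown models fall through to the legacy endpoint -> HTTP 404
    # for chat-only models such as o1-preview.
    return "/v1/completions"

print(pick_endpoint("gpt-4"))       # routed correctly
print(pick_endpoint("o1-preview"))  # routed to the wrong endpoint
```

Updating the dependency (or LibreChat itself) refreshes the routing so new models land on the chat completions endpoint.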
-
v0.7.5 is now live, which includes the o1 updates. If you are seeing a green icon and an error, you're using an older version of LibreChat. At the time of writing, the Docker image for releases is still building, but should be done momentarily.
https://www.librechat.ai/changelog/v0.7.5