When trying to process an alert with soc-ai, the Docker logs return error 400.
Here's the relevant part of the error in the socai Docker container:
request to GPT: status code '400' received '{
"error": {
"message": "This model's maximum context length is 16385 tokens. However, your messages resulted in 19181 tokens. Please reduce the length of the messages.",
"type": "invalid_request_error",
"param": "messages",
"code": "context_length_exceeded"
}
}'
I think soc-ai's default model is gpt-3.5-turbo-16k, which needs to be changed to gpt-4-turbo to support more than 16K tokens.
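Until the default model is changed, one workaround is to check the prompt size before sending it and fall back to a larger-context model only when needed. The sketch below is a hypothetical guard, not soc-ai's actual code: the ~4 characters-per-token estimate is a crude heuristic (not a real tokenizer), and the model names and context limits are the published OpenAI values at the time of writing.

```python
# Rough guard against the "context_length_exceeded" 400 error: pick a model
# whose context window can hold the estimated prompt plus a completion
# reserve. Token counts use a crude ~4 chars-per-token heuristic, not an
# exact tokenizer, so this is an approximation only.
MODEL_LIMITS = {
    "gpt-3.5-turbo-16k": 16385,  # the default that triggered the 400 error
    "gpt-4-turbo": 128000,       # large enough for the 19181-token alert
}


def estimate_tokens(text: str) -> int:
    """Very rough token estimate (~4 characters per token for English)."""
    return max(1, len(text) // 4)


def pick_model(messages: list[str], reserve: int = 1024) -> str:
    """Return the smallest model whose context fits the messages plus a
    reserve for the completion; raise if none is large enough."""
    needed = sum(estimate_tokens(m) for m in messages) + reserve
    for model, limit in sorted(MODEL_LIMITS.items(), key=lambda kv: kv[1]):
        if needed <= limit:
            return model
    raise ValueError(f"prompt of ~{needed} tokens exceeds all known models")
```

An alternative to switching models is truncating or summarizing the alert before it is sent, which keeps the cheaper model usable for most alerts.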