Forced JSON mode on all models that support it. #57
base: main
Conversation
@@ -127,7 +127,7 @@ async function getAIResponse(prompt: string): Promise<Array<{
   const response = await openai.chat.completions.create({
     ...queryConfig,
     // return JSON if the model supports it:
-    ...(OPENAI_API_MODEL === "gpt-4-1106-preview"
+    ...(OPENAI_API_MODEL === "gpt-4-turbo-preview" || OPENAI_API_MODEL === "gpt-4-turbo" || OPENAI_API_MODEL === "gpt-3.5-turbo" || OPENAI_API_MODEL === "gpt-4-0125-preview" || OPENAI_API_MODEL === "gpt-4-1106-preview" || OPENAI_API_MODEL === "gpt-3.5-turbo-0125" || OPENAI_API_MODEL === "gpt-3.5-turbo-1106"
Oh this was a good catch! Where did you find the list? 🤔 Looks like most if not all the chat models are listed?
Could you add
It would be better to have something like this, so that it is much easier to maintain in the future:

const SUPPORTS_JSON_FORMAT = [
  "gpt-4o",
  "gpt-4-turbo-preview",
  "gpt-4-turbo",
  "gpt-3.5-turbo",
  "gpt-4-0125-preview",
  "gpt-4-1106-preview",
  "gpt-3.5-turbo-0125",
  "gpt-3.5-turbo-1106",
];

// ...omitted
const response = await openai.chat.completions.create({
  ...queryConfig,
  // return JSON if the model supports it:
  ...(SUPPORTS_JSON_FORMAT.includes(OPENAI_API_MODEL)
    ? { response_format: { type: "json_object" } }
    : {}),
  messages: [
    {
      role: "system",
      content: prompt,
    },
  ],
});
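The suggestion above relies on the conditional-spread pattern: spreading an empty object adds no keys, so `response_format` only appears in the request when the model is in the allow-list. A minimal, self-contained sketch of that pattern (the `buildOptions` helper and the two-entry model list are illustrative, not from this repo or the OpenAI SDK):

```typescript
// Illustrative allow-list; the real PR uses the full set of JSON-capable models.
const SUPPORTS_JSON_FORMAT = [
  "gpt-4-turbo",
  "gpt-3.5-turbo",
];

// Build request options: the spread contributes the response_format key only
// when the model is in the allow-list; otherwise it spreads an empty object
// and adds nothing.
function buildOptions(model: string): Record<string, unknown> {
  return {
    model,
    ...(SUPPORTS_JSON_FORMAT.includes(model)
      ? { response_format: { type: "json_object" } }
      : {}),
  };
}
```

Centralizing the list in one array also means future models need a one-line change instead of another `||` clause in the conditional.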
This is a fix for Issue #56, which was ultimately caused by invalid JSON output from the model. It forces JSON mode on all models that support it.

JSON mode was previously only forced on gpt-4-1106-preview, but OpenAI supports JSON mode on the latest GPT-3.5 and GPT-4 Turbo models. See https://platform.openai.com/docs/guides/text-generation/json-mode.

Put gpt-4-turbo-preview, gpt-4-turbo, and gpt-3.5-turbo first in the conditional for efficiency. Put gpt-4-turbo (a model that does not exist yet) in the conditional to future-proof the check once GPT-4 Turbo goes out of preview.
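Since Issue #56 stems from malformed JSON crashing the parse step, forcing JSON mode can be paired with a defensive parse as a belt-and-braces measure. A hedged sketch, assuming the response is expected to be an object with a `reviews` array (the `safeParseReviews` helper and its field names are hypothetical, not part of this repo):

```typescript
// Defensively parse model output: even with JSON mode forced, a malformed
// response should degrade to "no comments" rather than throw (cf. Issue #56).
function safeParseReviews(
  raw: string
): Array<{ lineNumber: string; reviewComment: string }> {
  try {
    const parsed = JSON.parse(raw);
    // Only accept the expected shape; anything else yields an empty list.
    return Array.isArray(parsed?.reviews) ? parsed.reviews : [];
  } catch {
    return []; // invalid JSON: skip commenting instead of crashing the action
  }
}
```

With this in place, JSON mode becomes an optimization for response quality rather than a correctness requirement.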