diff --git a/CHANGELOG.md b/CHANGELOG.md index 8532cf5f8..c86656267 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -7,11 +7,17 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0 ## [Unreleased] ### Added
-- Improved AICode parsing and error handling (eg, capture more REPL prompts, detect parsing errors earlier, parse more code fence types), including the option to remove unsafe code (eg, `Pkg.add("SomePkg")`) with `AICode(msg; skip_unsafe=true, vebose=true)`
-- Added new prompt templates: `JuliaRecapTask`, `JuliaRecapCoTTask`, `JuliaExpertTestCode` and updated `JuliaExpertCoTTask` to be more robust against early stopping for smaller OSS models
### Fixed
+## [0.4.0]
+
+### Added
+- Improved AICode parsing and error handling (eg, capture more REPL prompts, detect parsing errors earlier, parse more code fence types), including the option to remove unsafe code (eg, `Pkg.add("SomePkg")`) with `AICode(msg; skip_unsafe=true, verbose=true)`
+- Added new prompt templates: `JuliaRecapTask`, `JuliaRecapCoTTask`, `JuliaExpertTestCode` and updated `JuliaExpertCoTTask` to be more robust against early stopping for smaller OSS models
+- Added support for the MistralAI API via `MistralOpenAISchema()`. All their standard models have been registered, so you should be able to just use `model="mistral-tiny"` in your `aigenerate` calls without any further changes. Remember to either provide `api_kwargs.api_key` or ensure you have the ENV variable `MISTRALAI_API_KEY` set.
+- Added support for any OpenAI-compatible API via `schema=CustomOpenAISchema()`. All you have to do is provide your `api_key` and `url` (base URL of the API) in the `api_kwargs` keyword argument. This option is useful if you use [Perplexity.ai](https://docs.perplexity.ai/), [Fireworks.ai](https://app.fireworks.ai/), or any other similar services.
+ ## [0.3.0] ### Added diff --git a/Project.toml b/Project.toml index 5957c08c7..355366810 100644 --- a/Project.toml +++ b/Project.toml @@ -1,7 +1,7 @@ name = "PromptingTools" uuid = "670122d1-24a8-4d70-bfce-740807c42192" authors = ["J S @svilupp and contributors"]
-version = "0.4.0-DEV"
+version = "0.4.0"
[deps] Base64 = "2a0f44e3-6c83-55bd-87e4-b1978d98bd5f" diff --git a/README.md b/README.md index 0dbf0978f..532e260ce 100644 --- a/README.md +++ b/README.md @@ -79,6 +79,7 @@ For more practical examples, see the `examples/` folder and the [Advanced Exampl - [Data Extraction](#data-extraction) - [OCR and Image Comprehension](#ocr-and-image-comprehension) - [Using Ollama models](#using-ollama-models)
+ - [Using MistralAI API and other OpenAI-compatible APIs](#using-mistralai-api-and-other-openai-compatible-apis)
- [More Examples](#more-examples) - [Package Interface](#package-interface) - [Frequently Asked Questions](#frequently-asked-questions) @@ -395,6 +396,38 @@ msg.content # 4096×2 Matrix{Float64}: If you're getting errors, check that Ollama is running - see the [Setup Guide for Ollama](#setup-guide-for-ollama) section below.
+### Using MistralAI API and other OpenAI-compatible APIs
+
+Mistral models have long dominated the open-source space. They are now available via their API, so you can use them with PromptingTools.jl!
+
+```julia
+msg = aigenerate("Say hi!"; model="mistral-tiny")
+```
+
+It all just works, because we have registered the models in the `PromptingTools.MODEL_REGISTRY`! There are currently 4 models available: `mistral-tiny`, `mistral-small`, `mistral-medium`, `mistral-embed`.
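+Each registered model also carries per-token costs, which is how the cost reported in the logs is computed. As a rough standalone sketch (the rates below are the `mistral-tiny` values registered in this release; they may change, and the package's internal accounting may differ):
+
+```julia
+# Standalone sketch of the cost arithmetic (not the package's internal code)
+cost_per_prompt_token = 1.4e-7       # USD, mistral-tiny input rate from the registry
+cost_per_completion_token = 4.53e-7  # USD, mistral-tiny output rate from the registry
+
+prompt_tokens, completion_tokens = 100, 50
+cost = prompt_tokens * cost_per_prompt_token +
+       completion_tokens * cost_per_completion_token
+# cost ≈ 3.665e-5 USD, ie, a few thousandths of a cent
+```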
+
+Under the hood, we use a dedicated schema `MistralOpenAISchema` that leverages most of the OpenAI-specific code base, so you can always provide that explicitly as the first argument:
+
+```julia
+const PT = PromptingTools
+msg = aigenerate(PT.MistralOpenAISchema(), "Say Hi!"; model="mistral-tiny", api_key=ENV["MISTRALAI_API_KEY"])
+```
+As you can see, the API key can be loaded either from an ENV variable or via the Preferences.jl mechanism (see `?PREFERENCES` for more information).
+
+But MistralAI is not the only one! There are many other exciting providers, eg, [Perplexity.ai](https://docs.perplexity.ai/), [Fireworks.ai](https://app.fireworks.ai/).
+As long as they are compatible with the OpenAI API (eg, sending `messages` with `role` and `content` keys), you can use them with PromptingTools.jl via `schema = CustomOpenAISchema()`:
+
+```julia
+# Set your API key and the necessary base URL for the API
+api_key = "..."
+prompt = "Say hi!"
+msg = aigenerate(PT.CustomOpenAISchema(), prompt; model="my_model", api_key, api_kwargs=(; url="http://localhost:8081"))
+```
+
+As you can see, it also works for any local models that you might have running on your computer!
+
+Note: At the moment, we only support the `aigenerate` and `aiembed` functions for MistralAI and other OpenAI-compatible APIs. We plan to extend support in the future.
+
### More Examples TBU... @@ -529,6 +562,8 @@ Resources: ### Configuring the Environment Variable for API Key
+This guide is written for OpenAI's API key, but the same steps work for any other API key you might need (eg, `MISTRALAI_API_KEY` for the MistralAI API).
+
To use the OpenAI API with PromptingTools.jl, set your API key as an environment variable:
```julia
diff --git a/docs/src/examples/readme_examples.md b/docs/src/examples/readme_examples.md index 5e2f35ca3..5ec371b73 100644 --- a/docs/src/examples/readme_examples.md +++ b/docs/src/examples/readme_examples.md @@ -286,4 +286,38 @@ msg = aiembed(schema, ["Embed me", "Embed me"]; model="openhermes2.5-mistral") msg.content # 4096×2 Matrix{Float64}: ```
-If you're getting errors, check that Ollama is running - see the [Setup Guide for Ollama](#setup-guide-for-ollama) section below. \ No newline at end of file
+If you're getting errors, check that Ollama is running - see the [Setup Guide for Ollama](#setup-guide-for-ollama) section below.
+
+## Using MistralAI API and other OpenAI-compatible APIs
+
+Mistral models have long dominated the open-source space. They are now available via their API, so you can use them with PromptingTools.jl!
+
+```julia
+msg = aigenerate("Say hi!"; model="mistral-tiny")
+# [ Info: Tokens: 114 @ Cost: $0.0 in 0.9 seconds
+# AIMessage("Hello there! I'm here to help answer any questions you might have, or assist you with tasks to the best of my abilities. How can I be of service to you today? If you have a specific question, feel free to ask and I'll do my best to provide accurate and helpful information. If you're looking for general assistance, I can help you find resources or information on a variety of topics. Let me know how I can help.")
+```
+
+It all just works, because we have registered the models in the `PromptingTools.MODEL_REGISTRY`! There are currently 4 models available: `mistral-tiny`, `mistral-small`, `mistral-medium`, `mistral-embed`.
+
+Under the hood, we use a dedicated schema `MistralOpenAISchema` that leverages most of the OpenAI-specific code base, so you can always provide that explicitly as the first argument:
+
+```julia
+const PT = PromptingTools
+msg = aigenerate(PT.MistralOpenAISchema(), "Say Hi!"; model="mistral-tiny", api_key=ENV["MISTRALAI_API_KEY"])
+```
+As you can see, the API key can be loaded either from an ENV variable or via the Preferences.jl mechanism (see `?PREFERENCES` for more information).
+
+But MistralAI is not the only one! There are many other exciting providers, eg, [Perplexity.ai](https://docs.perplexity.ai/), [Fireworks.ai](https://app.fireworks.ai/).
+As long as they are compatible with the OpenAI API (eg, sending `messages` with `role` and `content` keys), you can use them with PromptingTools.jl via `schema = CustomOpenAISchema()`:
+
+```julia
+# Set your API key and the necessary base URL for the API
+api_key = "..."
+prompt = "Say hi!"
+msg = aigenerate(PT.CustomOpenAISchema(), prompt; model="my_model", api_key, api_kwargs=(; url="http://localhost:8081"))
+```
+
+As you can see, it also works for any local models that you might have running on your computer!
+
+Note: At the moment, we only support the `aigenerate` and `aiembed` functions for MistralAI and other OpenAI-compatible APIs. We plan to extend support in the future. \ No newline at end of file
diff --git a/docs/src/frequently_asked_questions.md b/docs/src/frequently_asked_questions.md index cb51bdeb8..d12612c00 100644 --- a/docs/src/frequently_asked_questions.md +++ b/docs/src/frequently_asked_questions.md @@ -70,6 +70,8 @@ Resources: ## Configuring the Environment Variable for API Key
+This guide is written for OpenAI's API key, but the same steps work for any other API key you might need (eg, `MISTRALAI_API_KEY` for the MistralAI API).
+
To use the OpenAI API with PromptingTools.jl, set your API key as an environment variable:
```julia
diff --git a/src/llm_interface.jl b/src/llm_interface.jl index 990798000..3dc8d2f22 100644 --- a/src/llm_interface.jl +++ b/src/llm_interface.jl @@ -44,6 +44,48 @@ struct OpenAISchema <: AbstractOpenAISchema end inputs::Any = nothing end
+"""
+    CustomOpenAISchema
+
+CustomOpenAISchema() allows the user to call any OpenAI-compatible API.
+
+All the user needs to do is pass this schema as the first argument and provide the base URL of the API to call (`api_kwargs.url`).
+
+# Example
+
+Assumes that we have a local server running at `http://localhost:8081`:
+
+```julia
+api_key = "..."
+prompt = "Say hi!"
+msg = aigenerate(CustomOpenAISchema(), prompt; model="my_model", api_key, api_kwargs=(; url="http://localhost:8081"))
+```
+
+"""
+struct CustomOpenAISchema <: AbstractOpenAISchema end
+
+"""
+    MistralOpenAISchema
+
+MistralOpenAISchema() allows the user to call the MistralAI API, known for its mistral and mixtral models.
+
+It's a flavor of CustomOpenAISchema() with the url preset to `https://api.mistral.ai/v1`.
+
+Most models have been registered, so you don't even have to specify the schema.
+
+# Example
+
+Let's call the `mistral-tiny` model:
+```julia
+api_key = "..." # can be set via ENV["MISTRALAI_API_KEY"] or via our preference system
+msg = aigenerate("Say hi!"; model="mistral-tiny", api_key)
+```
+
+See `?PREFERENCES` for more details on how to set your API key permanently.
+
+"""
+struct MistralOpenAISchema <: AbstractOpenAISchema end
+
abstract type AbstractChatMLSchema <: AbstractPromptSchema end """ ChatMLSchema is used by many open-source chatbots, by OpenAI models (under the hood) and by several models and inferfaces (eg, Ollama, vLLM) diff --git a/src/llm_openai.jl b/src/llm_openai.jl index b89542d00..f9aaa9538 100644 --- a/src/llm_openai.jl +++ b/src/llm_openai.jl @@ -56,6 +56,151 @@ function render(schema::AbstractOpenAISchema, return conversation end
+## OpenAI.jl back-end
+## Types
+# "Providers" are a way to use other APIs that are compatible with OpenAI API specs, eg, Azure and many more
+# Define our sub-type to distinguish it from other OpenAI.jl providers
+abstract type AbstractCustomProvider <: OpenAI.AbstractOpenAIProvider end
+Base.@kwdef struct CustomProvider <: AbstractCustomProvider
+    api_key::String = ""
+    base_url::String = "http://localhost:8080"
+    api_version::String = ""
+end
+function OpenAI.build_url(provider::AbstractCustomProvider, api::AbstractString)
+    string(provider.base_url, "/", api)
+end
+function OpenAI.auth_header(provider::AbstractCustomProvider, api_key::AbstractString)
+    OpenAI.auth_header(OpenAI.OpenAIProvider(provider.api_key,
+            provider.base_url,
+            provider.api_version),
+        api_key)
+end
+## Extend OpenAI create_chat to allow for testing/debugging
+# Default passthrough
+function OpenAI.create_chat(schema::AbstractOpenAISchema,
+        api_key::AbstractString,
+        model::AbstractString,
+        conversation;
+        kwargs...)
+    OpenAI.create_chat(api_key, model, conversation; kwargs...)
+end
+
+# Overload for testing/debugging
+function OpenAI.create_chat(schema::TestEchoOpenAISchema, api_key::AbstractString,
+        model::AbstractString,
+        conversation; kwargs...)
+    schema.model_id = model
+    schema.inputs = conversation
+    return schema
+end
+
+"""
+    OpenAI.create_chat(schema::CustomOpenAISchema,
+        api_key::AbstractString,
+        model::AbstractString,
+        conversation;
+        url::String="http://localhost:8080",
+        kwargs...)
+
+Dispatches to the OpenAI.create_chat function for any OpenAI-compatible API.
+
+It expects a `url` keyword argument. Provide it to the `aigenerate` function via `api_kwargs=(; url="my-url")`.
+
+It will forward your query to the "chat/completions" endpoint of the base URL that you provided (=`url`).
+"""
+function OpenAI.create_chat(schema::CustomOpenAISchema,
+        api_key::AbstractString,
+        model::AbstractString,
+        conversation;
+        url::String = "http://localhost:8080",
+        kwargs...)
+    # Build the corresponding provider object
+    # Create chat will automatically pass our data to endpoint `/chat/completions`
+    provider = CustomProvider(; api_key, base_url = url)
+    OpenAI.create_chat(provider, model, conversation; kwargs...)
+end
+
+"""
+    OpenAI.create_chat(schema::MistralOpenAISchema,
+        api_key::AbstractString,
+        model::AbstractString,
+        conversation;
+        url::String="https://api.mistral.ai/v1",
+        kwargs...)
+
+Dispatches to the OpenAI.create_chat function, but with the MistralAI API parameters.
+
+It tries to access the `MISTRALAI_API_KEY` ENV variable, but you can also provide it via the `api_key` keyword argument.
+"""
+function OpenAI.create_chat(schema::MistralOpenAISchema,
+        api_key::AbstractString,
+        model::AbstractString,
+        conversation;
+        url::String = "https://api.mistral.ai/v1",
+        kwargs...)
+    # Build the corresponding provider object
+    # try to override provided api_key because the default is OpenAI key
+    provider = CustomProvider(;
+        api_key = isempty(MISTRALAI_API_KEY) ? api_key : MISTRALAI_API_KEY,
+        base_url = url)
+    OpenAI.create_chat(provider, model, conversation; kwargs...)
+end
+
+# Extend OpenAI create_embeddings to allow for testing
+function OpenAI.create_embeddings(schema::AbstractOpenAISchema,
+        api_key::AbstractString,
+        docs,
+        model::AbstractString;
+        kwargs...)
+    OpenAI.create_embeddings(api_key, docs, model; kwargs...)
+end
+function OpenAI.create_embeddings(schema::TestEchoOpenAISchema, api_key::AbstractString,
+        docs,
+        model::AbstractString; kwargs...)
+    schema.model_id = model
+    schema.inputs = docs
+    return schema
+end
+function OpenAI.create_embeddings(schema::CustomOpenAISchema,
+        api_key::AbstractString,
+        docs,
+        model::AbstractString;
+        url::String = "http://localhost:8080",
+        kwargs...)
+    # Build the corresponding provider object
+    # Create embeddings will automatically pass our data to endpoint `/embeddings`
+    provider = CustomProvider(; api_key, base_url = url)
+    OpenAI.create_embeddings(provider, docs, model; kwargs...)
+end
+function OpenAI.create_embeddings(schema::MistralOpenAISchema,
+        api_key::AbstractString,
+        docs,
+        model::AbstractString;
+        url::String = "https://api.mistral.ai/v1",
+        kwargs...)
+    # Build the corresponding provider object
+    # try to override provided api_key because the default is OpenAI key
+    provider = CustomProvider(;
+        api_key = isempty(MISTRALAI_API_KEY) ? api_key : MISTRALAI_API_KEY,
+        base_url = url)
+    OpenAI.create_embeddings(provider, docs, model; kwargs...)
+end
+
+## Temporary fix -- it will be moved upstream
+function OpenAI.create_embeddings(provider::AbstractCustomProvider,
+        input,
+        model_id::String = OpenAI.DEFAULT_EMBEDDING_MODEL_ID;
+        http_kwargs::NamedTuple = NamedTuple(),
+        kwargs...)
+    return OpenAI.openai_request("embeddings",
+        provider;
+        method = "POST",
+        http_kwargs = http_kwargs,
+        model = model_id,
+        input,
+        kwargs...)
+end
+
## User-Facing API """ aigenerate(prompt_schema::AbstractOpenAISchema, prompt::ALLOWED_PROMPT_TYPE; @@ -170,21 +315,6 @@ function aigenerate(prompt_schema::AbstractOpenAISchema, prompt::ALLOWED_PROMPT_ return output end
-# Extend OpenAI create_chat to allow for testing/debugging
-function OpenAI.create_chat(schema::AbstractOpenAISchema,
-        api_key::AbstractString,
-        model::AbstractString,
-        conversation;
-        kwargs...)
-    OpenAI.create_chat(api_key, model, conversation; kwargs...)
-end -function OpenAI.create_chat(schema::TestEchoOpenAISchema, api_key::AbstractString, - model::AbstractString, - conversation; kwargs...) - schema.model_id = model - schema.inputs = conversation - return schema -end """ aiembed(prompt_schema::AbstractOpenAISchema, @@ -268,21 +398,6 @@ function aiembed(prompt_schema::AbstractOpenAISchema, return msg end -# Extend OpenAI create_embeddings to allow for testing -function OpenAI.create_embeddings(schema::AbstractOpenAISchema, - api_key::AbstractString, - docs, - model::AbstractString; - kwargs...) - OpenAI.create_embeddings(api_key, docs, model; kwargs...) -end -function OpenAI.create_embeddings(schema::TestEchoOpenAISchema, api_key::AbstractString, - docs, - model::AbstractString; kwargs...) - schema.model_id = model - schema.inputs = docs - return schema -end """ aiclassify(prompt_schema::AbstractOpenAISchema, prompt::ALLOWED_PROMPT_TYPE; diff --git a/src/user_preferences.jl b/src/user_preferences.jl index ecf136221..2c33bf563 100644 --- a/src/user_preferences.jl +++ b/src/user_preferences.jl @@ -12,6 +12,7 @@ Check your preferences by calling `get_preferences(key::String)`. # Available Preferences (for `set_preferences!`) - `OPENAI_API_KEY`: The API key for the OpenAI API. See [OpenAI's documentation](https://platform.openai.com/docs/quickstart?context=python) for more information. +- `MISTRALAI_API_KEY`: The API key for the Mistral AI API. See [Mistral AI's documentation](https://docs.mistral.ai/) for more information. - `MODEL_CHAT`: The default model to use for aigenerate and most ai* calls. See `MODEL_REGISTRY` for a list of available models or define your own. - `MODEL_EMBEDDING`: The default model to use for aiembed (embedding documents). See `MODEL_REGISTRY` for a list of available models or define your own. - `PROMPT_SCHEMA`: The default prompt schema to use for aigenerate and most ai* calls (if not specified in `MODEL_REGISTRY`). Set as a string, eg, `"OpenAISchema"`. 
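+To make the new `MISTRALAI_API_KEY` preference stick across sessions, the package's own helpers documented above can be used; a minimal sketch (the key value is a placeholder):
+
+```julia
+using PromptingTools
+
+# Persist the key via Preferences.jl; this takes priority over the
+# MISTRALAI_API_KEY ENV variable on subsequent loads
+PromptingTools.set_preferences!("MISTRALAI_API_KEY" => "<your-mistral-key>")
+
+# Inspect what is currently configured
+PromptingTools.get_preferences("MISTRALAI_API_KEY")
+```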
@@ -24,6 +25,7 @@ Define your `register_model!()` calls in your `startup.jl` file to make them ava # Available ENV Variables - `OPENAI_API_KEY`: The API key for the OpenAI API. +- `MISTRALAI_API_KEY`: The API key for the Mistral AI API. Preferences.jl takes priority over ENV variables, so if you set a preference, it will override the ENV variable. @@ -47,6 +49,7 @@ PromptingTools.set_preferences!("OPENAI_API_KEY" => "key1", "MODEL_CHAT" => "cha """ function set_preferences!(pairs::Pair{String, <:Any}...) allowed_preferences = [ + "MISTRALAI_API_KEY", "OPENAI_API_KEY", "MODEL_CHAT", "MODEL_EMBEDDING", @@ -79,6 +82,7 @@ PromptingTools.get_preferences("MODEL_CHAT") """ function get_preferences(key::String) allowed_preferences = [ + "MISTRALAI_API_KEY", "OPENAI_API_KEY", "MODEL_CHAT", "MODEL_EMBEDDING", @@ -98,11 +102,14 @@ const MODEL_EMBEDDING::String = @load_preference("MODEL_EMBEDDING", # First, load from preferences, then from environment variables const OPENAI_API_KEY::String = @load_preference("OPENAI_API_KEY", - default=get(ENV, "OPENAI_API_KEY", "")) + default=get(ENV, "OPENAI_API_KEY", "")); # Note: Disable this warning by setting OPENAI_API_KEY to anything isempty(OPENAI_API_KEY) && @warn "OPENAI_API_KEY variable not set! OpenAI models will not be available - set API key directly via `PromptingTools.OPENAI_API_KEY=`!" +const MISTRALAI_API_KEY::String = @load_preference("MISTRALAI_API_KEY", + default=get(ENV, "MISTRALAI_API_KEY", "")); + ## Model registry # A dictionary of model names and their specs (ie, name, costs per token, etc.) 
# Model specs are saved in ModelSpec struct (see below) @@ -261,7 +268,27 @@ registry = Dict{String, ModelSpec}("gpt-3.5-turbo" => ModelSpec("gpt-3.5-turbo", OllamaManagedSchema(), 0.0, 0.0, - "Yi is a 34B parameter model finetuned by X on top of base model from Starling AI.")) + "Yi is a 34B parameter model finetuned by X on top of base model from Starling AI."), + "mistral-tiny" => ModelSpec("mistral-tiny", + MistralOpenAISchema(), + 1.4e-7, + 4.53e-7, + "Mistral AI's hosted version of Mistral-7B-v0.2. Great for simple tasks."), + "mistral-small" => ModelSpec("mistral-small", + MistralOpenAISchema(), + 6.47e-7, + 1.94e-6, + "Mistral AI's hosted version of Mixtral-8x7B-v0.1. Good for more complicated tasks."), + "mistral-medium" => ModelSpec("mistral-medium", + MistralOpenAISchema(), + 2.7e-6, + 8.09e-6, + "Mistral AI's hosted version of their best model available. Details unknown."), + "mistral-embed" => ModelSpec("mistral-embed", + MistralOpenAISchema(), + 1.08e-7, + 0.0, + "Mistral AI's hosted model for embeddings.")) ### Model Registry Structure @kwdef mutable struct ModelRegistry diff --git a/test/llm_openai.jl b/test/llm_openai.jl index 7ac921ad7..95364b847 100644 --- a/test/llm_openai.jl +++ b/test/llm_openai.jl @@ -1,6 +1,7 @@ using PromptingTools: TestEchoOpenAISchema, render, OpenAISchema using PromptingTools: AIMessage, SystemMessage, AbstractMessage using PromptingTools: UserMessage, UserMessageWithImages, DataMessage +using PromptingTools: CustomProvider, CustomOpenAISchema, MistralOpenAISchema @testset "render-OpenAI" begin schema = OpenAISchema() @@ -169,6 +170,64 @@ using PromptingTools: UserMessage, UserMessageWithImages, DataMessage nothing end +@testset "OpenAI.build_url,OpenAI.auth_header" begin + provider = CustomProvider(; base_url = "http://localhost:8082", api_version = "xyz") + @test OpenAI.build_url(provider, "endpoint1") == "http://localhost:8082/endpoint1" + @test OpenAI.auth_header(provider, "ABC") == + ["Authorization" => "Bearer 
ABC", "Content-Type" => "application/json"] +end + +@testset "OpenAI.create_chat" begin + # Test CustomOpenAISchema() with a mock server + PORT = rand(1000:2000) + echo_server = HTTP.serve!(PORT) do req + content = JSON3.read(req.body) + user_msg = last(content[:messages]) + response = Dict(:choices => [Dict(:message => user_msg)], + :model => content[:model], + :usage => Dict(:total_tokens => length(user_msg[:content]), + :prompt_tokens => length(user_msg[:content]), + :completion_tokens => 0)) + return HTTP.Response(200, JSON3.write(response)) + end + + prompt = "Say Hi!" + msg = aigenerate(CustomOpenAISchema(), + prompt; + model = "my_model", + api_kwargs = (; url = "http://localhost:$(PORT)"), + return_all = false) + @test msg.content == prompt + @test msg.tokens == (length(prompt), 0) + + # clean up + close(echo_server) +end +@testset "OpenAI.create_embeddings" begin + # Test CustomOpenAISchema() with a mock server + PORT = rand(1000:2000) + echo_server = HTTP.serve!(PORT) do req + content = JSON3.read(req.body) + response = Dict(:data => [Dict(:embedding => ones(128))], + :usage => Dict(:total_tokens => length(content[:input]), + :prompt_tokens => length(content[:input]), + :completion_tokens => 0)) + return HTTP.Response(200, JSON3.write(response)) + end + + prompt = "Embed me!!" 
+ msg = aiembed(CustomOpenAISchema(), + prompt; + model = "my_model", + api_kwargs = (; url = "http://localhost:$(PORT)"), + return_all = false) + @test msg.content == ones(128) + @test msg.tokens == (length(prompt), 0) + + # clean up + close(echo_server) +end + @testset "aigenerate-OpenAI" begin # corresponds to OpenAI API v1 response = Dict(:choices => [Dict(:message => Dict(:content => "Hello!"))], @@ -176,7 +235,7 @@ end # Test the monkey patch schema = TestEchoOpenAISchema(; response, status = 200) - msg = PT.OpenAI.create_chat(schema, "", "", "Hello") + msg = OpenAI.create_chat(schema, "", "", "Hello") @test msg isa TestEchoOpenAISchema # Real generation API diff --git a/test/runtests.jl b/test/runtests.jl index cc7a6f672..34b73012a 100644 --- a/test/runtests.jl +++ b/test/runtests.jl @@ -1,5 +1,5 @@ using PromptingTools -using JSON3 +using OpenAI, HTTP, JSON3 using Test using Aqua const PT = PromptingTools
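
For reference, the mock-server pattern used in these tests can be reduced to a standalone sketch (assumes the HTTP.jl and JSON3.jl packages are installed; the port is arbitrary and the payload shape mirrors an OpenAI chat response):

```julia
using HTTP, JSON3

port = 8099  # arbitrary free port

# Minimal OpenAI-compatible echo server: returns the last user message
# back as the assistant's choice, with token counts in the usage block
server = HTTP.serve!(port) do req
    body = JSON3.read(req.body)
    user_msg = last(body[:messages])
    response = Dict(:choices => [Dict(:message => user_msg)],
        :model => body[:model],
        :usage => Dict(:total_tokens => length(user_msg[:content]),
            :prompt_tokens => length(user_msg[:content]),
            :completion_tokens => 0))
    return HTTP.Response(200, JSON3.write(response))
end

# Any CustomOpenAISchema-style client pointed at http://localhost:8099
# will now receive an echo of its own prompt; close the server when done
close(server)
```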