From 21fc26dab0fb420bc351cac6ca6f61f2f2784c98 Mon Sep 17 00:00:00 2001 From: "Documenter.jl" Date: Sun, 26 Nov 2023 11:11:43 +0000 Subject: [PATCH] build based on af49075 --- dev/.documenter-siteinfo.json | 2 +- dev/examples/readme_examples/index.html | 2 +- .../working_with_aitemplates/index.html | 2 +- dev/examples/working_with_ollama/index.html | 2 +- dev/frequently_asked_questions/index.html | 2 +- dev/getting_started/index.html | 2 +- dev/index.html | 2 +- dev/reference/index.html | 36 +++++++++---------- 8 files changed, 25 insertions(+), 25 deletions(-) diff --git a/dev/.documenter-siteinfo.json b/dev/.documenter-siteinfo.json index b8d0546d3..da341a280 100644 --- a/dev/.documenter-siteinfo.json +++ b/dev/.documenter-siteinfo.json @@ -1 +1 @@ -{"documenter":{"julia_version":"1.9.4","generation_timestamp":"2023-11-26T10:51:56","documenter_version":"1.1.2"}} \ No newline at end of file +{"documenter":{"julia_version":"1.9.4","generation_timestamp":"2023-11-26T11:11:41","documenter_version":"1.1.2"}} \ No newline at end of file diff --git a/dev/examples/readme_examples/index.html b/dev/examples/readme_examples/index.html index 60d437446..741c9a0ba 100644 --- a/dev/examples/readme_examples/index.html +++ b/dev/examples/readme_examples/index.html @@ -83,4 +83,4 @@ msg.content # 4096-element JSON3.Array{Float64... msg = aiembed(schema, ["Embed me", "Embed me"]; model="openhermes2.5-mistral") -msg.content # 4096×2 Matrix{Float64}:

If you're getting errors, check that Ollama is running - see the Setup Guide for Ollama section below.

diff --git a/dev/examples/working_with_aitemplates/index.html b/dev/examples/working_with_aitemplates/index.html index e1e927394..02d481b80 100644 --- a/dev/examples/working_with_aitemplates/index.html +++ b/dev/examples/working_with_aitemplates/index.html @@ -31,4 +31,4 @@ PT.save_template(filename, tpl; description = "For asking data analysis questions in Julia language. Placeholders: `ask`") -rm(filename) # cleanup if we don't like it

When you create a new template, remember to re-load the templates with load_templates!() so that your new template is available for use.

PT.load_templates!();

Note: If you have some good templates (or suggestions for the existing ones), please consider sharing them with the community by opening a PR to the templates directory!


This page was generated using Literate.jl.

diff --git a/dev/examples/working_with_ollama/index.html b/dev/examples/working_with_ollama/index.html index 7e0e2a074..0e254c677 100644 --- a/dev/examples/working_with_ollama/index.html +++ b/dev/examples/working_with_ollama/index.html @@ -4115,4 +4115,4 @@ LinearAlgebra.normalize; model = "openhermes2.5-mistral")
DataMessage(Matrix{Float64} of size (4096, 2))

Cosine similarity is then a simple multiplication

msg.content' * msg.content[:, 1]
2-element Vector{Float64}:
  0.9999999999999946
- 0.34130017815042357

This page was generated using Literate.jl.

diff --git a/dev/frequently_asked_questions/index.html b/dev/frequently_asked_questions/index.html index 4155ecd3b..cab0348bc 100644 --- a/dev/frequently_asked_questions/index.html +++ b/dev/frequently_asked_questions/index.html @@ -1,3 +1,3 @@ F.A.Q. · PromptingTools.jl

Frequently Asked Questions

Why OpenAI

OpenAI's models are at the forefront of AI research and provide robust, state-of-the-art capabilities for many tasks.

There will be situations when you cannot or do not want to use it (eg, privacy, cost, etc.). In that case, you can use local models (eg, Ollama) or other APIs (eg, Anthropic).

Note: To get started with Ollama.ai, see the Setup Guide for Ollama section below.

Data Privacy and OpenAI

At the time of writing, OpenAI does NOT use the API calls for training their models.

API

OpenAI does not use data submitted to and generated by our API to train OpenAI models or improve OpenAI’s service offering. In order to support the continuous improvement of our models, you can fill out this form to opt-in to share your data with us. – How your data is used to improve our models

You can always double-check the latest information on OpenAI's How we use your data page.

Resources:

Creating OpenAI API Key

You can get your API key from OpenAI by signing up for an account and accessing the API section of the OpenAI website.

  1. Create an account with OpenAI
  2. Go to API Key page
  3. Click on “Create new secret key”

Note: Do not share it with anyone and do NOT save it to any files that get synced online.

Resources:

Pro tip: Always set the spending limits!

Setting OpenAI Spending Limits

OpenAI allows you to set spending limits directly on your account dashboard to prevent unexpected costs.

  1. Go to OpenAI Billing
  2. Set Soft Limit (you’ll receive a notification) and Hard Limit (the API will stop working so you don't spend more money)

A good start might be a soft limit of c.$5 and a hard limit of c.$10 - you can always increase it later in the month.

Resources:

How much does it cost? Is it worth paying for?

If you use a local model (eg, with Ollama), it's free. If you use any commercial APIs (eg, OpenAI), you will likely pay per "token" (a sub-word unit).

For example, a simple request with a simple question and a 1-sentence response in return (”Is statement XYZ a positive comment”) will cost you ~$0.0001 (ie, one-hundredth of a cent).

Is it worth paying for?

GenAI is a way to buy time! You can pay cents to save tens of minutes every day.

Continuing the example above, imagine you have a table with 200 comments. Now, you can parse each one of them with an LLM for the features/checks you need. Assuming the price per call was $0.0001, you'd pay 2 cents for the job and save 30-60 minutes of your time!
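
As a rough sketch of such a batch job (the comments, prompt wording, and variable names below are purely illustrative, not from the package):

comments = ["I love this package!", "The docs could be better."] # imagine 200 of these
msgs = asyncmap(comments) do c
    aigenerate("Is the following comment positive? Answer only yes or no. Comment: {{comment}}"; comment = c)
end
answers = [m.content for m in msgs] # one short answer per comment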

Resources:

Configuring the Environment Variable for API Key

To use the OpenAI API with PromptingTools.jl, set your API key as an environment variable:

ENV["OPENAI_API_KEY"] = "your-api-key"

As a one-off, you can:

  • set it in the terminal before launching Julia: export OPENAI_API_KEY=<your key>
  • set it in your setup.jl (make sure not to commit it to GitHub!)

Make sure to start Julia from the same terminal window where you set the variable. For an easy check in Julia, run ENV["OPENAI_API_KEY"] and you should see your key!
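
For example, a minimal sanity check (plain Base Julia, nothing package-specific):

haskey(ENV, "OPENAI_API_KEY") && !isempty(ENV["OPENAI_API_KEY"]) # should return true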

A better way:

  • On a Mac, add the configuration line to your terminal's configuration file (eg, ~/.zshrc). It will get automatically loaded every time you launch the terminal
  • On Windows, set it as a system variable in "Environment Variables" settings (see the Resources)

Resources:

Note: In the future, we hope to add a Preferences.jl-based workflow to set the API key and other preferences.

Understanding the API Keyword Arguments in aigenerate (api_kwargs)

See OpenAI API reference for more information.

Instant Access from Anywhere

For easy access from anywhere, add PromptingTools into your startup.jl (can be found in ~/.julia/config/startup.jl).

Add the following snippet:

using PromptingTools
-const PT = PromptingTools # to access unexported functions and types

Now, you can just use ai"Help me do X to achieve Y" from any REPL session!

Open Source Alternatives

The ethos of PromptingTools.jl is to allow you to use whatever model you want, which includes Open Source LLMs. The most popular and easiest to set up is Ollama.ai - see below for more information.

Setup Guide for Ollama

Ollama runs a background service hosting LLMs that you can access via a simple API. It's especially useful when you're working with some sensitive data that should not be sent anywhere.

Installation is very easy: just download the latest version here.

Once you've installed it, just launch the app and you're ready to go!

To check if it's running, go to your browser and open 127.0.0.1:11434. You should see the message "Ollama is running". Alternatively, you can run ollama serve in your terminal; if the service is already running, you'll get a message saying so.
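
If you prefer to check from Julia, a minimal sketch (assumes you have the HTTP.jl package installed; it simply queries the local endpoint mentioned above):

using HTTP
resp = HTTP.get("http://127.0.0.1:11434")
String(resp.body) # "Ollama is running"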

There are many models available in Ollama Library, including Llama2, CodeLlama, SQLCoder, or my personal favorite openhermes2.5-mistral.

Download new models with ollama pull <model_name> (eg, ollama pull openhermes2.5-mistral).

Show currently available models with ollama list.

See Ollama.ai for more information.

diff --git a/dev/getting_started/index.html b/dev/getting_started/index.html index 3b6fca0c6..448d82f36 100644 --- a/dev/getting_started/index.html +++ b/dev/getting_started/index.html @@ -4,4 +4,4 @@ AIMessage("The capital of France is Paris.")

The returned object is a light wrapper, with the generated message in the field :content (eg, ans.content) for additional downstream processing.

You can easily inject any variables with string interpolation:

country = "Spain"
 ai"What is the capital of \$(country)?"
[ Info: Tokens: 32 @ Cost: $0.0001 in 0.5 seconds
 AIMessage("The capital of Spain is Madrid.")

Pro tip: Use after-string-flags to select the model to be called, eg, ai"What is the capital of France?"gpt4 (use gpt4t for the new GPT-4 Turbo model). Great for those extra hard questions!

Using aigenerate with placeholders

For more complex prompt templates, you can use handlebars-style templating and provide variables as keyword arguments:

msg = aigenerate("What is the capital of {{country}}? Is the population larger than {{population}}?", country="Spain", population="1M")
[ Info: Tokens: 74 @ Cost: $0.0001 in 1.3 seconds
-AIMessage("The capital of Spain is Madrid. And yes, the population of Madrid is larger than 1 million. As of 2020, the estimated population of Madrid is around 3.3 million people.")

Pro tip: Use asyncmap to run multiple AI-powered tasks concurrently.

Pro tip: If you use slow models (like GPT-4), you can use the async version of @ai_str, @aai_str, to avoid blocking the REPL, eg, aai"Say hi but slowly!"gpt4

For more practical examples, see the Various Examples section.

+AIMessage("The capital of Spain is Madrid. And yes, the population of Madrid is larger than 1 million. As of 2020, the estimated population of Madrid is around 3.3 million people.")

Pro tip: Use asyncmap to run multiple AI-powered tasks concurrently.

Pro tip: If you use slow models (like GPT-4), you can use async version of @ai_str -> @aai_str to avoid blocking the REPL, eg, aai"Say hi but slowly!"gpt4

For more practical examples, see the Various Examples section.

diff --git a/dev/index.html b/dev/index.html index 504d49085..a9328c8f0 100644 --- a/dev/index.html +++ b/dev/index.html @@ -1,2 +1,2 @@ -Home · PromptingTools.jl

PromptingTools

Documentation for PromptingTools.

Streamline your life using PromptingTools.jl, the Julia package that simplifies interacting with large language models.

PromptingTools.jl is not meant for building large-scale systems. It's meant to be the go-to tool in your global environment that will save you 20 minutes every day!

Why PromptingTools.jl?

Prompt engineering is neither fast nor easy. Moreover, different models and their fine-tunes might require different prompt formats and tricks, or perhaps the information you work with requires special models to be used. PromptingTools.jl is meant to unify the prompts for different backends and make the common tasks (like templated prompts) as simple as possible.

Some features:

  • aigenerate Function: Simplify prompt templates with handlebars (eg, {{variable}}) and keyword arguments
  • @ai_str String Macro: Save keystrokes with a string macro for simple prompts
  • Easy to Remember: All exported functions start with ai... for better discoverability
  • Light Wrapper Types: Benefit from Julia's multiple dispatch by having AI outputs wrapped in specific types
  • Minimal Dependencies: Enjoy an easy addition to your global environment with very light dependencies
  • No Context Switching: Access cutting-edge LLMs with no context switching and minimum extra keystrokes directly in your REPL

First Steps

To get started, see the Getting Started section.

diff --git a/dev/reference/index.html b/dev/reference/index.html index 311e1a344..16db785e7 100644 --- a/dev/reference/index.html +++ b/dev/reference/index.html @@ -1,5 +1,5 @@ -Reference · PromptingTools.jl

Reference

PromptingTools.AITemplateType
AITemplate

AITemplate is a template for a conversation prompt. This type is merely a container for the template name, which is resolved into a set of messages (=prompt) by render.

Naming Convention

  • Template names should be in CamelCase
  • Follow the format <Persona>...<Variable>... where possible, eg, JudgeIsItTrue
    • Starting with the Persona (=System prompt), eg, Judge = persona is meant to judge some provided information
    • Variable to be filled in with context, eg, It = placeholder it
    • Ending with the variable name is helpful, eg, JuliaExpertTask for a persona to be an expert in Julia language and task is the placeholder name
  • Ideally, the template name should be self-explanatory, eg, JudgeIsItTrue = persona is meant to judge some provided information whether it is true or false

Examples

Save time by re-using pre-made templates, just fill in the placeholders with the keyword arguments:

msg = aigenerate(:JuliaExpertAsk; ask = "How do I add packages?")

The above is equivalent to a more verbose version that explicitly uses the dispatch on AITemplate:

msg = aigenerate(AITemplate(:JuliaExpertAsk); ask = "How do I add packages?")

Find available templates with aitemplates:

tmps = aitemplates("JuliaExpertAsk")
 # Will surface one specific template
 # 1-element Vector{AITemplateMetadata}:
 # PromptingTools.AITemplateMetadata
@@ -14,18 +14,18 @@
 {{ask}}"
 #   source: String ""

The above gives you a good idea of what the template is about, what placeholders are available, and how much it would cost to use it (=wordcount).

Search for all Julia-related templates:

tmps = aitemplates("Julia")
 # 2-element Vector{AITemplateMetadata}... -> more to come later!

If you are on VSCode, you can leverage nice tabular display with vscodedisplay:

using DataFrames
-tmps = aitemplates("Julia") |> DataFrame |> vscodedisplay

I have my selected template, how do I use it? Just use the "name" in aigenerate or aiclassify like you see in the first example!

You can inspect any template by "rendering" it (this is what the LLM will see):

julia> AITemplate(:JudgeIsItTrue) |> PromptingTools.render

See also: save_template, load_template, load_templates! for more advanced use cases (and the corresponding script in examples/ folder)

source
PromptingTools.ChatMLSchemaType

ChatMLSchema is used by many open-source chatbots, by OpenAI models (under the hood) and by several models and interfaces (eg, Ollama, vLLM)

You can explore it on tiktokenizer

It uses the following conversation structure:

<im_start>system
+tmps = aitemplates("Julia") |> DataFrame |> vscodedisplay

I have my selected template, how do I use it? Just use the "name" in aigenerate or aiclassify like you see in the first example!

You can inspect any template by "rendering" it (this is what the LLM will see):

julia> AITemplate(:JudgeIsItTrue) |> PromptingTools.render

See also: save_template, load_template, load_templates! for more advanced use cases (and the corresponding script in examples/ folder)

source
PromptingTools.ChatMLSchemaType

ChatMLSchema is used by many open-source chatbots, by OpenAI models (under the hood) and by several models and inferfaces (eg, Ollama, vLLM)

You can explore it on tiktokenizer

It uses the following conversation structure:

<im_start>system
 ...<im_end>
 <|im_start|>user
 ...<|im_end|>
 <|im_start|>assistant
-...<|im_end|>
source
PromptingTools.MaybeExtractType

Extract a result from the provided data, if any, otherwise set the error and message fields.

Arguments

  • error::Bool: true if no result could be extracted, false otherwise.
  • message::String: Only present if no result is found, should be short and concise.
source
PromptingTools.OllamaManagedSchemaType

Ollama by default manages different models and their associated prompt schemas when you pass system_prompt and prompt fields to the API.

Warning: It works only for 1 system message and 1 user message, so anything beyond that will be rejected.

If you need to pass more messages / a longer conversational history, you can define the model-specific schema directly and pass your Ollama requests with raw=true, which disables all templating and schema management by Ollama.

source
PromptingTools.OpenAISchemaType

OpenAISchema is the default schema for OpenAI models.

It uses the following conversation template:

[Dict(role="system",content="..."),Dict(role="user",content="..."),Dict(role="assistant",content="...")]

It's recommended to separate sections in your prompt with markdown headers (e.g. `## Answer`).

source
PromptingTools.aiclassifyMethod
aiclassify(prompt_schema::AbstractOpenAISchema, prompt::ALLOWED_PROMPT_TYPE;
 api_kwargs::NamedTuple = (logit_bias = Dict(837 => 100, 905 => 100, 9987 => 100),
     max_tokens = 1, temperature = 0),
 kwargs...)

Classifies the given prompt/statement as true/false/unknown.

Note: This is a very simple classifier; it is not meant to be used in production. Credit goes to AAAzzam.

It uses the logit bias trick and limits the output to 1 token to force the model to output only true/false/unknown; a sketch of the underlying call is shown after the token list below.

Output tokens used (via api_kwargs):

  • 837: ' true'
  • 905: ' false'
  • 9987: ' unknown'
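
For illustration only, roughly what aiclassify does under the hood via a plain aigenerate call (you would normally just call aiclassify; the prompt wording here is an assumption, only the api_kwargs mirror the defaults above):

statement = "Two plus two is four."
msg = aigenerate("Is this statement true? Answer only true/false/unknown: {{it}}"; it = statement,
    api_kwargs = (; logit_bias = Dict(837 => 100, 905 => 100, 9987 => 100), max_tokens = 1, temperature = 0))
msg.content # expected to be " true" (one of the boosted tokens)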

Arguments

  • prompt_schema::AbstractOpenAISchema: The schema for the prompt.
  • prompt: The prompt/statement to classify if it's a String. If it's a Symbol, it is expanded as a template via render(schema,template).

Example

aiclassify("Is two plus two four?") # true
 aiclassify("Is two plus three a vegetable on Mars?") # false

aiclassify returns only true/false/unknown. It's easy to get the proper Bool output type out with tryparse, eg,

tryparse(Bool, aiclassify("Is two plus two four?")) isa Bool # true

Output of type Nothing marks that the model couldn't classify the statement as true/false.

Ideally, we would like to re-use some helpful system prompt to get more accurate responses. For this reason we have templates, eg, :JudgeIsItTrue. By specifying the template, we can provide our statement as the expected variable (it in this case). See that the model now correctly classifies the statement as "unknown".

aiclassify(:JudgeIsItTrue; it = "Is two plus three a vegetable on Mars?") # unknown

For better results, use higher quality models like gpt4, eg,

aiclassify(:JudgeIsItTrue;
     it = "If I had two apples and I got three more, I have five apples now.",
-    model = "gpt4") # true
source
PromptingTools.aiembedMethod
aiembed(prompt_schema::AbstractOllamaManagedSchema,
         doc_or_docs::Union{AbstractString, Vector{<:AbstractString}},
         postprocess::F = identity;
         verbose::Bool = true,
@@ -54,7 +54,7 @@
 schema = PT.OllamaManagedSchema()
 
 msg = aiembed(schema, "Hello World", copy; model="openhermes2.5-mistral")
-msg.content # 4096-element Vector{Float64}
source
PromptingTools.aiembedMethod
aiembed(prompt_schema::AbstractOpenAISchema,
         doc_or_docs::Union{AbstractString, Vector{<:AbstractString}},
         postprocess::F = identity;
         verbose::Bool = true,
@@ -70,7 +70,7 @@
 msg = aiembed(["embed me", "and me too"], LinearAlgebra.normalize)
 
 # calculate cosine distance between the two normalized embeddings as a simple dot product
-msg.content' * msg.content[:, 1] # [1.0, 0.787]
source
PromptingTools.aiextractMethod
aiextract([prompt_schema::AbstractOpenAISchema,] prompt::ALLOWED_PROMPT_TYPE; 
 return_type::Type,
 verbose::Bool = true,
     model::String = MODEL_CHAT,
@@ -111,7 +111,7 @@
 # If LLM extraction fails, it will return a Dict with `error` and `message` fields instead of the result!
 msg = aiextract("Extract measurements from the text: I am giraffe", type)
 msg.content
-# MaybeExtract{MyMeasurement}(nothing, true, "I'm sorry, but I can only assist with human measurements.")

That way, you can handle the error gracefully and get a reason why extraction failed (in msg.content.message).
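
For example, a minimal sketch of handling both outcomes (field names as in the MaybeExtract type documented above):

if msg.content.error
    @warn "Extraction failed: $(msg.content.message)"
else
    measurement = msg.content.result # a MyMeasurement instance
end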

Note that the error message refers to a giraffe not being a human, because in our MyMeasurement docstring, we said that it's for people!

source
PromptingTools.aigenerateMethod
aigenerate(prompt_schema::AbstractOllamaManagedSchema, prompt::ALLOWED_PROMPT_TYPE; verbose::Bool = true,
     model::String = MODEL_CHAT,
     http_kwargs::NamedTuple = NamedTuple(), api_kwargs::NamedTuple = NamedTuple(),
     kwargs...)

Generate an AI response based on a given prompt using the Ollama API.

Arguments

  • prompt_schema: An optional object to specify which prompt template should be applied (Defaults to PROMPT_SCHEMA = OpenAISchema, not AbstractManagedSchema)
  • prompt: Can be a string representing the prompt for the AI conversation, a UserMessage, a vector of AbstractMessage or an AITemplate
  • verbose: A boolean indicating whether to print additional information.
  • api_key: Provided for interface consistency. Not needed for locally hosted Ollama.
  • model: A string representing the model to use for generating the response. Can be an alias corresponding to a model ID defined in MODEL_ALIASES.
  • http_kwargs::NamedTuple: Additional keyword arguments for the HTTP request. Defaults to empty NamedTuple.
  • api_kwargs::NamedTuple: Additional keyword arguments for the Ollama API. Defaults to an empty NamedTuple.
  • kwargs: Prompt variables to be used to fill the prompt/template

Returns

  • msg: An AIMessage object representing the generated AI message, including the content, status, tokens, and elapsed time.

Use msg.content to access the extracted string.

See also: ai_str, aai_str, aiembed

Example

Simple hello world to test the API:

const PT = PromptingTools
@@ -134,7 +134,7 @@
 
 msg = aigenerate(schema, conversation; model="openhermes2.5-mistral")
 # [ Info: Tokens: 111 in 2.1 seconds
-# AIMessage("Strong the attachment is, it leads to suffering it may. Focus on the force within you must, ...<continues>")

Note: Managed Ollama currently supports at most 1 User Message and 1 System Message given the API limitations. If you want more, you need to use the ChatMLSchema.

source
PromptingTools.aigenerateMethod
aigenerate([prompt_schema::AbstractOpenAISchema,] prompt::ALLOWED_PROMPT_TYPE; verbose::Bool = true,
+# AIMessage("Strong the attachment is, it leads to suffering it may. Focus on the force within you must, ...<continues>")

Note: Managed Ollama currently supports at most 1 User Message and 1 System Message given the API limitations. If you want more, you need to use the ChatMLSchema.

source
PromptingTools.aigenerateMethod
aigenerate([prompt_schema::AbstractOpenAISchema,] prompt::ALLOWED_PROMPT_TYPE; verbose::Bool = true,
     model::String = MODEL_CHAT,
     http_kwargs::NamedTuple = (;
         retry_non_idempotent = true,
@@ -152,7 +152,7 @@
     PT.SystemMessage("You're master Yoda from Star Wars trying to help the user become a Yedi."),
     PT.UserMessage("I have feelings for my iPhone. What should I do?")]
 msg=aigenerate(conversation)
-# AIMessage("Ah, strong feelings you have for your iPhone. A Jedi's path, this is not... <continues>")
source
PromptingTools.aiscanMethod

aiscan([prompt_schema::AbstractOpenAISchema,] prompt::ALLOWED_PROMPT_TYPE; image_url::Union{Nothing, AbstractString, Vector{<:AbstractString}} = nothing, image_path::Union{Nothing, AbstractString, Vector{<:AbstractString}} = nothing, image_detail::AbstractString = "auto", attach_to_latest::Bool = true, verbose::Bool = true, model::String = MODEL_CHAT, http_kwargs::NamedTuple = (; retry_non_idempotent = true, retries = 5, readtimeout = 120), api_kwargs::NamedTuple = (; max_tokens = 2500), kwargs...)

Scans the provided image (image_url or image_path) with the goal provided in the prompt.

Can be used for many multi-modal tasks, such as: OCR (transcribe text in the image), image captioning, image classification, etc.

It's effectively a light wrapper around aigenerate call, which uses additional keyword arguments image_url, image_path, image_detail to be provided. At least one image source (url or path) must be provided.

Arguments

  • prompt_schema: An optional object to specify which prompt template should be applied (Default to PROMPT_SCHEMA = OpenAISchema)
  • prompt: Can be a string representing the prompt for the AI conversation, a UserMessage, a vector of AbstractMessage or an AITemplate
  • image_url: A string or vector of strings representing the URL(s) of the image(s) to scan.
  • image_path: A string or vector of strings representing the path(s) of the image(s) to scan.
  • image_detail: A string representing the level of detail to include for images. Can be "auto", "high", or "low". See OpenAI Vision Guide for more details.
  • attach_to_latest: A boolean for how to handle a conversation that contains multiple UserMessages. When true, the images are attached to the latest UserMessage.
  • verbose: A boolean indicating whether to print additional information.
  • api_key: A string representing the API key for accessing the OpenAI API.
  • model: A string representing the model to use for generating the response. Can be an alias corresponding to a model ID defined in MODEL_ALIASES.
  • http_kwargs: A named tuple of HTTP keyword arguments.
  • api_kwargs: A named tuple of API keyword arguments.
  • kwargs: Prompt variables to be used to fill the prompt/template

Returns

  • msg: An AIMessage object representing the generated AI message, including the content, status, tokens, and elapsed time.

Use msg.content to access the extracted string.

See also: ai_str, aai_str, aigenerate, aiembed, aiclassify, aiextract

Notes

  • All examples below use model "gpt4v", which is an alias for model ID "gpt-4-vision-preview"
  • max_tokens in the api_kwargs is preset to 2500, otherwise OpenAI enforces a default of only a few hundred tokens (~300). If your output is truncated, increase this value

Example

Describe the provided image:

msg = aiscan("Describe the image"; image_path="julia.png", model="gpt4v")
+# AIMessage("Ah, strong feelings you have for your iPhone. A Jedi's path, this is not... <continues>")
source
PromptingTools.aiscanMethod

aiscan([promptschema::AbstractOpenAISchema,] prompt::ALLOWEDPROMPTTYPE; imageurl::Union{Nothing, AbstractString, Vector{<:AbstractString}} = nothing, imagepath::Union{Nothing, AbstractString, Vector{<:AbstractString}} = nothing, imagedetail::AbstractString = "auto", attachtolatest::Bool = true, verbose::Bool = true, model::String = MODELCHAT, httpkwargs::NamedTuple = (; retrynonidempotent = true, retries = 5, readtimeout = 120), apikwargs::NamedTuple = = (; maxtokens = 2500), kwargs...)

Scans the provided image (image_url or image_path) with the goal provided in the prompt.

Can be used for many multi-modal tasks, such as: OCR (transcribe text in the image), image captioning, image classification, etc.

It's effectively a light wrapper around aigenerate call, which uses additional keyword arguments image_url, image_path, image_detail to be provided. At least one image source (url or path) must be provided.

Arguments

  • prompt_schema: An optional object to specify which prompt template should be applied (Default to PROMPT_SCHEMA = OpenAISchema)
  • prompt: Can be a string representing the prompt for the AI conversation, a UserMessage, a vector of AbstractMessage or an AITemplate
  • image_url: A string or vector of strings representing the URL(s) of the image(s) to scan.
  • image_path: A string or vector of strings representing the path(s) of the image(s) to scan.
  • image_detail: A string representing the level of detail to include for images. Can be "auto", "high", or "low". See OpenAI Vision Guide for more details.
  • attach_to_latest: A boolean how to handle if a conversation with multiple UserMessage is provided. When true, the images are attached to the latest UserMessage.
  • verbose: A boolean indicating whether to print additional information.
  • api_key: A string representing the API key for accessing the OpenAI API.
  • model: A string representing the model to use for generating the response. Can be an alias corresponding to a model ID defined in MODEL_ALIASES.
  • http_kwargs: A named tuple of HTTP keyword arguments.
  • api_kwargs: A named tuple of API keyword arguments.
  • kwargs: Prompt variables to be used to fill the prompt/template

Returns

  • msg: An AIMessage object representing the generated AI message, including the content, status, tokens, and elapsed time.

Use msg.content to access the extracted string.

See also: ai_str, aai_str, aigenerate, aiembed, aiclassify, aiextract

Notes

  • All examples below use model "gpt4v", which is an alias for model ID "gpt-4-vision-preview"
  • max_tokens in the api_kwargs is preset to 2500, otherwise OpenAI enforces a default of only a few hundred tokens (~300). If your output is truncated, increase this value

Example

Describe the provided image:

msg = aiscan("Describe the image"; image_path="julia.png", model="gpt4v")
 # [ Info: Tokens: 1141 @ Cost: $0.0117 in 2.2 seconds
 # AIMessage("The image shows a logo consisting of the word "julia" written in lowercase")

You can provide multiple images at once as a vector and ask for "low" level of detail (cheaper):

msg = aiscan("Describe the image"; image_path=["julia.png","python.png"], image_detail="low", model="gpt4v")

You can use this function as a nice and quick OCR (transcribe text in the image) with a template :OCRTask. Let's transcribe some SQL code from a screenshot (no more re-typing!):

# Screenshot of some SQL code
 image_url = "https://www.sqlservercentral.com/wp-content/uploads/legacy/8755f69180b7ac7ee76a69ae68ec36872a116ad4/24622.png"
@@ -164,7 +164,7 @@
 
 # You can add syntax highlighting of the outputs via Markdown
 using Markdown
-msg.content |> Markdown.parse

Notice that we enforce max_tokens = 2500. That's because OpenAI seems to default to ~300 tokens, which provides incomplete outputs. Hence, we set this value to 2500 as a default. If you still get truncated outputs, increase this value.

source
PromptingTools.aitemplatesFunction
aitemplates

Easily find the most suitable templates for your use case.

You can search by:

  • query::Symbol which looks only for partial matches in the template name
  • query::AbstractString which looks for partial matches in the template name or description
  • query::Regex which looks for matches in the template name, description or any of the message previews

Keyword Arguments

  • limit::Int limits the number of returned templates (Defaults to 10)

Examples

Find available templates with aitemplates:

tmps = aitemplates("JuliaExpertAsk")
 # Will surface one specific template
 # 1-element Vector{AITemplateMetadata}:
 # PromptingTools.AITemplateMetadata
@@ -179,7 +179,7 @@
 {{ask}}"
 #   source: String ""

The above gives you a good idea of what the template is about, what placeholders are available, and how much it would cost to use it (=wordcount).

Search for all Julia-related templates:

tmps = aitemplates("Julia")
 # 2-element Vector{AITemplateMetadata}... -> more to come later!

If you are on VSCode, you can leverage nice tabular display with vscodedisplay:

using DataFrames
-tmps = aitemplates("Julia") |> DataFrame |> vscodedisplay

I have my selected template, how do I use it? Just use the "name" in aigenerate or aiclassify like you see in the first example!

source
PromptingTools.aitemplatesMethod

Find the top-limit templates whose name or description fields partially match the query_key::String in TEMPLATE_METADATA.

source
PromptingTools.aitemplatesMethod

Find the top-limit templates where provided query_key::Regex matches either of name, description or previews or User or System messages in TEMPLATE_METADATA.

source
PromptingTools.function_call_signatureMethod
function_call_signature(datastructtype::Struct; max_description_length::Int = 100)

Extract the argument names, types and docstrings from a struct to create the function call signature in JSON schema.

You must provide a Struct type (not an instance of it) with some fields.

Note: Fairly experimental, but works for combinations of structs, arrays, strings and singletons.

Tips

  • You can improve the quality of the extraction by writing a helpful docstring for your struct (or any nested struct). It will be provided as a description.

You can even include comments/descriptions about the individual fields.

  • All fields are assumed to be required, unless you allow null values (eg, ::Union{Nothing, Int}). Fields with Nothing will be treated as optional.
  • Missing values are ignored (eg, ::Union{Missing, Int} will be treated as Int). It's for broader compatibility and we cannot deserialize it as easily as Nothing.

Example

Do you want to extract some specific measurements from a text like age, weight and height? You need to define the information you need as a struct (return_type):

struct MyMeasurement
+tmps = aitemplates("Julia") |> DataFrame |> vscodedisplay

I have my selected template, how do I use it? Just use the "name" in aigenerate or aiclassify like you see in the first example!

source
PromptingTools.aitemplatesMethod

Find the top-limit templates whose name or description fields partially match the query_key::String in TEMPLATE_METADATA.

source
PromptingTools.aitemplatesMethod

Find the top-limit templates where provided query_key::Regex matches either of name, description or previews or User or System messages in TEMPLATE_METADATA.

source
PromptingTools.function_call_signatureMethod
function_call_signature(datastructtype::Struct; max_description_length::Int = 100)

Extract the argument names, types and docstrings from a struct to create the function call signature in JSON schema.

You must provide a Struct type (not an instance of it) with some fields.

Note: Fairly experimental, but works for combination of structs, arrays, strings and singletons.

Tips

  • You can improve the quality of the extraction by writing a helpful docstring for your struct (or any nested struct). It will be provided as a description.

You can even include comments/descriptions about the individual fields.

  • All fields are assumed to be required, unless you allow null values (eg, ::Union{Nothing, Int}). Fields with Nothing will be treated as optional.
  • Missing values are ignored (eg, ::Union{Missing, Int} will be treated as Int). It's for broader compatibility and we cannot deserialize it as easily as Nothing.

Example

Do you want to extract some specific measurements from a text like age, weight and height? You need to define the information you need as a struct (return_type):

struct MyMeasurement
     age::Int
     height::Union{Int,Nothing}
     weight::Union{Nothing,Float64}
@@ -195,25 +195,25 @@
     measurements::Vector{MyMeasurement}
 end
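
For illustration, calling it directly on the struct above might look like this (a minimal sketch; in practice aiextract builds this signature for you when you pass return_type):

signature = PT.function_call_signature(MyMeasurement)
# inspect `signature` for the JSON-schema-style description of MyMeasurement's fields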
 
-Or if you want your extraction to fail gracefully when data isn't found, use `MaybeExtract{T}` wrapper (inspired by Instructor package!):

using PromptingTools: MaybeExtract

type = MaybeExtract{MyMeasurement}

Effectively the same as:

struct MaybeExtract{T}

result::Union{T, Nothing}

error::Bool // true if no result is found, false otherwise

message::Union{Nothing, String} // Only present if no result is found, should be short and concise

end

If LLM extraction fails, it will return a Dict with error and message fields instead of the result!

msg = aiextract("Extract measurements from the text: I am giraffe", type)

Dict{Symbol, Any} with 2 entries:

:message => "Sorry, this feature is only available for humans."

:error => true

That way, you can handle the error gracefully and get a reason why extraction failed.

source
PromptingTools.load_templates!Function
load_templates!(; remove_templates::Bool=true)

Loads templates from folder templates/ in the package root and stores them in TEMPLATE_STORE and TEMPLATE_METADATA.

Note: Automatically removes any existing templates and metadata from TEMPLATE_STORE and TEMPLATE_METADATA if remove_templates=true.

source
PromptingTools.ollama_apiMethod
ollama_api(prompt_schema::AbstractOllamaManagedSchema, prompt::AbstractString,
     system::Union{Nothing, AbstractString} = nothing,
     endpoint::String = "generate";
     model::String = "llama2", http_kwargs::NamedTuple = NamedTuple(),
     stream::Bool = false,
     url::String = "localhost", port::Int = 11434,
-    kwargs...)

Simple wrapper for a call to the Ollama API (see the sketch after the argument list below).

Keyword Arguments

  • prompt_schema: Defines which prompt template should be applied.
  • prompt: Can be a string representing the prompt for the AI conversation, a UserMessage, a vector of AbstractMessage
  • system: An optional string representing the system message for the AI conversation. If not provided, a default message will be used.
  • endpoint: The API endpoint to call, only "generate" and "embeddings" are currently supported. Defaults to "generate".
  • model: A string representing the model to use for generating the response. Can be an alias corresponding to a model ID defined in MODEL_ALIASES.
  • http_kwargs::NamedTuple: Additional keyword arguments for the HTTP request. Defaults to empty NamedTuple.
  • stream: A boolean indicating whether to stream the response. Defaults to false.
  • url: The URL of the Ollama API. Defaults to "localhost".
  • port: The port of the Ollama API. Defaults to 11434.
  • kwargs: Prompt variables to be used to fill the prompt/template
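
A minimal sketch of a direct call following the signature above (you would normally use aigenerate/aiembed instead; the model name is just an example):

schema = PT.OllamaManagedSchema()
resp = PT.ollama_api(schema, "Say hi!"; model = "openhermes2.5-mistral")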
source
PromptingTools.renderMethod
render(schema::AbstractOllamaManagedSchema,
     messages::Vector{<:AbstractMessage};
-    kwargs...)

Builds a history of the conversation to provide the prompt to the API. All unspecified kwargs are passed as replacements such that {{key}}=>value in the template.

Note: Due to its "managed" nature, at most 2 messages can be provided (system and prompt inputs in the API).

source
PromptingTools.renderMethod
render(schema::AbstractOpenAISchema,
     messages::Vector{<:AbstractMessage};
     image_detail::AbstractString = "auto",
-    kwargs...)

Builds a history of the conversation to provide the prompt to the API. All unspecified kwargs are passed as replacements such that {{key}}=>value in the template.

Arguments

  • image_detail: Only for UserMessageWithImages. It represents the level of detail to include for images. Can be "auto", "high", or "low".
source
PromptingTools.replace_wordsMethod
replace_words(text::AbstractString, words::Vector{<:AbstractString}; replacement::AbstractString="ABC")

Replace all occurrences of words in words with replacement in text. Useful to quickly remove specific names or entities from a text.

Arguments

  • text::AbstractString: The text to be processed.
  • words::Vector{<:AbstractString}: A vector of words to be replaced.
  • replacement::AbstractString="ABC": The replacement string to be used. Defaults to "ABC".

Example

text = "Disney is a great company"
 replace_words(text, ["Disney", "Snow White", "Mickey Mouse"])
-# Output: "ABC is a great company"
source
PromptingTools.split_by_lengthMethod
split_by_length(text::String; separator::String=" ", max_length::Int=35000) -> Vector{String}

Split a given string text into chunks of a specified maximum length max_length. This is particularly useful for splitting larger documents or texts into smaller segments, suitable for models or systems with smaller context windows.

Arguments

  • text::String: The text to be split.
  • separator::String=" ": The separator used to split the text into minichunks. Defaults to a space character.
  • max_length::Int=35000: The maximum length of each chunk. Defaults to 35,000 characters, which should fit within 16K context window.

Returns

Vector{String}: A vector of strings, each representing a chunk of the original text that is smaller than or equal to max_length.

Notes

  • The function ensures that each chunk is as close to max_length as possible without exceeding it.
  • If the text is empty, the function returns an empty array.
  • The separator is re-added to the text chunks after splitting, preserving the original structure of the text as closely as possible.

Examples

Splitting text with the default separator (" "):

text = "Hello world. How are you?"
 chunks = split_by_length(text; max_length=13)
 length(chunks) # Output: 2

Using a custom separator and custom max_length

text = "Hello,World," ^ 2900 # length 34900 chars
 chunks = split_by_length(text; separator=",", max_length=10000) # for 4K context window
-length(chunks) # Output: 4
source
PromptingTools.@aai_strMacro
aai"user_prompt"[model_alias] -> AIMessage

Asynchronous version of @ai_str macro, which will log the result once it's ready.

Example

Send an asynchronous request to GPT-4, so we don't have to wait for the response. Very practical with slow models, so you can keep working in the meantime.

m = aai"Say Hi!"gpt4;

...with some delay...

[ Info: Tokens: 29 @ Cost: 0.0011 in 2.7 seconds

[ Info: AIMessage> Hello! How can I assist you today?

source
PromptingTools.@ai_strMacro
ai"user_prompt"[model_alias] -> AIMessage

The ai"" string macro generates an AI response to a given prompt by using aigenerate under the hood.

Arguments

  • user_prompt (String): The input prompt for the AI model.
  • model_alias (optional, any): Provide model alias of the AI model (see MODEL_ALIASES).

Returns

AIMessage corresponding to the input prompt.

Example

result = ai"Hello, how are you?"
 # AIMessage("Hello! I'm an AI assistant, so I don't have feelings, but I'm here to help you. How can I assist you today?")

If you want to interpolate some variables or additional context, simply use string interpolation:

a=1
 result = ai"What is `$a+$a`?"
 # AIMessage("The sum of `1+1` is `2`.")

If you want to use a different model, eg, GPT-4, you can provide its alias as a flag:

result = ai"What is `1.23 * 100 + 1`?"gpt4
-# AIMessage("The answer is 124.")
source
+# AIMessage("The answer is 124.")
source