Releases: jackmpcollins/magentic
v0.7.0
What's Changed
- Add Asyncio section to README by @jackmpcollins in #28
- Add chatprompt decorator by @jackmpcollins in #34
- Make openai tests less flaky by @jackmpcollins in #35
Full Changelog: v0.6.0...v0.7.0
Chat Prompting
The `@chatprompt` decorator works just like `@prompt`, but allows you to pass chat messages as a template rather than a single text prompt. This can be used to provide a system message, or for few-shot prompting where you provide example responses to guide the model's output. Format fields denoted by curly braces `{example}` will be filled in across all messages. Use the `escape_braces` function to prevent a string from being interpreted as a template (see the sketch after the example below).
```python
from magentic import chatprompt, AssistantMessage, SystemMessage, UserMessage
from magentic.chatprompt import escape_braces

from pydantic import BaseModel


class Quote(BaseModel):
    quote: str
    character: str


@chatprompt(
    SystemMessage("You are a movie buff."),
    UserMessage("What is your favorite quote from Harry Potter?"),
    AssistantMessage(
        Quote(
            quote="It does not do to dwell on dreams and forget to live.",
            character="Albus Dumbledore",
        )
    ),
    UserMessage("What is your favorite quote from {movie}?"),
)
def get_movie_quote(movie: str) -> Quote:
    ...


get_movie_quote("Iron Man")
# Quote(quote='I am Iron Man.', character='Tony Stark')
```
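A minimal sketch of using `escape_braces` on text that contains literal braces (assuming it escapes the braces so they survive template formatting; the input string here is hypothetical):

```python
from magentic.chatprompt import escape_braces

# Hypothetical input containing literal braces that must not be
# treated as template fields.
user_input = "Dicts look like {'key': 'value'} in Python."

escaped = escape_braces(user_input)
# The escaped string can now be embedded in a message template safely;
# only the intentional fields (e.g. {movie}) remain format fields.
```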
v0.6.0
What's Changed
- Move function schemas into own file by @jackmpcollins in #18
- Bump certifi from 2023.5.7 to 2023.7.22 by @dependabot in #21
- Bump jupyter-server from 2.7.0 to 2.7.2 by @dependabot in #20
- Bump tornado from 6.3.2 to 6.3.3 by @dependabot in #19
- Add example notebook for Chain of Verification by @jackmpcollins in #22
- Handle Iterable type with no item type by @jackmpcollins in #24
- Handle BaseModel parameters in functions by @jackmpcollins in #23
- Make AsyncIterableFunctionSchema.serialize_args raise NotImplementedError by @jackmpcollins in #25
- Rename chat_model files by @jackmpcollins in #26
New Contributors
- @dependabot made their first contribution in #21
Full Changelog: v0.5.0...v0.6.0
v0.5.0
What's Changed
- Add docstrings where useful by @jackmpcollins in #14
- Enable async prompt_chain. Remove FunctionCallMessage by @jackmpcollins in #17
Full Changelog: v0.4.1...v0.5.0
```python
from magentic import prompt_chain


async def get_current_weather(location, unit="fahrenheit"):
    """Get the current weather in a given location"""
    return {
        "location": location,
        "temperature": "72",
        "unit": unit,
        "forecast": ["sunny", "windy"],
    }


@prompt_chain(
    template="What's the weather like in {city}?",
    functions=[get_current_weather],
)
async def describe_weather(city: str) -> str:
    ...


output = await describe_weather("Boston")
```
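The `await` above assumes an async context such as a notebook. A minimal sketch of calling the `describe_weather` function defined above from a plain script:

```python
import asyncio

# Drive the async prompt-chain function from synchronous code.
print(asyncio.run(describe_weather("Boston")))
```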
v0.4.1
What's Changed
- Add GitHub actions workflow to run unit-tests by @manuelzander in #1
- Enable more linting rules with Ruff. Update pre-commit hooks by @jackmpcollins in #13
New Contributors
- @manuelzander made their first contribution in #1
Full Changelog: v0.4.0...v0.4.1
v0.4.0
What's Changed
- Use pydantic models for OpenAI inputs/outputs by @jackmpcollins in #9
- Fix object streaming example in README by @jackmpcollins in #10
- Add test for async/coroutine function with FunctionCall by @jackmpcollins in #11
- Support setting OpenAI params using environment variables by @jackmpcollins in #12
Full Changelog: v0.3.0...v0.4.0
Configuration
The order of precedence for configuration is:
- Arguments passed when initializing an instance in Python
- Environment variables
The following environment variables can be set.
| Environment Variable | Description |
| --- | --- |
| MAGENTIC_OPENAI_MODEL | OpenAI model, e.g. "gpt-4" |
| MAGENTIC_OPENAI_TEMPERATURE | OpenAI temperature, a float |
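A minimal sketch of setting these from Python before use (the values shown are placeholders, not recommendations); arguments passed when initializing an instance still take precedence:

```python
import os

# Defaults picked up from the environment; arguments passed in Python
# override these.
os.environ["MAGENTIC_OPENAI_MODEL"] = "gpt-4"      # placeholder model name
os.environ["MAGENTIC_OPENAI_TEMPERATURE"] = "0.5"  # parsed as a float
```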
v0.3.0
What's Changed
- Tidy StreamedStr type hints. Improve README example. by @jackmpcollins in #7
- Add object streaming by @jackmpcollins in #8
Full Changelog: v0.2.0...v0.3.0
Object Streaming
Structured outputs can also be streamed from the LLM by using the return type annotation `Iterable` (or `AsyncIterable`). This allows each item to be processed while the next one is being generated. See the example in examples/quiz for how this can be used to improve user experience by quickly displaying/using the first item returned.
```python
from collections.abc import Iterable
from time import time

from magentic import prompt
from pydantic import BaseModel


class Superhero(BaseModel):  # model inferred from the example output below
    name: str
    age: int
    power: str
    enemies: list[str]


@prompt("Create a Superhero team named {name}.")
def create_superhero_team(name: str) -> Iterable[Superhero]:
    ...


start_time = time()
for hero in create_superhero_team("The Food Dudes"):
    print(f"{time() - start_time:.2f}s : {hero}")
# 2.23s : name='Pizza Man' age=30 power='Can shoot pizza slices from his hands' enemies=['The Hungry Horde', 'The Junk Food Gang']
# 4.03s : name='Captain Carrot' age=35 power='Super strength and agility from eating carrots' enemies=['The Sugar Squad', 'The Greasy Gang']
# 6.05s : name='Ice Cream Girl' age=25 power='Can create ice cream out of thin air' enemies=['The Hot Sauce Squad', 'The Healthy Eaters']
```
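An async counterpart is a small change. This is a hedged sketch reusing the `Superhero` model above; the function name is illustrative, and the consumption pattern (awaiting the call, then iterating) is an assumption:

```python
from collections.abc import AsyncIterable

from magentic import prompt


@prompt("Create a Superhero team named {name}.")
async def acreate_superhero_team(name: str) -> AsyncIterable[Superhero]:
    ...


# Assumed usage inside an async context:
# async for hero in await acreate_superhero_team("The Food Dudes"):
#     print(hero)
```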
v0.2.0
What's Changed
- Add streaming for string output by @jackmpcollins in #5
- Add publish github workflow by @jackmpcollins in #6
Full Changelog: v0.1.4...v0.2.0
Streaming
The `StreamedStr` (and `AsyncStreamedStr`) class can be used to stream the output of the LLM. This allows you to process the text while it is being generated, rather than receiving the whole output at once. Multiple `StreamedStr` can be created at the same time to stream LLM outputs concurrently. In the below example, generating the description for multiple countries takes approximately the same amount of time as for a single country.
```python
from magentic import prompt, StreamedStr


@prompt("Tell me about {country}")
def describe_country(country: str) -> StreamedStr:
    ...


# Print the chunks while they are being received
for chunk in describe_country("Brazil"):
    print(chunk, end="")
# 'Brazil, officially known as the Federative Republic of Brazil, is ...'

# Generate text concurrently by creating the streams before consuming them
streamed_strs = [describe_country(c) for c in ["Australia", "Brazil", "Chile"]]
[str(s) for s in streamed_strs]
# ["Australia is a country ...", "Brazil, officially known as ...", "Chile, officially known as ..."]
```
v0.1.4
What's Changed
- Remove ability to use function docstring as template by @jackmpcollins in #3
- Raise StructuredOutputError from ValidationError to clarify error by @jackmpcollins in #4
Full Changelog: v0.1.3...v0.1.4
v0.1.3
Main changes
- Support async prompt functions by @jackmpcollins in #2
Commits
- b7adc1a Support async prompt functions (#2)
- 428596e Add example for RAG with wikipedia
- 7e96aae Add test for parsing/serializing str|None
- 2d676c9 Use `__all__` to explicitly export from top-level
- c74d121 poetry add --group examples wikipedia
- 6fe55d9 Add examples/quiz
- e389547 Set --cov-report=term-missing for pytest-cov
Full Changelog: v0.1.2...v0.1.3
v0.1.2
Main Changes
- Handle pydantic models as dictionary values in `DictFunctionSchema.serialize_args`
- Exclude unset parameters when creating `FunctionCall` in `FunctionCallFunctionSchema.parse_args`
- Add `FunctionCall.__eq__` method
- Increase test coverage
Commits
- 506d689 poetry update - address aiohttp CVE
- feac090 Update README: improve first example, add more explanation
- dab90cf poetry add jupyter --group examples
- 992e65e poetry add pytest-cov
- a05f057 Test FunctionCallFunctionSchema serialize_args, and FunctionCall
- ed8e9d9 Test AnyFunctionSchema serialize_args
- 606cb30 Test DictFunctionSchema serialize_args
- ae6218e Test OrderedDict works with parse_args
- 82c1d41 Tidy function_schemas creation in Model.complete
Full Changelog: v0.1.1...v0.1.2