chore: setup workspace for exchange
This moves the exchange package into this repo to speed up
iteration, while still making it possible to publish this separately
for reuse.

Includes updates to the GitHub Actions workflows.
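A move like this is typically wired up as a uv workspace in the root `pyproject.toml`, so the nested package resolves locally during development while remaining publishable on its own. A hypothetical sketch (the member path is an assumption based on the `packages/exchange` layout, not taken from the actual root file):

```toml
# Hypothetical root pyproject.toml workspace declaration for uv.
[tool.uv.workspace]
members = ["packages/exchange"]

# Resolve the ai-exchange dependency from the workspace during development,
# while still allowing it to be built and published separately.
[tool.uv.sources]
ai-exchange = { workspace = true }
```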
baxen committed Oct 2, 2024
1 parent 676ac78 commit aecdd62
Showing 62 changed files with 5,868 additions and 52 deletions.
79 changes: 76 additions & 3 deletions .github/workflows/ci.yaml
@@ -5,7 +5,7 @@ on:
branches: [main]

jobs:
build:
exchange:
runs-on: ubuntu-latest

steps:
@@ -19,9 +19,82 @@ jobs:

- name: Ruff
run: |
uvx ruff check
uvx ruff format --check
uvx ruff check packages/exchange
uvx ruff format packages/exchange --check
- name: Run tests
working-directory: ./packages/exchange
run: |
uv run pytest tests -m 'not integration'
goose:
runs-on: ubuntu-latest

steps:
- uses: actions/checkout@v4

- name: Install UV
run: curl -LsSf https://astral.sh/uv/install.sh | sh

- name: Source Cargo Environment
run: source $HOME/.cargo/env

- name: Ruff
run: |
uvx ruff check src tests
uvx ruff format src tests --check
- name: Run tests
run: |
uv run pytest tests -m 'not integration'
# This runs integration tests of the OpenAI API, using Ollama to host models.
# This lets us test PRs from forks which can't access secrets like API keys.
ollama:
runs-on: ubuntu-latest

strategy:
matrix:
python-version:
# Only test the latest Python version.
- "3.12"
ollama-model:
# For quicker CI, use a smaller, tool-capable model than the default.
- "qwen2.5:0.5b"

steps:
- uses: actions/checkout@v4

- name: Install UV
run: curl -LsSf https://astral.sh/uv/install.sh | sh

- name: Source Cargo Environment
run: source $HOME/.cargo/env

- name: Set up Python
run: uv python install ${{ matrix.python-version }}

- name: Install Ollama
run: curl -fsSL https://ollama.com/install.sh | sh

- name: Start Ollama
run: |
# Run in the background, in a way that survives to the next step
nohup ollama serve > ollama.log 2>&1 &
# Block using the ready endpoint
time curl --retry 5 --retry-connrefused --retry-delay 1 -sf http://localhost:11434
# Tests use the OpenAI API, which does not have a mechanism to pull models. Run a
# simple prompt to (pull and) test the model first.
- name: Test Ollama model
run: ollama run $OLLAMA_MODEL hello || { cat ollama.log; exit 1; }
env:
OLLAMA_MODEL: ${{ matrix.ollama-model }}

- name: Run Ollama tests
run: uv run pytest tests -m integration -k ollama
working-directory: ./packages/exchange
env:
OLLAMA_MODEL: ${{ matrix.ollama-model }}
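The `ollama` job above works because Ollama exposes an OpenAI-compatible HTTP API, so the same provider code exercised by the integration tests can be pointed at a locally hosted model. A minimal sketch of the request shape involved (the base URL assumes Ollama's default port; the payload is only constructed here, not sent):

```python
import json

# Assumption: Ollama's OpenAI-compatible endpoint on its default port.
OLLAMA_BASE_URL = "http://localhost:11434/v1"

def chat_payload(model: str, prompt: str) -> dict:
    """Build an OpenAI-style chat completion request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

# The CI matrix pins a small tool-capable model for speed.
payload = chat_payload("qwen2.5:0.5b", "hello")
print(json.dumps(payload, indent=2))
```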
50 changes: 50 additions & 0 deletions .github/workflows/publish.yaml
@@ -0,0 +1,50 @@
name: Publish

# A release on goose will also publish exchange, if it has been updated.
# This means in some cases we may need a version bump in goose, without other changes, to release exchange.
on:
release:
types: [published]

jobs:
publish:
permissions:
id-token: write
contents: read
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4

- name: Get current version from pyproject.toml
id: get_version
run: |
echo "VERSION=$(grep -m 1 'version =' "pyproject.toml" | awk -F'"' '{print $2}')" >> $GITHUB_ENV
- name: Extract tag version
id: extract_tag
run: |
TAG_VERSION=$(echo "${{ github.event.release.tag_name }}" | sed -E 's/v(.*)/\1/')
echo "TAG_VERSION=$TAG_VERSION" >> $GITHUB_ENV
- name: Check if tag matches version from pyproject.toml
id: check_tag
run: |
if [ "${{ env.TAG_VERSION }}" != "${{ env.VERSION }}" ]; then
echo "::error::Tag version (${{ env.TAG_VERSION }}) does not match version in pyproject.toml (${{ env.VERSION }})."
exit 1
fi
- name: Install the latest version of uv
uses: astral-sh/setup-uv@v1
with:
version: "latest"

- name: Build Package
run: |
uv build -o dist --package goose-ai
uv build -o dist --package ai-exchange
- name: Publish package to PyPI
uses: pypa/gh-action-pypi-publish@release/v1
with:
skip-existing: true
47 changes: 0 additions & 47 deletions .github/workflows/pypi_release.yaml

This file was deleted.

95 changes: 95 additions & 0 deletions packages/exchange/README.md
@@ -0,0 +1,95 @@
<p align="center">
<a href="https://opensource.org/licenses/Apache-2.0"><img src="https://img.shields.io/badge/License-Apache_2.0-blue.svg"></a>
</p>

<p align="center">
<a href="#example">Example</a> •
<a href="#plugins">Plugins</a>
</p>

<p align="center"><strong>Exchange</strong> <em>- a uniform Python SDK for message generation with LLMs</em></p>

- Provides a flexible layer for message handling and generation
- Directly integrates Python functions into tool calling
- Persistently surfaces errors to the underlying models to support reflection

## Example

> [!NOTE]
> Before you can run this example, you need to set up an API key with
> `export OPENAI_API_KEY=your-key-here`
``` python
from exchange import Exchange, Message, Tool
from exchange.providers import OpenAiProvider

def word_count(text: str):
"""Get the count of words in text
Args:
text (str): The text with words to count
"""
return len(text.split(" "))

ex = Exchange(
provider=OpenAiProvider.from_env(),
model="gpt-4o",
system="You are a helpful assistant.",
tools=[Tool.from_function(word_count)],
)
ex.add(Message.user("Count the number of words in this current message"))

# The model sees it has a word count tool, and should use it along the way to answer
# This will call all the tools as needed until the model replies with the final result
reply = ex.reply()
print(reply.text)

# you can see all the tool calls in the message history
print(ex.messages)
```

## Plugins

*exchange* has a plugin mechanism to add support for additional providers and moderators. If you need a
provider not supported here, we'd be happy to review [contributions][CONTRIBUTING]. But you
can also consider building and using your own plugin.

To create a `Provider` plugin, subclass `exchange.provider.Provider` and implement
the `complete` method. For example, this is what we use as a mock in our tests.
You can see a full implementation example in the [OpenAiProvider][openaiprovider]. We
also generally recommend implementing a `from_env` classmethod to instantiate the provider.

``` python
class MockProvider(Provider):
def __init__(self, sequence: List[Message]):
# We'll use init to provide a preplanned reply sequence
self.sequence = sequence
self.call_count = 0

def complete(
self, model: str, system: str, messages: List[Message], tools: List[Tool]
) -> Message:
output = self.sequence[self.call_count]
self.call_count += 1
return output
```
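To see the mock's behavior on its own, here is a self-contained sketch; the `Message` dataclass below is a minimal stand-in for `exchange.message.Message`, included only so the example runs without the package installed:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Message:
    # Minimal stand-in for exchange.message.Message
    role: str
    text: str

class MockProvider:
    def __init__(self, sequence: List[Message]):
        # A preplanned reply sequence, replayed one message per call
        self.sequence = sequence
        self.call_count = 0

    def complete(self, model: str, system: str, messages: List[Message], tools: list) -> Message:
        output = self.sequence[self.call_count]
        self.call_count += 1
        return output

mock = MockProvider([Message("assistant", "first"), Message("assistant", "second")])
print(mock.complete("gpt-4o", "", [], []).text)  # first
print(mock.complete("gpt-4o", "", [], []).text)  # second
```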

Then use [python packaging's entrypoints][plugins] to register your plugin.

``` toml
[project.entry-points.'exchange.provider']
example = 'path.to.plugin:ExampleProvider'
```

Your plugin will then be available in your application or other applications built on *exchange*
through:

``` python
from exchange.providers import get_provider

provider = get_provider('example').from_env()
```

[CONTRIBUTING]: CONTRIBUTING.md
[openaiprovider]: src/exchange/providers/openai.py
[plugins]: https://packaging.python.org/en/latest/guides/creating-and-discovering-plugins/
48 changes: 48 additions & 0 deletions packages/exchange/pyproject.toml
@@ -0,0 +1,48 @@
[project]
name = "ai-exchange"
version = "0.9.3"
description = "a uniform Python SDK for message generation with LLMs"
readme = "README.md"
requires-python = ">=3.10"
authors = [{ name = "Block", email = "[email protected]" }]
packages = [{ include = "exchange", from = "src" }]
dependencies = [
"griffe>=1.1.1",
"attrs>=24.2.0",
"jinja2>=3.1.4",
"tiktoken>=0.7.0",
"httpx>=0.27.0",
"tenacity>=9.0.0",
]

[tool.hatch.build.targets.wheel]
packages = ["src/exchange"]

[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"

[tool.uv]
dev-dependencies = ["pytest>=8.3.2", "pytest-vcr>=1.0.2", "codecov>=2.1.13"]

[project.entry-points."exchange.provider"]
openai = "exchange.providers.openai:OpenAiProvider"
azure = "exchange.providers.azure:AzureProvider"
databricks = "exchange.providers.databricks:DatabricksProvider"
anthropic = "exchange.providers.anthropic:AnthropicProvider"
bedrock = "exchange.providers.bedrock:BedrockProvider"
ollama = "exchange.providers.ollama:OllamaProvider"
google = "exchange.providers.google:GoogleProvider"

[project.entry-points."exchange.moderator"]
passive = "exchange.moderators.passive:PassiveModerator"
truncate = "exchange.moderators.truncate:ContextTruncate"
summarize = "exchange.moderators.summarizer:ContextSummarizer"

[project.entry-points."metadata.plugins"]
ai-exchange = "exchange:module_name"

[tool.pytest.ini_options]
markers = [
"integration: marks tests that need to authenticate (deselect with '-m \"not integration\"')",
]
9 changes: 9 additions & 0 deletions packages/exchange/src/exchange/__init__.py
@@ -0,0 +1,9 @@
"""Classes for interacting with the exchange API."""

from exchange.tool import Tool # noqa
from exchange.content import Text, ToolResult, ToolUse # noqa
from exchange.message import Message # noqa
from exchange.exchange import Exchange # noqa
from exchange.checkpoint import CheckpointData, Checkpoint # noqa

module_name = "ai-exchange"
