
Commit

Set version 0.1.0

svilupp committed Nov 9, 2023
1 parent cb91381 commit 0235c3c
Showing 2 changed files with 9 additions and 6 deletions.
2 changes: 1 addition & 1 deletion Project.toml

@@ -1,7 +1,7 @@
 name = "PromptingTools"
 uuid = "670122d1-24a8-4d70-bfce-740807c42192"
 authors = ["J S @svilupp and contributors"]
-version = "0.1.0-DEV"
+version = "0.1.0"

 [deps]
 HTTP = "cd3eb016-35fb-5094-929b-558a96fad6f3"
13 changes: 8 additions & 5 deletions README.md
@@ -91,11 +91,10 @@ Some features:
 ## Advanced Examples

 TODO:
-[ ] Add more practical examples (DataFrames!)
-[ ] Show mini tasks with structured extraction
-[ ] Add an example of how to build RAG in 50 lines
-
-
+[ ] Add more practical examples (DataFrames!)
+[ ] Add mini tasks with structured extraction
+[ ] Add an example of how to build a RAG app in 50 lines

 ### Advanced Prompts / Conversations

@@ -188,7 +187,7 @@ aiclassify(:IsStatementTrue; statement = "Is two plus three a vegetable on Mars?
 # unknown
 ```

-In the above example, we used a prompt template `:IsStatementTrue`, which automatically expands into the following system prompt:
+In the above example, we used a prompt template `:IsStatementTrue`, which automatically expands into the following system prompt (and a separate user prompt):

 > "You are an impartial AI judge evaluating whether the provided statement is \"true\" or \"false\". Answer \"unknown\" if you cannot decide."
@@ -200,6 +199,8 @@ TBU... with `aiextract`

 ### More Examples

+TBU...
+
 Find more examples in the [examples/](examples/) folder.

## Package Interface
@@ -211,6 +212,8 @@ The package is built around three key elements:

Why this design? Different APIs require different prompt formats. For example, OpenAI's API requires an array of dictionaries with `role` and `content` fields, while Ollama's API for the Zephyr-7B model requires a ChatML schema with one big string and separators like `<|im_start|>user\nABC...<|im_end|>user`. For separating sections in your prompt, OpenAI prefers markdown headers (`##Response`), whereas Anthropic's models perform better with HTML tags (`<text>{{TEXT}}</text>`).
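The format difference described above can be sketched in plain Julia (the helper names below are illustrative, not the package's API):

```julia
# The same two-message conversation, rendered for two different backends.
messages = [(role = "system", content = "You are a helpful assistant."),
            (role = "user",   content = "What is 1 + 1?")]

# OpenAI-style: an array of dictionaries with `role` and `content` fields.
openai_format(msgs) = [Dict("role" => m.role, "content" => m.content) for m in msgs]

# ChatML-style (e.g. Zephyr-7B via Ollama): one big string with separator tokens.
chatml_format(msgs) =
    join(["<|im_start|>$(m.role)\n$(m.content)<|im_end|>" for m in msgs], "\n")

openai_format(messages)
chatml_format(messages)
```

Both renderings carry the same conversation; only the serialization expected by the API changes.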

This package is heavily inspired by [Instructor](https://github.com/jxnl/instructor) and its clever use of the function-calling API.

**Prompt Schemas**

The key type used to customize how inputs are prepared for LLMs and how the LLMs are called (via multiple dispatch).
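The dispatch idea can be sketched as follows (the type and function names here are hypothetical, not the package's actual exports):

```julia
# Each backend gets its own schema type; `render` is specialized per schema
# via multiple dispatch, so adding a new API is just a new type + method.
abstract type AbstractPromptSchema end
struct OpenAILikeSchema <: AbstractPromptSchema end
struct ChatMLLikeSchema <: AbstractPromptSchema end

render(::OpenAILikeSchema, msgs) =
    [Dict("role" => m.role, "content" => m.content) for m in msgs]
render(::ChatMLLikeSchema, msgs) =
    join(["<|im_start|>$(m.role)\n$(m.content)<|im_end|>" for m in msgs], "\n")
```

Callers pass the schema as the first argument, and Julia selects the right rendering method at dispatch time.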
