
add mock client for fast prototyping code #176

Open · wants to merge 1 commit into base: main

Conversation

@allenanswerzq commented Sep 19, 2024

This PR adds a mock client so users can prototype code quickly without waiting for LLM responses, which is especially useful in complex scenarios.

The code is tested:

import ell

@ell.simple(model="mock")
def hello(name: str):
    """You are a helpful assistant."""
    return f"Say hello to {name}!" 

greeting = hello("Sam Altman")
print(greeting)

with output like

mock_BH32NvWqNSmq9C3tRsuFnDzPXcz7eG
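
As a rough sketch of the general shape such a mock client could take (hypothetical code, not this PR's actual implementation): an object that mimics the OpenAI chat-completions surface and fabricates a response instead of making a network call. A real integration would need to return whatever response objects ell expects rather than a plain dict.

import random
import string

class _MockCompletions:
    @staticmethod
    def create(model, messages, **kwargs):
        # Fabricate a distinguishable token instead of calling an API,
        # mirroring the mock_... output shown above.
        tail = "".join(random.choices(string.ascii_letters + string.digits, k=30))
        return {"choices": [{"message": {"role": "assistant", "content": f"mock_{tail}"}}]}

class _MockChat:
    completions = _MockCompletions()

class MockClient:
    """Hypothetical sketch: mimics the OpenAI chat-completions call shape."""
    chat = _MockChat()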

@MadcowD (Owner) commented Sep 19, 2024

this is v cute, not totally sure if this should be in core atm
maybe thoughts from @alex-dixon & @emfastic
what's your main use case for this? :)

@allenanswerzq (Author) commented Sep 19, 2024

Thanks for the reply. I'm working on something where I need to split a big task into many small steps and then call the LLM to produce output for each one.

When starting out, my attention goes first to how the task is split, not to the LLM output for each small step, so a mock client lets me verify the whole flow quickly.

Without it, I have to wait for real LLM output, which takes much longer before I can see the whole flow.
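
To make that concrete, a task-decomposition flow might look like the hypothetical sketch below (names and prompts are illustrative, not from the PR); with the mock model each call returns instantly, so the orchestration logic can be checked end to end:

import ell

@ell.simple(model="mock")
def plan(task: str):
    """You are a planner that splits tasks into steps."""
    return f"Split this task into numbered steps: {task}"

@ell.simple(model="mock")
def execute_step(step: str):
    """You are a worker that completes one step."""
    return f"Complete this step: {step}"

steps = plan("migrate the billing service").split("\n")
results = [execute_step(s) for s in steps]  # each call returns instantly
print(results)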

@allenanswerzq (Author) commented

Just found that LangChain also has a fake model built in: https://github.com/search?q=repo%3Alangchain-ai%2Flangchain%20FakeListChatModel&type=code I guess this is a common need.
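
For reference, LangChain's fake model replays a fixed list of canned replies; roughly like this (the exact import path may vary by LangChain version):

from langchain_community.chat_models import FakeListChatModel

# Responses are returned in order, regardless of the input prompt.
fake = FakeListChatModel(responses=["first canned reply", "second canned reply"])
print(fake.invoke("anything").content)  # -> first canned reply
print(fake.invoke("anything").content)  # -> second canned reply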

@MadcowD (Owner) commented Sep 23, 2024

I see. Can you give me an example of a task that you'd want to develop like this? I'm just generally curious :)

@allenanswerzq (Author) commented Sep 24, 2024

Thanks for the interest, @MadcowD. I'm running some experiments using LLMs to "translate" one programming language into another, like C/C++ to Rust.

To do that, I generally need to break a big C++ codebase into many smaller pieces and then use the LLM to convert each one.

Some demo code looks like this: https://github.com/allenanswerzq/llmcc/blob/master/b.rs

This is a fun project to work on, haha.
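
A rough sketch of that kind of flow (illustrative code, not taken from llmcc), where a mock model lets the chunk-and-stitch logic be exercised before any real translation happens:

import ell

@ell.simple(model="mock")  # swap in a real model once the flow works
def to_rust(cpp_snippet: str):
    """You are an expert C++-to-Rust translator."""
    return f"Translate this C++ to idiomatic Rust:\n{cpp_snippet}"

def translate_file(cpp_source: str, chunk_size: int = 40):
    # Naive chunking by line count; a real splitter would respect
    # function and class boundaries.
    lines = cpp_source.splitlines()
    chunks = ["\n".join(lines[i:i + chunk_size])
              for i in range(0, len(lines), chunk_size)]
    return "\n\n".join(to_rust(c) for c in chunks)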

@alex-dixon (Contributor) commented

@MadcowD Might be useful for our tests… I've written a couple of hackier versions. I'd vote we wait to see whether more people are looking for a client like this before including it in the package itself.

@MadcowD (Owner) commented Sep 25, 2024

Okay, I'm more open to the idea now :)

@MadcowD (Owner) commented Sep 26, 2024

What about an @ell.mock decorator to separate mocking from models? I guess you'd want to mock both complex and simple outputs as well. Hmm.
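
One possible shape for such a decorator, as a purely hypothetical sketch rather than ell's actual API: wrap the LMP so it never reaches a client and instead returns either a default token or the result of a user-supplied function:

import functools

def mock(model: str, mock_func=None):
    """Hypothetical @ell.mock: never calls the model, returns a canned value."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            fn(*args, **kwargs)  # still run the prompt body so its logic is exercised
            return mock_func() if mock_func else f"mock_response_from_{model}"
        return wrapper
    return decorator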

@allenanswerzq (Author) commented

Good advice, I'll find some time to change it.

@allenanswerzq (Author) commented Sep 27, 2024

Tested like this, please take a look:

import ell

@ell.mock(model="gpt-4o")
def hello(name: str):
    """You are a helpful assistant."""
    return f"Say hello to {name}!" 

greeting = hello("Sam Altman")
print(greeting)

with default output: mock_2xufdLOgJDo

This also adds a customizable function that users can tweak to their needs:

@ell.mock(model="gpt-4o", mock_func=lambda: "fn mock() {}")
def hello(name: str):
    """You are a helpful assistant."""
    return f"Say hello to {name}!" 

greeting = hello("Sam Altman")
print(greeting)

with output: fn mock() {}
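
Since mock_func is just a callable, it could also replay a sequence of canned outputs, similar to LangChain's FakeListChatModel; for example, assuming the decorator invokes mock_func once per call:

from itertools import cycle
import ell

canned = cycle(["fn first() {}", "fn second() {}"])

@ell.mock(model="gpt-4o", mock_func=lambda: next(canned))
def hello(name: str):
    """You are a helpful assistant."""
    return f"Say hello to {name}!"

print(hello("Sam Altman"))  # -> fn first() {}
print(hello("Sam Altman"))  # -> fn second() {}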

@MadcowD (Owner) commented Sep 29, 2024

Great I'll take a look :)

@ajac-zero commented

@MadcowD If the aim is to avoid bloating the core lib, you could use a mock LLM library like mockai as a dev dependency instead; it provides a fake server for the openai/anthropic clients.

It works with ell like this:

# hello.py
import ell
from openai import OpenAI

# Set the base url to the local server, runs on port 8100 by default
client = OpenAI(api_key=":)", base_url="http://localhost:8100/openai")

@ell.simple(model="gpt-4o", client=client)
def hello(world: str):
    """You are a helpful assistant that writes in lower case"""
    return f"Say hello to {world} with a poem."

print(hello("sama"))

Run the server:

$ pip install ai-mock 
$ mockai
Starting MockAI server ...

# or use uv and don't even install it
$ uvx --from ai-mock mockai
Starting MockAI server ...

Run the LMP:

$ python hello.py
# By default it echoes the input, but predefined responses are supported
Say hello to sama with a poem. 

There's full support for streaming/tools as well. It's basically used for testing the functionality of the code surrounding the LLM.
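
That also slots naturally into tests. For example, a small pytest check against the local mock server (hypothetical test file, assuming the server from above is already running on port 8100):

# test_hello.py
from hello import hello  # the LMP defined above, pointed at the mock server

def test_flow_wires_input_through():
    # MockAI echoes the prompt by default, so this checks the code
    # around the LLM call, not the model itself.
    assert "sama" in hello("sama")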
