add mock client for fast prototyping code #176
base: main
Conversation
this is v cute, not totally sure if this should be in core atm
thanks for the reply. I'm doing some work where I need to split a big task into many small steps and then call the llm to get output for each one. When starting out, my attention is on how the task is split, not on the llm output for each small step. Having a mock client lets me quickly verify the whole flow; otherwise I have to wait for real output from the llm, which takes much longer before I can see the whole flow.
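As a minimal sketch of that workflow (plain Python, hypothetical names, no particular library assumed): a fake_llm stub stands in for the real model, so the task-splitting logic can be verified end to end before any real completions are requested.

```python
def split_task(task: str) -> list[str]:
    # Hypothetical splitter: break a big task into small steps.
    return [f"{task} / step {i}" for i in range(3)]

def fake_llm(prompt: str) -> str:
    # Mock client: returns instantly, no waiting on a real model.
    return f"<mock output for: {prompt}>"

def run_flow(task: str, llm=fake_llm) -> list[str]:
    # Swap in a real client for `llm` once the flow looks right.
    return [llm(step) for step in split_task(task)]

for out in run_flow("translate this file"):
    print(out)
```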
Just found that LangChain also has a fake model built in: https://github.com/search?q=repo%3Alangchain-ai%2Flangchain%20FakeListChatModel&type=code I guess this is a common need for people.
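For reference, a small example of that LangChain fake model (assumes the langchain-community package is installed; responses come from a canned list, with no API key or network needed):

```python
from langchain_community.chat_models import FakeListChatModel

# Canned responses are returned in order.
fake = FakeListChatModel(responses=["fn mock() {}", "ok"])

print(fake.invoke("translate this C++ to Rust").content)  # fn mock() {}
print(fake.invoke("anything else").content)               # ok
```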
I see
Thanks for the interest, @MadcowD. I'm doing some experiments using the llm to "translate" one programming language to another, like c/c++ to rust. Doing that requires breaking a big c++ codebase into many smaller pieces and then using the llm to convert each one. Some demo code looks like this: https://github.com/allenanswerzq/llmcc/blob/master/b.rs This is a fun project to work on, haha.
@MadcowD Might be useful for our tests… I've written a couple of hackier versions. I'd vote we wait to see if more people are looking for a client like this before including it in the package itself
okays i am more open to the idea now :) |
What about an …
good advice, will find some time to change it |
d289013 to 10c3e2a
tested like this, please take a look:

```python
import ell

@ell.mock(model="gpt-4o")
def hello(name: str):
    """You are a helpful assistant."""
    return f"Say hello to {name}!"

greeting = hello("Sam Altman")
print(greeting)
```

with default output: mock_2xufdLOgJDo

This also adds a customizable mock function that users can tweak to their needs:

```python
@ell.mock(model="gpt-4o", mock_func=lambda: "fn mock() {}")
def hello(name: str):
    """You are a helpful assistant."""
    return f"Say hello to {name}!"

greeting = hello("Sam Altman")
print(greeting)
```

with output: fn mock() {}
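The PR's actual implementation isn't shown in this thread, so here is only a rough, hypothetical sketch of the idea (the names mirror the example above, but the body is an assumption, not the real ell code): a decorator along these lines would skip the model call and return either the result of a user-supplied mock_func or a random mock id.

```python
import functools
import random
import string

def mock(model: str, mock_func=None):
    """Hypothetical mock decorator: ignores `model` and never calls an LLM."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            fn(*args, **kwargs)  # still run the body, so prompt construction is exercised
            if mock_func is not None:
                return mock_func()  # user-supplied canned response
            # Default: a random id in the style of mock_2xufdLOgJDo above.
            suffix = "".join(random.choices(string.ascii_letters + string.digits, k=11))
            return f"mock_{suffix}"
        return wrapper
    return decorator
```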
Great, I'll take a look :)
@MadcowD if the aim is to avoid bloating the core lib, you could use a mock llm library like mockai as a dev dependency instead, which provides a fake server for the openai/anthropic clients. It works with ell like this:

```python
# hello.py
import ell
from openai import OpenAI

# Set the base url to the local server, runs on port 8100 by default
client = OpenAI(api_key=":)", base_url="http://localhost:8100/openai")

@ell.simple(model="gpt-4o", client=client)
def hello(world: str):
    """You are a helpful assistant that writes in lower case"""
    return f"Say hello to {world} with a poem."

print(hello("sama"))
```

Run the server:

```console
$ pip install ai-mock
$ mockai
Starting MockAI server ...

# or use uv and don't even install it
$ uvx --from ai-mock mockai
Starting MockAI server ...
```

Run the LMP:

```console
$ python hello.py
# By default it echoes the input, but predefined responses are supported
Say hello to sama with a poem.
```

There's full support for streaming/tools as well. It's basically meant for testing the functionality of the code surrounding the llm.
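Building on that, a hedged sketch of how the mock server could back a unit test (it assumes the MockAI server above is running on port 8100 and echoing prompts back, as described):

```python
# test_hello.py
import ell
from openai import OpenAI

client = OpenAI(api_key=":)", base_url="http://localhost:8100/openai")

@ell.simple(model="gpt-4o", client=client)
def hello(world: str):
    """You are a helpful assistant that writes in lower case"""
    return f"Say hello to {world} with a poem."

def test_hello_flow():
    # The server echoes the input, so we can test the code around
    # the llm call without touching a real model.
    assert "sama" in hello("sama")
```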
This PR adds a mock client so users can prototype code quickly instead of waiting for real LLM responses, which is especially useful in complex scenarios. The code is tested, with output as shown in the conversation above.