Add LMStudioClient and update __init__.py #210
base: main
Conversation
Looks good. I suggested a few changes.
    assert isinstance(input, Sequence), "input must be a sequence of text"
    final_model_kwargs["input"] = input
elif model_type == ModelType.LLM:
    messages = []
Should use type hints for messages
Suggested change:
-   messages = []
+   messages: List[Dict[str, str]] = []
ref: model_client/openai_client.py line 234
if input is not None and input != "":
    messages.append({"role": "system", "content": "You are a helpful assistant. Provide a direct and concise answer to the user's question. Do not include any URLs or references in your response."})
    messages.append({"role": "user", "content": input})
assert isinstance(messages, Sequence), "input must be a sequence of messages"
I don't think this assert statement is needed, since `messages` is explicitly created as a list just a few lines above.
assert isinstance(messages, Sequence), "input must be a sequence of messages"
final_model_kwargs["messages"] = messages
# Set default values for controlling response length if not provided |
Consider using a for-loop instead
Suggested change:
    # Set default values for controlling response length if not provided
    default_values = [
        ("temperature", 0.1),
        ("frequency_penalty", 0.0),
        ("presence_penalty", 0.0),
        ("stop", ["\n", "###", "://"]),
    ]
    for key, val in default_values:
        final_model_kwargs.setdefault(key, val)
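A minimal sketch of why the loop above is equivalent to the repeated `setdefault` calls: `dict.setdefault` only writes a key when it is absent, so caller-supplied values survive (the key/value pairs mirror the suggestion; the caller-provided `temperature` is illustrative).

```python
# Apply defaults only for keys the caller did not supply.
final_model_kwargs = {"temperature": 0.7}  # caller-provided value

default_values = [
    ("temperature", 0.1),
    ("frequency_penalty", 0.0),
    ("presence_penalty", 0.0),
    ("stop", ["\n", "###", "://"]),
]
for key, val in default_values:
    final_model_kwargs.setdefault(key, val)  # no-op if key already set

# Caller's temperature (0.7) is kept; the missing keys get defaults.
```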
response.raise_for_status()
return response.json()
def parse_chat_completion(self, completion: Dict) -> GeneratorOutput: |
I suggest writing more precise error messages.
Suggested change (note: in the two early-return branches `content` is not yet defined, so the raw completion is passed instead):
    def parse_chat_completion(self, completion: Dict) -> GeneratorOutput:
        """Parse the completion to a GeneratorOutput."""
        try:
            if "choices" not in completion:
                return GeneratorOutput(data=None, error="Error parsing the completion: 'choices' not in 'completion'.", raw_response=str(completion))
            elif not len(completion["choices"]) > 0:
                return GeneratorOutput(data=None, error="Error parsing the completion: 'choices' length is 0.", raw_response=str(completion))
            else:
                content = completion["choices"][0]["message"]["content"]
                # Clean up the content
                content = self._clean_response(content)
                return GeneratorOutput(data=None, raw_response=content)
        except Exception as e:
            log.error(f"Error parsing the completion: {e}")
            return GeneratorOutput(data=None, error=str(e), raw_response=completion)
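A standalone sketch of the parsing logic being discussed, using a plain dict as a hypothetical stand-in for adalflow's `GeneratorOutput` so the error paths can be exercised without the library:

```python
from typing import Any, Dict

def parse_chat_completion(completion: Dict[str, Any]) -> Dict[str, Any]:
    """Return a dict mimicking GeneratorOutput(data, error, raw_response)."""
    if "choices" not in completion:
        return {"data": None, "error": "'choices' not in completion", "raw_response": str(completion)}
    if len(completion["choices"]) == 0:
        return {"data": None, "error": "'choices' length is 0", "raw_response": str(completion)}
    content = completion["choices"][0]["message"]["content"]
    return {"data": None, "error": None, "raw_response": content}

ok = parse_chat_completion({"choices": [{"message": {"content": "hi"}}]})
bad = parse_chat_completion({})  # triggers the "'choices' not in completion" branch
```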
elif model_type == ModelType.LLM:
    messages = []
    if input is not None and input != "":
        messages.append({"role": "system", "content": "You are a helpful assistant. Provide a direct and concise answer to the user's question. Do not include any URLs or references in your response."})
Since we use a uni-prompt, where both the system and user prompts live in the same jinja2 template, we only need one message here, and by default we use the system role:
    messages.append({"role": "system", "content": input})
Please modify it accordingly.
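A minimal sketch of the single-message construction the comment asks for (`build_messages` is a hypothetical helper name; the point is that the rendered uni-prompt goes into one system-role message):

```python
from typing import Dict, List

def build_messages(input: str) -> List[Dict[str, str]]:
    """Wrap the rendered uni-prompt in a single system-role message."""
    messages: List[Dict[str, str]] = []
    if input:  # skip None and empty strings
        messages.append({"role": "system", "content": input})
    return messages
```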
# Set default values for controlling response length if not provided
final_model_kwargs.setdefault("max_tokens", 50)
final_model_kwargs.setdefault("temperature", 0.1)
Please use 0 as the default temperature.
Or we should let the model provider decide the default behavior.
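A sketch of the "let the provider decide" alternative: instead of injecting client-side defaults, forward only the kwargs the caller actually passed, so the server applies its own defaults for anything missing (the payload shape and `build_payload` name are illustrative, not from the PR):

```python
from typing import Any, Dict

def build_payload(model: str, **model_kwargs: Any) -> Dict[str, Any]:
    """Forward only explicitly provided kwargs; omit the rest entirely."""
    payload: Dict[str, Any] = {"model": model}
    payload.update({k: v for k, v in model_kwargs.items() if v is not None})
    return payload
```

Note that an explicit `temperature=0` is still forwarded, since `0 is not None`; only omitted or `None` values fall through to the provider's defaults.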
@@ -60,6 +60,10 @@
    "adalflow.components.model_client.openai_client.get_probabilities",
    OptionalPackages.OPENAI,
)
LMStudioClient = LazyImport(
    "adalflow.components.model_client.lm_studio_client.LMStudioClient",
    OptionalPackages.LMSTUDIO,
The OptionalPackages entry is not defined; please add it. This dependency should also be added to the pyproject.toml under /adalflow.
@Jacck Great effort! Sorry for the slow review.
Please rebase and change as commented. Additionally, please try to add a test file under /tests.
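A hedged sketch of what such a test file (e.g. a hypothetical tests/test_lm_studio_client.py) could look like. Since the PR's final interface isn't settled, a minimal stand-in class is defined inline so the sketch runs on its own; in the real test it would be replaced by the imported `LMStudioClient` with the HTTP layer mocked:

```python
import unittest

class FakeLMStudioClient:
    """Stand-in exposing parse logic like the client's parse_chat_completion."""

    def parse_chat_completion(self, completion):
        if "choices" not in completion or not completion["choices"]:
            return {"data": None, "error": "malformed completion", "raw_response": str(completion)}
        content = completion["choices"][0]["message"]["content"]
        return {"data": None, "error": None, "raw_response": content}

class TestLMStudioClient(unittest.TestCase):
    def test_parses_content(self):
        out = FakeLMStudioClient().parse_chat_completion(
            {"choices": [{"message": {"content": "hi"}}]}
        )
        self.assertEqual(out["raw_response"], "hi")
        self.assertIsNone(out["error"])

    def test_reports_malformed_completion(self):
        out = FakeLMStudioClient().parse_chat_completion({})
        self.assertIsNotNone(out["error"])
```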
log = logging.getLogger(__name__)

class LMStudioClient(ModelClient):
    """A component wrapper for the LM Studio API client."""
Please add relevant links, setup instructions for the client, and some examples in this docstring. LMStudioClient works with LM Studio as a provider of local LLMs.
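A hedged sketch of the kind of docstring the comment asks for, shown as a plain string constant so it runs standalone (the default server URL reflects LM Studio's documented local endpoint; the constructor signature in the example is an assumption about the PR's final interface):

```python
# Illustrative docstring content for the LMStudioClient class.
LMSTUDIO_CLIENT_DOC = """A component wrapper for the LM Studio API client.

LM Studio (https://lmstudio.ai) serves local LLMs behind an OpenAI-compatible
HTTP API.

Setup:
    1. Install LM Studio and download a model in its UI.
    2. Start the local server (default endpoint: http://localhost:1234/v1).

Example:
    client = LMStudioClient(base_url="http://localhost:1234/v1")
"""
```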