There is an exception when I deploy this project and ask questions. #13
Is there a dashscope key injected into the env, or can this dashscope key be used to call the Baichuan4 model directly? |
Yes, I have one, but it doesn't seem to support Baichuan4. If I want to use models other than OpenAI and Qwen, what should I do? |
You need to implement a client for calling Baichuan4 in memoryscope.core.models, maintaining the same interface as the base class. |
Yes, you can write a client for Baichuan4 based on the BaseModel framework. You can refer to LlamaIndexGenerationModel for guidance. If you are familiar with LlamaIndex, a simpler way is to add a class corresponding to Baichuan to the llama_index mapping at line 27 of LlamaIndexGenerationModel. |
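To make the suggestion above concrete, here is a minimal sketch of the shape such a client could take. This is NOT MemoryScope's actual API: the base-class name (`BaseModel`), the `chat` method, the `Baichuan4Client` name, and the OpenAI-style request/response shape are all assumptions for illustration; check the real base class in memoryscope.core.models before implementing.

```python
# Hypothetical sketch of a Baichuan4 client that keeps the same interface
# as an assumed base class. Names and payload shapes are illustrative only.
from abc import ABC, abstractmethod
from typing import Callable, Dict, List


class BaseModel(ABC):
    """Stand-in for the base class that clients in memoryscope.core.models follow."""

    @abstractmethod
    def chat(self, messages: List[Dict[str, str]]) -> str:
        """Take OpenAI-style chat messages and return the assistant's reply text."""


class Baichuan4Client(BaseModel):
    """Hypothetical Baichuan4 client implementing the assumed BaseModel interface.

    The HTTP transport is injected so this sketch runs without network access;
    a real client would POST the payload to Baichuan's chat endpoint instead.
    """

    def __init__(self, api_key: str, transport: Callable[[dict], dict]):
        self.api_key = api_key
        self.transport = transport  # e.g. a thin wrapper around requests.post

    def chat(self, messages: List[Dict[str, str]]) -> str:
        payload = {"model": "Baichuan4", "messages": messages}
        response = self.transport(payload)  # assumed OpenAI-style response shape
        return response["choices"][0]["message"]["content"]


# Offline usage example: a fake transport stands in for the real HTTP call.
def fake_transport(payload: dict) -> dict:
    last_user_msg = payload["messages"][-1]["content"]
    return {"choices": [{"message": {"content": f"echo: {last_user_msg}"}}]}


client = Baichuan4Client(api_key="sk-...", transport=fake_transport)
reply = client.chat([{"role": "user", "content": "hello"}])
```

Injecting the transport keeps the sketch testable offline; in a real client you would replace `fake_transport` with an authenticated HTTP call and map any differences in Baichuan's response format back to the interface the base class expects.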
I am not familiar with LlamaIndex; I'll have to learn it now. So that means we don't need an api_url, and a Baichuan4 LlamaIndex integration is enough? |
Yes, accessing it through LlamaIndex is the most convenient. |
Thank you, I will give it a try now. |