Priority: Undecided
OS type: Ubuntu
Hardware type: Xeon-GNR
Running nodes: Single Node
Version: N/A
TL;DR: The Pydantic model accepts a `streaming` parameter rather than the `stream` parameter defined by the TGI API spec.
There is a discrepancy between the TGI interface standard and the Pydantic models defined in this repo: the models name the `stream` parameter `streaming`. As a result, the JSON that is accepted (marshalled and unmarshalled) expects `streaming` as the JSON key rather than the TGI-standard `stream`.
I discovered this during this PR, where I noticed while testing that when I POSTed to the chat endpoint via curl using the `stream` JSON key, the program would not successfully unmarshal the JSON object.
After talking with @xiguiw and looking at the TGI documentation, I believe this is an error in the codebase.
Note that fixing it may be a breaking change.
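A minimal Pydantic sketch of the mismatch (hypothetical model names, not the actual `docarray.py` definitions): with Pydantic's default behavior of ignoring unknown keys, a TGI-compliant payload that uses the `stream` key never reaches a field named `streaming`, so the model silently falls back to the default.

```python
# Sketch only: illustrates the field-name mismatch, assuming default
# Pydantic config (unknown keys are ignored rather than rejected).
from pydantic import BaseModel

class ChatRequestCurrent(BaseModel):
    # Field is named `streaming`, so the TGI-standard key `stream`
    # is treated as an unknown key and the default is used instead.
    streaming: bool = False

class ChatRequestFixed(BaseModel):
    # Renaming the field to `stream` matches the TGI API spec.
    stream: bool = False

payload = {"stream": True}  # what a TGI-compliant client sends

print(ChatRequestCurrent(**payload).streaming)  # False -- key was ignored
print(ChatRequestFixed(**payload).stream)       # True  -- matches the spec
```

If backward compatibility with existing `streaming` clients matters, an alias such as `streaming: bool = Field(default=False, alias="stream")` (with `populate_by_name=True` in Pydantic v2) would accept both keys, though whether to support both is a design choice for the maintainers.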
Reproduce steps:
GenAIComps/comps/cores/proto/docarray.py, line 187 in 8d6b4b0
GenAIComps/comps/cores/proto/docarray.py, line 232 in 8d6b4b0
TGI API reference: https://huggingface.github.io/text-generation-inference/
@jjmaturino
Thanks for catching this! @XinyaoWa will help fix it; it's a work in progress.
Hi @jjmaturino, the bug has been fixed. Could you please help verify it with the latest code? Thanks.
Assignee: XinyaoWa