Python backend with multiple instances causes unexpected and non-deterministic results #7907

Open
NadavShmayo opened this issue Dec 25, 2024 · 0 comments


Description
When using a Python backend with multiple model instances and sending many identical inference requests, the results are non-deterministic and sometimes not even close to the expected output.
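For reference, "multiple model instances" here refers to the instance_group setting in the model's config.pbtxt. A minimal sketch of such a configuration (the model name, count, and kind are illustrative placeholders, not the actual values from the example repository):

```
# config.pbtxt -- illustrative values only
name: "python_model"
backend: "python"
max_batch_size: 8

instance_group [
  {
    count: 4      # the problem appears with count > 1; it does not reproduce with count: 1
    kind: KIND_CPU
  }
]
```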

Triton Information
24.09

Are you using the Triton container or did you build it yourself?
Triton container (with additional Python libraries)

To Reproduce
Clone the following repository and follow the steps in the README.md file:
https://github.com/NadavShmayo/fairseq-triton-example

Expected behavior
I expect the outputs from the Python model to be identical for requests with the same input values.
The Locust script in the example repository I created prints the output every time it differs from the expected output.
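Conceptually, the check the Locust script performs boils down to something like the following (a minimal sketch using the Triton HTTP client; the model name, tensor names, shape, and dtype are placeholders, not the ones used in the example repository):

```python
import numpy as np
import tritonclient.http as httpclient

MODEL_NAME = "python_model"        # placeholder, not the actual model name from the repo
NUM_REQUESTS = 100

client = httpclient.InferenceServerClient(url="localhost:8000")

# One fixed input, sent repeatedly; every response should be identical.
data = np.ones((1, 16), dtype=np.float32)                  # placeholder shape/dtype
inp = httpclient.InferInput("INPUT0", list(data.shape), "FP32")
inp.set_data_from_numpy(data)

expected = None
for i in range(NUM_REQUESTS):
    result = client.infer(MODEL_NAME, inputs=[inp])
    out = result.as_numpy("OUTPUT0")                       # placeholder output name
    if expected is None:
        expected = out
    elif not np.array_equal(out, expected):
        # With instance count > 1 this fires intermittently; with count 1 it never does.
        print(f"request {i}: output differs from the first response")
```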

Additional Information

  • I believe this is an issue with Triton and not with my models, since the problem does not reproduce with an instance count of 1.
  • To avoid using multiple instances, I instead tried decoupled mode with a ThreadPoolExecutor, which led to the same problem, even when moving all object initialization inside the thread worker to rule out non-thread-safe behavior (see the sketch after this list).
  • When trying to debug with print statements in the compiled models and the Python model, I noticed that sometimes the encoder output seems to have weird values after being transferred to the Python model, but the problem reproduces even when this is not the case.
  • The issue seems to be less reproducible when using a dynamic batcher with a queue delay (see the config sketch below), which leads me to believe it might be related to a race condition in some shared memory between the BLS instances.
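For the second bullet, the decoupled-mode workaround looked roughly like the sketch below (illustrative only; load_model, the tensor names, and the worker count are placeholders, not the actual fairseq model code from the repository):

```python
import concurrent.futures

import triton_python_backend_utils as pb_utils


def load_model():
    """Placeholder for the real model constructor used in the repository."""
    return lambda x: x


class TritonPythonModel:
    def initialize(self, args):
        # Single instance; requests are fanned out to a thread pool instead.
        self._executor = concurrent.futures.ThreadPoolExecutor(max_workers=4)

    def execute(self, requests):
        for request in requests:
            self._executor.submit(self._run, request)
        # Decoupled mode: responses go through the response sender, not the return value.
        return None

    def _run(self, request):
        # Everything is created inside the worker thread to rule out shared,
        # non-thread-safe state.
        model = load_model()
        input_tensor = pb_utils.get_input_tensor_by_name(request, "INPUT0")
        output = model(input_tensor.as_numpy())
        response = pb_utils.InferenceResponse(
            output_tensors=[pb_utils.Tensor("OUTPUT0", output)]
        )
        sender = request.get_response_sender()
        sender.send(response, flags=pb_utils.TRITONSERVER_RESPONSE_COMPLETE_FINAL)

    def finalize(self):
        self._executor.shutdown(wait=True)
```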
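And the dynamic batcher setting mentioned in the last bullet corresponds to a config.pbtxt fragment along these lines (the delay value is an illustrative placeholder):

```
dynamic_batching {
  max_queue_delay_microseconds: 100000   # with a queue delay, the mismatches become much rarer
}
```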