Test SDXL MLPerf inference on AMD GPU with ROCm for SCC'24 #300
Need to provide a working configuration.
Hi @arjunsuresh. Which AMD GPU and ROCm version did you use to test this workflow? I would like to give it a try. Thanks a lot!
Hi @gfursin. I'm not sure of the exact GPU name, as the testing was done by the AMD team, but any AMD GPU that works with ROCm should be enough. We used ROCm 6.2; the driver needs to be installed manually, and the rest of the dependencies should be picked up by CM. We also have the SCC24 GitHub Action, and we can add "rocm" there as well if we have a machine for it: https://github.com/mlcommons/cm4mlops/blob/main/.github/workflows/test-scc24-sdxl.yaml#L17
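Before launching the workflow on a ROCm machine, a quick sanity check can confirm that the installed PyTorch is actually a ROCm (HIP) build and can see the GPU (a sketch, not part of the CM workflow; `rocm_status` is a hypothetical helper name; ROCm builds of PyTorch expose `torch.version.hip` and surface GPUs through the CUDA device API):

```python
def rocm_status():
    """Report whether the current PyTorch install can use a ROCm GPU.

    Hypothetical helper for pre-flight checking; returns a short
    human-readable string rather than raising.
    """
    try:
        import torch
    except ImportError:
        return "pytorch not installed"
    # torch.version.hip is set only on ROCm builds of PyTorch
    hip = getattr(torch.version, "hip", None)
    if hip is None:
        return "pytorch build has no ROCm/HIP support"
    # ROCm devices are exposed through the CUDA-compatible device API
    if not torch.cuda.is_available():
        return f"ROCm {hip} build, but no GPU visible"
    return f"ROCm {hip}, {torch.cuda.device_count()} GPU(s) visible"

if __name__ == "__main__":
    print(rocm_status())
```

If this reports no visible GPU, the ROCm driver install (the one manual step mentioned above) is the first thing to recheck before debugging the benchmark itself.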
I just tried to run the benchmark on an AMD MI300X with ROCm 6.2 and PyTorch 2.6. It resolved all dependencies but failed in loadgen. Please see mlcommons/cm4mlperf-inference#48.
Fixed it. Works now.
https://docs.mlcommons.org/inference/benchmarks/text_to_image/reproducibility/scc24