
fix webgpu precision errors #22718

Open. Wants to merge 6 commits into main.
Conversation

@prathikr (Contributor) commented Nov 4, 2024

Description

Adds explicit casts to silence CI errors related to the WebGPU kernels.

Motivation and Context

ORT Inf/Core DRI

@@ -116,7 +116,7 @@ Status SkipLayerNorm<simplified>::ComputeInternal(onnxruntime::webgpu::ComputeCo
auto* output = context.Output(0, x_shape);
auto* input_skip_bias_sum = context.Output(3, x_shape);

size_t data_size = x_shape.Size();
Contributor:

Trying to understand the issue: is the fix for Android/32-bit?

@guschmue guschmue added the ep:WebGPU ort-web webgpu provider label Nov 6, 2024
@@ -85,7 +85,7 @@ class SimpleCacheManager : public IBufferCacheManager {

   void OnRefresh() override {
     for (auto& buffer : pending_buffers_) {
-      buffers_[wgpuBufferGetSize(buffer)].push_back(buffer);
+      buffers_[static_cast<unsigned int>(wgpuBufferGetSize(buffer))].push_back(buffer);
Contributor:

wgpuBufferGetSize() returns uint64_t; casting to size_t should work.

Contributor:

Same for the other occurrence.

Labels
ep:WebGPU ort-web webgpu provider
3 participants