Describe the issue
When running the model, ONNX Runtime removes initializers and NodeArgs that are no longer used. For example:
Removing initializer 'onnx_Mul_818'. It is no longer used by any node.
Removing NodeArg 'Transpose_token_428_out0'. It is no longer used by any node.
However, the QNN EP also seems to remove some inputs that are still needed. This results in error messages like these:
Input name not exist: network_0_network_0_0_mlp_fc1_Conv.weight_dq at location qnn_model_wrapper.cc:138 onnxruntime::qnn::QnnModelWrapper::CreateQnnInputOutputTensors
Input name not exist: network_0_network_0_1_mlp_fc1_Conv.weight_dq at location qnn_model_wrapper.cc:138 onnxruntime::qnn::QnnModelWrapper::CreateQnnInputOutputTensors
That then leads to a crash in QnnHtpPrepare.
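One way to confirm the connection between the removal messages and the later errors is to cross-reference the two sets of names in the ORT log. Below is a minimal stdlib sketch; the log-line formats are copied from the messages above, but the helper name and the sample log are illustrative (the real log may not print an explicit "Removing" line for these weights):

```python
import re

def removed_but_required(log_text):
    """Return names ORT reports as removed that it later reports as missing."""
    removed = set(re.findall(r"Removing (?:initializer|NodeArg) '([^']+)'", log_text))
    missing = set(re.findall(r"Input name not exist: (\S+) at location", log_text))
    return removed & missing

# Illustrative sample log stitched together from the messages quoted above.
sample_log = """\
Removing initializer 'onnx_Mul_818'. It is no longer used by any node.
Removing NodeArg 'network_0_network_0_0_mlp_fc1_Conv.weight_dq'. It is no longer used by any node.
Input name not exist: network_0_network_0_0_mlp_fc1_Conv.weight_dq at location qnn_model_wrapper.cc:138
"""
print(removed_but_required(sample_log))
# -> {'network_0_network_0_0_mlp_fc1_Conv.weight_dq'}
```

Running this over the full log from a failing session should show whether every missing input was previously removed by the cleanup pass.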
If I run the same model on the CPU EP, it works.
If I run the same model with the DirectML EP, it also works. I noticed the following message when using the DML EP:
Add MemcpyFromHost after network_0_network_0_0_mlp_fc1_Conv.weight_dq for DmlExecutionProvider
Note that this is one of the names reported as missing when running with the QNN EP.
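That overlap can be checked mechanically from the two logs. A minimal stdlib sketch, assuming the log lines keep the formats quoted above (the variable names and inline sample lines are illustrative):

```python
import re

# Illustrative single-line samples; in practice these would be the full logs.
dml_log = ("Add MemcpyFromHost after network_0_network_0_0_mlp_fc1_Conv.weight_dq "
           "for DmlExecutionProvider")
qnn_log = ("Input name not exist: network_0_network_0_0_mlp_fc1_Conv.weight_dq "
           "at location qnn_model_wrapper.cc:138")

# Names DML inserts a host copy for, and names QNN later cannot find.
dml_names = set(re.findall(r"Add MemcpyFromHost after (\S+) for", dml_log))
qnn_missing = set(re.findall(r"Input name not exist: (\S+) at location", qnn_log))

print(dml_names & qnn_missing)
# -> {'network_0_network_0_0_mlp_fc1_Conv.weight_dq'}
```

A non-empty intersection would support the observation that the names QNN loses are exactly the ones DML keeps alive via explicit host copies.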
To reproduce
Run the model found in this AI Hub job: https://app.aihub.qualcomm.com/jobs/jgdxeoy6p with the QNN EP.
(Note that this job is only accessible to members of the MSFT QNN EP team.)
Urgency
No response
Platform
Windows
OS Version
Windows 11
ONNX Runtime Installation
Built from Source
ONNX Runtime Version or Commit ID
1.19.2
ONNX Runtime API
C++
Architecture
ARM64
Execution Provider
Other / Unknown
Execution Provider Library Version
QNN EP
john-dance changed the title from "QNN EP removes needed nodes from graph" to "QNN EP removes needed names from graph - leading to crash in QnnHtpPrepare" on Nov 7, 2024.
- The specific model you are using (if different from the one linked).
- Any modifications made to the model before running it with the QNN EP.
- The complete log output when the error occurs.