specify dtype for "default.qubit" [BUG] #6086
Thanks @fmozafari, yes, my expectation would also be that this should work. We should fix it, but one option you could try now is to use another interface. For example, this code allows you to toggle between 32- and 64-bit precision:

```python
import pennylane as qml
from jax import numpy as np
from jax import config

# Enable 64-bit mode in JAX; leave this out (or set False) for single precision.
config.update("jax_enable_x64", True)

dev = qml.device("default.qubit")

@qml.qnode(dev)
def circuit(params):
    qml.RX(params[0], wires=0)
    qml.RY(params[1], wires=1)
    qml.CNOT(wires=[0, 1])
    return qml.state()

params = np.array([0.1, 0.2])
circuit(params).dtype
```
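If you leave x64 disabled (JAX's default), the same circuit should come back in single precision. Here is a minimal sketch of that case, assuming the backprop pipeline follows JAX's 32-bit defaults as described above:

```python
import pennylane as qml
from jax import numpy as jnp

# With jax_enable_x64 left at its default (False), JAX arrays are 32-bit,
# so the simulated state should come back as complex64.
dev = qml.device("default.qubit")

@qml.qnode(dev)
def circuit(params):
    qml.RX(params[0], wires=0)
    qml.RY(params[1], wires=1)
    qml.CNOT(wires=[0, 1])
    return qml.state()

params = jnp.array([0.1, 0.2])
print(circuit(params).dtype)  # expected: complex64
```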
Thanks for the reply! I have a question. I would like to benchmark results on a CPU-based device, and I found that I can use "default.qubit" for this. So, is the JAX backend running on CPU or GPU? Does this depend on whether JAX was installed with a CUDA-enabled jaxlib?
If you have the GPU available and enabled, any backprop simulation should run strictly on the GPU. As for your second question, do you mind providing a minimal example of what you are doing?
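If you want to keep JAX on the CPU even when a CUDA-enabled jaxlib is installed, one option (a sketch; the exact flag name has changed across JAX versions) is to select the platform before running anything:

```python
import jax

# Ask JAX to use the CPU backend for this process. Setting the environment
# variable JAX_PLATFORMS=cpu before importing jax has the same effect on
# recent versions.
jax.config.update("jax_platform_name", "cpu")

print(jax.devices())  # should list only CPU devices
```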
Then how can I get a CPU backend with single precision?
Hi @fmozafari,
No, I always get double precision. It cannot convert the data type to single precision.
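A quick way to check whether 64-bit mode is actually off in the running process (a small diagnostic sketch, not from the thread) is to look at the default dtype of a fresh JAX array:

```python
import jax.numpy as jnp

# float32 means x64 is disabled (single precision); float64 means it was
# enabled, e.g. by config.update("jax_enable_x64", True) or the
# JAX_ENABLE_X64 environment variable.
print(jnp.array(1.0).dtype)
```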
Thanks for clarifying, @fmozafari. Let me run some tests and get back to you on this.
Hi @fmozafari, going back to your original example, I tested this on Google Colab, both on CPU and GPU, and it works. You can test it for yourself and check the output.
Hi, I don't have any problem with the first example. It is the second example I provided whose code doesn't work with complex64.
Hi @fmozafari,
Expected behavior
Hi,
I have a simple example using the "default.qubit" device, and I would like to specify c_dtype to be either np.complex64 or np.complex128.
Actual behavior
I have two problems:

```python
qml.device("default.qubit", wires=2, c_dtype=np.complex64)
```

and I get the error:

```
TypeError: DefaultQubit.__init__() got an unexpected keyword argument 'c_dtype'
```

Could you please let me know how I can solve this?
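One way to confirm which keyword arguments the device constructor accepts (a diagnostic sketch; it assumes the new device class is exposed as qml.devices.DefaultQubit) is to inspect its signature:

```python
import inspect
import pennylane as qml

# Print the constructor signature of the new default.qubit device class.
# In PennyLane 0.37 this lists arguments such as wires, shots, and seed,
# but no c_dtype, which is consistent with the TypeError above.
print(inspect.signature(qml.devices.DefaultQubit.__init__))
```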
Additional information
No response
Source code
Tracebacks
No response
System information
```
Name: PennyLane
Version: 0.37.0
Summary: PennyLane is a cross-platform Python library for quantum computing, quantum machine learning, and quantum chemistry. Train a quantum computer the same way as a neural network.
Home-page: https://github.com/PennyLaneAI/pennylane
Author:
Author-email:
License: Apache License 2.0
Location: /home/scratch.fmozafari_ent/mambaforge/envs/py312/lib/python3.12/site-packages
Requires: appdirs, autograd, autoray, cachetools, networkx, numpy, packaging, pennylane-lightning, requests, rustworkx, scipy, semantic-version, toml, typing-extensions
Required-by: PennyLane_Lightning

Platform info:    Linux-5.8.0-53-generic-x86_64-with-glibc2.31
Python version:   3.12.4
Numpy version:    1.26.4
Scipy version:    1.14.0
Installed devices:
- lightning.qubit (PennyLane_Lightning-0.37.0)
- default.clifford (PennyLane-0.37.0)
- default.gaussian (PennyLane-0.37.0)
- default.mixed (PennyLane-0.37.0)
- default.qubit (PennyLane-0.37.0)
- default.qubit.autograd (PennyLane-0.37.0)
- default.qubit.jax (PennyLane-0.37.0)
- default.qubit.legacy (PennyLane-0.37.0)
- default.qubit.tf (PennyLane-0.37.0)
- default.qubit.torch (PennyLane-0.37.0)
- default.qutrit (PennyLane-0.37.0)
- default.qutrit.mixed (PennyLane-0.37.0)
- default.tensor (PennyLane-0.37.0)
- null.qubit (PennyLane-0.37.0)
```
Existing GitHub issues