specify dtype for "default.qubit" [BUG] #6086

Open
1 task done
fmozafari opened this issue Aug 8, 2024 · 10 comments

Labels
bug 🐛 Something isn't working

@fmozafari

Expected behavior

Hi,
I have a simple example using the "default.qubit" device, and I would like to be able to specify c_dtype as either np.complex64 or np.complex128.

Actual behavior

I have 2 problems:

  1. I can no longer specify the c_dtype argument as qml.device("default.qubit", wires=2, c_dtype=np.complex64); it raises the error: "TypeError: DefaultQubit.__init__() got an unexpected keyword argument 'c_dtype'".
  2. I tried to manage the dtype by passing params with the required dtype, but when I print the dtype of the results, it is always complex128.
    Could you please let me know how I can solve this?

Additional information

No response

Source code

import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=2)

@qml.qnode(dev)
def circuit(params):
    qml.RX(params[0], wires=0)
    qml.RY(params[1], wires=1)
    qml.CNOT(wires=[0, 1])
    return qml.state()

params = np.array([0.1, 0.2], dtype=np.complex64)

result = circuit(params)
print("Result:", result)
for i, res in enumerate(result):
    print(f"dtype of result[{i}]:", res.dtype)

Tracebacks

No response

System information

Name: PennyLane
Version: 0.37.0
Summary: PennyLane is a cross-platform Python library for quantum computing, quantum machine learning, and quantum chemistry. Train a quantum computer the same way as a neural network.
Home-page: https://github.com/PennyLaneAI/pennylane
Author: 
Author-email: 
License: Apache License 2.0
Location: /home/scratch.fmozafari_ent/mambaforge/envs/py312/lib/python3.12/site-packages
Requires: appdirs, autograd, autoray, cachetools, networkx, numpy, packaging, pennylane-lightning, requests, rustworkx, scipy, semantic-version, toml, typing-extensions
Required-by: PennyLane_Lightning

Platform info:           Linux-5.8.0-53-generic-x86_64-with-glibc2.31
Python version:          3.12.4
Numpy version:           1.26.4
Scipy version:           1.14.0
Installed devices:
- lightning.qubit (PennyLane_Lightning-0.37.0)
- default.clifford (PennyLane-0.37.0)
- default.gaussian (PennyLane-0.37.0)
- default.mixed (PennyLane-0.37.0)
- default.qubit (PennyLane-0.37.0)
- default.qubit.autograd (PennyLane-0.37.0)
- default.qubit.jax (PennyLane-0.37.0)
- default.qubit.legacy (PennyLane-0.37.0)
- default.qubit.tf (PennyLane-0.37.0)
- default.qubit.torch (PennyLane-0.37.0)
- default.qutrit (PennyLane-0.37.0)
- default.qutrit.mixed (PennyLane-0.37.0)
- default.tensor (PennyLane-0.37.0)
- null.qubit (PennyLane-0.37.0)

Existing GitHub issues

  • I have searched existing GitHub issues to make sure the issue does not already exist.
@fmozafari fmozafari added the bug 🐛 Something isn't working label Aug 8, 2024
@trbromley
Contributor

Thanks @fmozafari, yes, my expectation would also be that default.qubit follows the dtype of the input parameters, but that doesn't appear to be the case here, at least for the autograd interface.

We should fix this, but one option you could try now is to use another interface. For example, this code lets you toggle between 32-bit and 64-bit precision:

import pennylane as qml
from jax import numpy as np
from jax import config

config.update("jax_enable_x64", True)  # True -> 64-bit floats (complex128 state); False -> 32-bit (complex64)
dev = qml.device("default.qubit")

@qml.qnode(dev)
def circuit(params):
    qml.RX(params[0], wires=0)
    qml.RY(params[1], wires=1)
    qml.CNOT(wires=[0, 1])
    return qml.state()

params = np.array([0.1, 0.2])
print(circuit(params).dtype)
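
(With jax_enable_x64 set to True as above, the printed dtype is complex128; flipping the flag to False should give complex64, as in the complex64 example further down the thread.)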

@fmozafari
Author

Thanks for the reply! I have a question. I would like to benchmark results for a CPU-based device, and I found that I can use "default.qubit". So, is the JAX backend running on CPU or GPU? Does this depend on whether JAX has been installed with a CUDA-enabled jaxlib?
Moreover, I have a circuit as input, and I iterate over it using tape.operations to convert op.parameters to the required single or double precision. But it doesn't work, even with config.update("jax_enable_x64", False); I always get the complex128 data type.

@albi3ro
Contributor

albi3ro commented Aug 14, 2024

If you have a GPU available and enabled, any backprop simulation should run strictly on the GPU.

As for your second question, do you mind providing a minimal example of what you are doing?
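
(For the CPU-based benchmarking mentioned above: below is a minimal sketch, assuming JAX is installed, of pinning JAX to its CPU backend while keeping single precision. The platform pin is plain JAX configuration, not a PennyLane API, and the circuit is just the one from the original example.)

import pennylane as qml
import jax
from jax import numpy as jnp

# Pin JAX to the CPU backend and keep the default 32-bit precision.
jax.config.update("jax_platform_name", "cpu")  # force CPU even if a GPU is available
jax.config.update("jax_enable_x64", False)     # 32-bit floats -> complex64 state

dev = qml.device("default.qubit", wires=2)

@qml.qnode(dev)
def circuit(params):
    qml.RX(params[0], wires=0)
    qml.RY(params[1], wires=1)
    qml.CNOT(wires=[0, 1])
    return qml.state()

print(circuit(jnp.array([0.1, 0.2])).dtype)  # expected: complex64, computed on CPU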

@fmozafari
Author

fmozafari commented Aug 14, 2024

Then how can I use a CPU backend with single precision?
I do the following to convert the dtype, but it doesn't work properly.

with pennylane.tape.QuantumTape() as tape:
    circuit()

new_operations = []
for op in tape.operations:
    new_params = [self.dtype(param) for param in op.parameters]
    if isinstance(op, pennylane.ControlledQubitUnitary):
        original_matrix = op.data[0]
        converted_matrix = np.array(original_matrix, dtype=self.dtype)
        new_op = pennylane.ControlledQubitUnitary(
            converted_matrix, control_wires=op.base.control_wires, wires=op.base.wires
        )
    else:
        new_op = op.__class__(*new_params, wires=op.wires)

    new_operations.append(new_op)

def new_circuit():
    for op in new_operations:
        pennylane.apply(op)
    return pennylane.state()

return new_circuit
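
(Note: as mentioned in the first reply, in this version default.qubit does not follow the parameter dtype under the autograd/NumPy interface, so casting op.parameters alone still yields a complex128 state; the single precision has to be carried by a JAX array with jax_enable_x64 disabled, as in the examples above.)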

@CatalinaAlbornoz
Contributor

Hi @fmozafari ,
When you mention that it doesn't work properly, do you mean you get an error? If so, could you please provide the full error traceback?

@fmozafari
Author

No, I always get double precision. It cannot convert the data type to single precision.

@CatalinaAlbornoz
Contributor

Thanks for clarifying @fmozafari.

Let me run some tests and get back to you on this.

@CatalinaAlbornoz
Contributor

Hi @fmozafari ,

Going back to your original example, I tested this on Google Colab, both on CPU and GPU, and it works. You can test it yourself and see that the output is complex64. My guess is that you're mixing PennyLane NumPy or vanilla NumPy with JAX NumPy. I would recommend importing JAX NumPy as jnp in order to avoid issues where they get mixed up. Let me know if this solves your problem!

import pennylane as qml
from jax import numpy as jnp
from jax import config

config.update("jax_enable_x64", False)
dev = qml.device("default.qubit", wires=2)

@qml.qnode(dev)
def circuit(params):
    qml.RX(params[0], wires=0)
    qml.RY(params[1], wires=1)
    qml.CNOT(wires=[0, 1])
    return qml.state()

params = jnp.array([0.1, 0.2])
result = circuit(params)
print(result.dtype)
print("Result:", result)
for i, res in enumerate(result):
    print(f"dtype of result[{i}]:", res.dtype)

@fmozafari
Author

Hi, I don't have any problem with the first example. The second example, for which I provided the code, doesn't work with complex64.

@CatalinaAlbornoz
Contributor

Hi @fmozafari ,
If you run my code but with ControlledQubitUnitary, does it fail for you? I'm just not sure why you need to go all the way to modifying the tape.
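
(For reference, a minimal sketch of that check, assuming the same JAX setup as above; the 2x2 unitary here is just an illustrative example, and the control_wires signature follows the one used in the snippet earlier in the thread.)

import pennylane as qml
from jax import numpy as jnp
from jax import config

config.update("jax_enable_x64", False)
dev = qml.device("default.qubit", wires=2)

# An arbitrary single-qubit unitary supplied in single precision.
U = jnp.array([[0, 1], [1, 0]], dtype=jnp.complex64)

@qml.qnode(dev)
def circuit(params):
    qml.RX(params[0], wires=0)
    qml.ControlledQubitUnitary(U, control_wires=[0], wires=1)
    return qml.state()

print(circuit(jnp.array([0.1])).dtype)  # expected: complex64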
