
[BUG] MottonenStatePreparation.compute_decomposition #4753

Closed
1 task done
Beeeam opened this issue Oct 30, 2023 · 1 comment · Fixed by #4767
Labels
bug 🐛 Something isn't working

Comments


Beeeam commented Oct 30, 2023

Expected behavior

I want to use MottonenStatePreparation.compute_decomposition to get an op_list for batched tensor data. This seems different from issue #4589.
For example:
import torch
from pennylane import MottonenStatePreparation

n_qubits = 4  # example value
inputs = torch.rand(256, 2**n_qubits)
norms = torch.norm(inputs, dim=1)
data = inputs / norms.unsqueeze(1)  # normalize each row to a valid state vector
op_list = MottonenStatePreparation.compute_decomposition(data, wires=range(n_qubits))

Actual behavior


RuntimeError Traceback (most recent call last)
----> 1 op_list = MottonenStatePreparation.compute_decomposition(data, wires=range(n_qubits))

~/opt/miniconda3/lib/python3.9/site-packages/pennylane/templates/state_preparations/mottonen.py in compute_decomposition(state_vector, wires)
359 # Apply inverse y rotation cascade to prepare correct absolute values of amplitudes
360 for k in range(len(wires_reverse), 0, -1):
--> 361 alpha_y_k = _get_alpha_y(a, len(wires_reverse), k)
362 control = wires_reverse[k:]
363 target = wires_reverse[k - 1]

~/opt/miniconda3/lib/python3.9/site-packages/pennylane/templates/state_preparations/mottonen.py in _get_alpha_y(a, n, k)
206
207 with np.errstate(divide="ignore", invalid="ignore"):
--> 208 division = numerator / denominator
209
210 # Cast the numerator and denominator to ensure compatibility with interfaces

RuntimeError: The size of tensor a (8) must match the size of tensor b (16) at non-singleton dimension 1

Additional information

It works when the inputs are inputs = np.random.rand(256, 2**n_qubits), but it fails when the inputs are a torch.Tensor.

Source code

No response

Tracebacks

No response

System information

Name: PennyLane
Version: 0.31.1
Summary: PennyLane is a Python quantum machine learning library by Xanadu Inc.
Home-page: https://github.com/PennyLaneAI/pennylane
Author: 
Author-email: 
License: Apache License 2.0
Location: /Users/beam/opt/miniconda3/lib/python3.9/site-packages
Requires: appdirs, autograd, autoray, cachetools, networkx, numpy, pennylane-lightning, requests, rustworkx, scipy, semantic-version, toml, typing-extensions
Required-by: PennyLane-Lightning, PennyLane-qiskit

Platform info:           macOS-10.16-x86_64-i386-64bit
Python version:          3.9.1
Numpy version:           1.23.5
Scipy version:           1.10.0
Installed devices:
- default.gaussian (PennyLane-0.31.1)
- default.mixed (PennyLane-0.31.1)
- default.qubit (PennyLane-0.31.1)
- default.qubit.autograd (PennyLane-0.31.1)
- default.qubit.jax (PennyLane-0.31.1)
- default.qubit.tf (PennyLane-0.31.1)
- default.qubit.torch (PennyLane-0.31.1)
- default.qutrit (PennyLane-0.31.1)
- null.qubit (PennyLane-0.31.1)
- qiskit.aer (PennyLane-qiskit-0.31.0)
- qiskit.basicaer (PennyLane-qiskit-0.31.0)
- qiskit.ibmq (PennyLane-qiskit-0.31.0)
- qiskit.ibmq.circuit_runner (PennyLane-qiskit-0.31.0)
- qiskit.ibmq.sampler (PennyLane-qiskit-0.31.0)
- qiskit.remote (PennyLane-qiskit-0.31.0)
- lightning.qubit (PennyLane-Lightning-0.31.0)

Existing GitHub issues

  • I have searched existing GitHub issues to make sure the issue does not already exist.
Beeeam added the bug 🐛 Something isn't working label Oct 30, 2023

albi3ro commented Oct 30, 2023

While this is different from #4589, it may be a duplicate of #4460, as it seems to be caused by MottonenStatePrep not supporting a broadcast dimension.
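
One possible workaround (a minimal sketch, assuming the goal is only to obtain the operation lists; n_qubits is an illustrative placeholder) is to decompose each state vector in the batch separately:

import torch
from pennylane import MottonenStatePreparation

n_qubits = 4  # illustrative placeholder
inputs = torch.rand(256, 2**n_qubits)
data = inputs / torch.norm(inputs, dim=1, keepdim=True)  # normalize each row

# Decompose one unbatched state vector at a time, collecting one op_list per example,
# since compute_decomposition does not handle a broadcast dimension here.
op_lists = [
    MottonenStatePreparation.compute_decomposition(state, wires=range(n_qubits))
    for state in data
]

Each entry of op_lists is then the decomposition of a single normalized state vector.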

mudit2812 added a commit that referenced this issue Nov 9, 2023
…sStatePreparation` `compute_decomposition` (#4767)

**Context:**
`MottonenStatePreparation` and `BasisStatePreparation` currently fail when decomposing a broadcasted state vector, because different state vectors have different decompositions.

**Description of the Change:**
* Raise an error in `MottonenStatePreparation.compute_decomposition` and
`BasisStatePreparation.compute_decomposition` if the `batch_size` is not
`None`, and suggest that users use `broadcast_expand` (a usage sketch follows below).

**Benefits:**
No failures or wrong results when decomposing Mottonen or BasisState

**Possible Drawbacks:**

**Related GitHub Issues:**
#4753
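
For executing a circuit on the batched data (rather than only inspecting the decomposition), a minimal usage sketch of the suggested broadcast_expand route, assuming a default.qubit device and an illustrative n_qubits, might look like this:

import pennylane as qml
import torch

n_qubits = 4  # illustrative placeholder
dev = qml.device("default.qubit", wires=n_qubits)

# broadcast_expand splits the broadcasted tape into one tape per state vector,
# so each decomposition only ever sees a single (unbatched) state.
@qml.transforms.broadcast_expand
@qml.qnode(dev)
def circuit(state):
    qml.MottonenStatePreparation(state, wires=range(n_qubits))
    return qml.expval(qml.PauliZ(0))

inputs = torch.rand(256, 2**n_qubits)
data = inputs / torch.norm(inputs, dim=1, keepdim=True)  # normalize each row
results = circuit(data)

With this in place, the broadcasted state is never passed to compute_decomposition as a batch.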