…sStatePreparation` `compute_decomposition` (#4767)
**Context:**
`MottonenStatePreparation` and `BasisStatePreparation` currently fail when decomposing a broadcasted state vector, because different state vectors have different decompositions.
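For illustration, a minimal sketch (using made-up basis states) of why a single static decomposition cannot cover a batch:

```python
import pennylane as qml

# Two different basis states decompose into PauliX gates on different wires,
# so no single op_list can represent both samples of a batch at once.
print(qml.BasisStatePreparation.compute_decomposition([1, 0], wires=[0, 1]))  # PauliX on wire 0
print(qml.BasisStatePreparation.compute_decomposition([0, 1], wires=[0, 1]))  # PauliX on wire 1
```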
**Description of the Change:**
* Raise an error in `MottonenStatePreparation.compute_decomposition` and
`BasisStatePreparation.compute_decomposition` if the `batch_size` is not
`None`, and suggest that users apply `broadcast_expand`.
**Benefits:**
No failures or silently wrong results when decomposing broadcasted `MottonenStatePreparation` or `BasisStatePreparation` operations.
**Possible Drawbacks:**
**Related GitHub Issues:**
#4753
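As a rough sketch of the suggested workaround (assuming a standard QNode on `default.qubit`; the circuit and sizes here are illustrative), `broadcast_expand` splits the broadcasted tape into one tape per state vector before any decomposition happens:

```python
import torch
import pennylane as qml

n_qubits = 3  # illustrative size
dev = qml.device("default.qubit", wires=n_qubits)

@qml.transforms.broadcast_expand  # expand the batch into one tape per sample
@qml.qnode(dev)
def circuit(state):
    qml.MottonenStatePreparation(state, wires=range(n_qubits))
    return qml.expval(qml.PauliZ(0))

states = torch.rand(5, 2**n_qubits)
states = states / torch.linalg.norm(states, dim=1, keepdim=True)  # normalize each row
print(circuit(states))  # one expectation value per sample
```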
Expected behavior
I want to use `MottonenStatePreparation.compute_decomposition` to get an `op_list` for batched tensor data. This seems different from issue #4589.

For example:

```python
import torch
from pennylane import MottonenStatePreparation

n_qubits = 4  # assumed value for illustration
inputs = torch.rand(256, 2**n_qubits)
norms = torch.norm(inputs, dim=1)
data = inputs / torch.stack([norms for i in range(inputs.size(dim=1))], dim=1)  # normalize each row
op_list = MottonenStatePreparation.compute_decomposition(data, wires=range(n_qubits))
```
Actual behavior
```
RuntimeError                              Traceback (most recent call last)
in
----> 1 op_list = MottonenStatePreparation.compute_decomposition(data, wires=range(n_qubits))

~/opt/miniconda3/lib/python3.9/site-packages/pennylane/templates/state_preparations/mottonen.py in compute_decomposition(state_vector, wires)
    359     # Apply inverse y rotation cascade to prepare correct absolute values of amplitudes
    360     for k in range(len(wires_reverse), 0, -1):
--> 361         alpha_y_k = _get_alpha_y(a, len(wires_reverse), k)
    362         control = wires_reverse[k:]
    363         target = wires_reverse[k - 1]

~/opt/miniconda3/lib/python3.9/site-packages/pennylane/templates/state_preparations/mottonen.py in _get_alpha_y(a, n, k)
    206
    207     with np.errstate(divide="ignore", invalid="ignore"):
--> 208         division = numerator / denominator
    209
    210     # Cast the numerator and denominator to ensure compatibility with interfaces

RuntimeError: The size of tensor a (8) must match the size of tensor b (16) at non-singleton dimension 1
```
Additional information
It works fine when the inputs are created with `inputs = np.random.rand(256, 2**n_qubits)`, but it fails when the inputs are a `torch.Tensor`.
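One possible workaround (a sketch, reusing `data` and `n_qubits` from the snippet above) is to decompose each normalized state vector separately, since every sample generally produces its own set of rotation angles:

```python
# Sketch of a per-sample workaround: decompose each row of the batch on its own.
op_lists = [
    MottonenStatePreparation.compute_decomposition(state, wires=range(n_qubits))
    for state in data
]
```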
Source code
No response
Tracebacks
No response
System information
Existing GitHub issues