
[BUG] TorchLayer does not work correctly with broadcasting and tuple returns #5762

Closed
1 task done
dwierichs opened this issue May 29, 2024 · 1 comment
Labels
bug 🐛 Something isn't working

Comments

@dwierichs
Contributor

Expected behavior

The code below runs.

Actual behavior

It errors out because of invalid internal reshaping.

Additional information

If we return a list from the QNode instead, the example works fine.

Source code

import numpy as np
import pennylane as qml
import torch

n_qubits = 2
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def qnode(inputs, weights):
    qml.templates.AngleEmbedding(inputs, wires=range(n_qubits))
    qml.templates.StronglyEntanglingLayers(weights, wires=range(n_qubits))
    return qml.expval(qml.Z(0)), qml.expval(qml.Z(1))

weight_shapes = {"weights": (3, n_qubits, 3)}

qlayer = qml.qnn.TorchLayer(qnode, weight_shapes)
x = torch.tensor(np.random.random((5, 2))) # Batched inputs with batch dim 5 for 2 qubits
qlayer.forward(x)

Tracebacks

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
Cell In[3], line 18
     16 qlayer = qml.qnn.TorchLayer(qnode, weight_shapes)
     17 x = torch.tensor(np.random.random((5, 2))) # Batched inputs with batch dim 5 for 2 qubits
---> 18 qlayer.forward(x)

File ~/repos/pennylane/pennylane/qnn/torch.py:402, in TorchLayer.forward(self, inputs)
    399     inputs = torch.reshape(inputs, (-1, inputs.shape[-1]))
    401 # calculate the forward pass as usual
--> 402 results = self._evaluate_qnode(inputs)
    404 if isinstance(results, tuple):
    405     if has_batch_dim:

File ~/repos/pennylane/pennylane/qnn/torch.py:439, in TorchLayer._evaluate_qnode(self, x)
    436     return torch.hstack(_res).type(x.dtype)
    438 if isinstance(res, tuple) and len(res) > 1:
--> 439     return tuple(_combine_dimensions(r) for r in res)
    441 return _combine_dimensions(res)

File ~/repos/pennylane/pennylane/qnn/torch.py:439, in <genexpr>(.0)
    436     return torch.hstack(_res).type(x.dtype)
    438 if isinstance(res, tuple) and len(res) > 1:
--> 439     return tuple(_combine_dimensions(r) for r in res)
    441 return _combine_dimensions(res)

File ~/repos/pennylane/pennylane/qnn/torch.py:435, in TorchLayer._evaluate_qnode.<locals>._combine_dimensions(_res)
    433 def _combine_dimensions(_res):
    434     if len(x.shape) > 1:
--> 435         _res = [torch.reshape(r, (x.shape[0], -1)) for r in _res]
    436     return torch.hstack(_res).type(x.dtype)

File ~/repos/pennylane/pennylane/qnn/torch.py:435, in <listcomp>(.0)
    433 def _combine_dimensions(_res):
    434     if len(x.shape) > 1:
--> 435         _res = [torch.reshape(r, (x.shape[0], -1)) for r in _res]
    436     return torch.hstack(_res).type(x.dtype)

RuntimeError: shape '[5, -1]' is invalid for input of size 1
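The failing reshape can be reproduced in isolation (NumPy stands in for torch here, so this is a mimic of the behavior rather than the actual `TorchLayer` code): iterating over a bare 1-D result tensor yields size-1 scalars, and a scalar cannot be reshaped to `(5, -1)`.

```python
import numpy as np

batch_size = 5
res = np.random.random(batch_size)  # one expectation value per batch entry

# The internal helper iterates over its argument. For a list of tensors this
# yields full (5,)-shaped results, but iterating a bare 1-D tensor yields
# 0-d scalars of size 1, which cannot be reshaped to (batch_size, -1):
scalar = next(iter(res))
try:
    np.reshape(scalar, (batch_size, -1))
except ValueError as exc:
    print(type(exc).__name__)  # NumPy's analogue of the RuntimeError above
```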

System information

pl dev

Existing GitHub issues

  • I have searched existing GitHub issues to make sure the issue does not already exist.
@dwierichs dwierichs added the bug 🐛 Something isn't working label May 29, 2024
@dwierichs
Contributor Author

dwierichs commented May 29, 2024

Apologies, I overlooked #5704, which likely is the same issue.
Context: The TorchLayer docstring contains the above example (in a less reduced version).

Related PR that introduced shot batching support and may be interfering with tuple return types: #5492

PietropaoloFrisoni added a commit that referenced this issue Jun 12, 2024
…uple` returns (#5816)

**Context:** This bug was caught using the following code:

```python
import numpy as np
import pennylane as qml
import torch

n_qubits = 2
dev = qml.device("default.qubit", wires=n_qubits)


@qml.qnode(dev)
def qnode(inputs, weights):
    qml.templates.AngleEmbedding(inputs, wires=range(n_qubits))
    qml.templates.StronglyEntanglingLayers(weights, wires=range(n_qubits))
    return qml.expval(qml.Z(0)), qml.expval(qml.Z(1))


weight_shapes = {"weights": [3, n_qubits, 3]}

qlayer = qml.qnn.TorchLayer(qnode, weight_shapes)
x = torch.tensor(np.random.random((5, 2)))  # Batched inputs with batch dim 5 for 2 qubits


qlayer.forward(x)
```

The `forward` function in `pennylane/qnn/torch.py` calls the
`_evaluate_qnode` function under the hood. The issue is that
`_evaluate_qnode` only handles tuples whose elements are lists of
`torch.Tensor`. In the code above, it is instead called with a tuple of
bare `torch.Tensor`s, which causes the error.
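The two call shapes can be contrasted with a small sketch. `combine_dimensions` below is a NumPy mimic of the internal `_combine_dimensions` helper (the real one uses `torch.reshape`/`torch.hstack`); it assumes each tuple element is a *list* of per-measurement results:

```python
import numpy as np

batch_size = 5

def combine_dimensions(_res):
    # NumPy mimic of the internal helper: it assumes _res is a list
    # of result tensors and stacks them along the feature axis.
    _res = [np.reshape(r, (batch_size, -1)) for r in _res]
    return np.hstack(_res)

# Tuple of *lists* of results: each list element reshapes cleanly.
ok = tuple(
    combine_dimensions(r)
    for r in ([np.zeros(batch_size)], [np.zeros(batch_size)])
)
print([o.shape for o in ok])  # [(5, 1), (5, 1)]

# Tuple of *bare* results: iterating a (5,)-shaped array yields
# size-1 scalars, and reshaping a scalar to (5, -1) fails.
try:
    tuple(
        combine_dimensions(r)
        for r in (np.zeros(batch_size), np.zeros(batch_size))
    )
except ValueError as exc:
    print("fails:", exc)
```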

**Description of the Change:** We intercept such a case to temporarily
transform a tuple of `torch.Tensor` into a tuple of lists so that the
code works as originally expected. Obviously, this is not the only
possible solution to the problem. Still, it is a non-invasive way to
solve the problem without changing the original intended tuples workflow
(which is beyond the scope of this PR).
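The interception described above might look roughly like the following (a hypothetical sketch with NumPy standing in for torch; the condition and names are assumptions, not the actual PR diff): a tuple whose elements are bare result tensors is temporarily wrapped into a tuple of single-element lists, the shape the downstream code expects.

```python
import numpy as np

res = (np.zeros(5), np.zeros(5))  # tuple of bare results: the failing case

# Sketch of the interception: wrap each bare measurement result in a
# one-item list so the tuple matches the expected tuple-of-lists layout.
if isinstance(res, tuple) and not any(isinstance(r, list) for r in res):
    res = tuple([r] for r in res)

print([type(r).__name__ for r in res])  # ['list', 'list']
```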

**Benefits:** The above code no longer throws an error.

**Possible Drawbacks:** None that I can think of, since this PR does not
change the original workflow; it only intercepts a specific case that
can occur.

**Related GitHub Issues:** #5762 

**Related Shortcut Stories:** [sc-64283]