I think the cause is that metric_tensor only really works with a single trainable weights argument. You can even see that the example has "# , extra_weight):" commented out.
If this is too hard to fix (which it might be), then at the very least the documentation for metric_tensor and QNGOptimizer should mention that only a single trainable parameters argument is supported, although I think that argument can have several dimensions.
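For reference, here is a minimal sketch of the single-trainable-argument case that metric_tensor does handle (the circuit, device and weights below are illustrative, not from the bug report; standard PennyLane API):

import pennylane as qml
from pennylane import numpy as pnp

dev_single = qml.device('default.qubit', wires=2)

@qml.qnode(dev_single)
def circuit_single(weights):
    # A single trainable argument and no separate data input
    qml.RX(weights[0], wires=0)
    qml.RY(weights[1], wires=1)
    qml.CNOT(wires=[0, 1])
    return qml.expval(qml.PauliZ(0))

weights = pnp.array([0.1, 0.2], requires_grad=True)
# Block-diagonal approximation, so no auxiliary wire is needed
print(qml.metric_tensor(circuit_single, approx='block-diag')(weights))  # 2x2 metric tensor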
Source code
import pennylane as qml
from pennylane import numpy as pnp

# Device
n_qubits = 2
# We create a device with one extra wire because we need an auxiliary wire when using QNGO
dev = qml.device('default.qubit', wires=n_qubits + 1)

# QNode
diff_method = 'backprop'

@qml.qnode(dev, diff_method=diff_method)
def circuit(inputs, params):
    # Data embedding
    qml.RX(inputs[0], wires=0)
    qml.RX(inputs[1], wires=1)
    # Parametrized layer
    qml.Rot(params[0], params[1], params[2], wires=0)
    qml.Hadamard(wires=0)
    qml.CNOT(wires=[0, 1])
    # Measurement
    return qml.expval(qml.Z(0))

# Initial value of the data and parameters
data = pnp.array([0., 1.], requires_grad=False)
params = pnp.array([1., 2., 3.], requires_grad=True)

# Initial value of the circuit
print(circuit(data, params))

# Cost function
def cost_f(inputs, params):
    return pnp.abs(circuit(inputs, params))

# Optimizer
opt = qml.QNGOptimizer()

# If we're using QNGO we need to define a metric tensor function
mt_fn = qml.metric_tensor(circuit)
print(mt_fn(data, params))

# Optimization loop
for it in range(10):
    stuff = opt.step(cost_f, data, params, metric_tensor_fn=mt_fn)
    print(stuff)
    print('Cost: ', cost_f(data, params))
Tracebacks
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-38-393e103a7541> in <cell line: 42>()
     41 # Optimization loop
     42 for it in range(10):
---> 43     stuff = opt.step(cost_f,data,params,metric_tensor_fn=mt_fn)
     44     print(stuff)
     45     print('Cost: ', cost_f(data,params))

3 frames

/usr/local/lib/python3.10/dist-packages/pennylane/optimize/qng.py in step(self, qnode, grad_fn, recompute_tensor, metric_tensor_fn, *args, **kwargs)
    251             array: the new variable values :math:`x^{(t+1)}`
    252         """
--> 253         new_args, _ = self.step_and_cost(
    254             qnode,
    255             *args,

/usr/local/lib/python3.10/dist-packages/pennylane/optimize/qng.py in step_and_cost(self, qnode, grad_fn, recompute_tensor, metric_tensor_fn, *args, **kwargs)
    201
    202         g, forward = self.compute_grad(qnode, args, kwargs, grad_fn=grad_fn)
--> 203         new_args = pnp.array(self.apply_grad(g, args), requires_grad=True)
    204
    205         if forward is None:

/usr/local/lib/python3.10/dist-packages/pennylane/optimize/qng.py in apply_grad(self, grad, args)
    275         grad_flat = pnp.array(list(_flatten(grad)))
    276         x_flat = pnp.array(list(_flatten(args)))
--> 277         x_new_flat = x_flat - self.stepsize * pnp.linalg.solve(self.metric_tensor, grad_flat)
    278         return unflatten(x_new_flat, args)

/usr/local/lib/python3.10/dist-packages/pennylane/numpy/tensor.py in __array_ufunc__(self, ufunc, method, *inputs, **kwargs)
    153         # call the ndarray.__array_ufunc__ method to compute the result
    154         # of the vectorized ufunc
--> 155         res = super().__array_ufunc__(ufunc, method, *args, **kwargs)
    156
    157         if isinstance(res, Operator):

ValueError: operands could not be broadcast together with shapes (5,) (3,)
The core issue here is that QNGOptimizer.apply_grad expects all positional arguments to be trainable and updates all of them.
This means that one could provide the non-trainable data as a keyword argument instead, but that is arguably a limitation.
Using the logic from GradientDescentOptimizer.apply_grad in QNGOptimizer makes the code above work, even when the trainable argument has a more complicated shape. Multiple trainable arguments will likely still not work as expected. Related: #1991
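For concreteness, here is a rough sketch (not the actual implementation; the _flatten/unflatten import path and the structure of grad are assumptions) of what an apply_grad for QNGOptimizer modelled on GradientDescentOptimizer.apply_grad could look like, updating only the trainable positional arguments:

# Sketch only: mirrors the structure of GradientDescentOptimizer.apply_grad,
# but applies the natural-gradient step to trainable arguments only and
# passes non-trainable ones (such as `data` above) through unchanged.
# Assumptions: _flatten/unflatten live in pennylane.utils (as used in qng.py),
# and `grad` is a tuple with one entry per trainable positional argument.
from pennylane import numpy as pnp
from pennylane.utils import _flatten, unflatten

def apply_grad(self, grad, args):
    args_new = list(args)
    trained_index = 0
    for index, arg in enumerate(args):
        if getattr(arg, "requires_grad", False):
            grad_flat = pnp.array(list(_flatten(grad[trained_index])))
            trained_index += 1
            x_flat = pnp.array(list(_flatten(arg)))
            # Natural-gradient update: precondition the gradient with the metric tensor
            x_new_flat = x_flat - self.stepsize * pnp.linalg.solve(self.metric_tensor, grad_flat)
            args_new[index] = unflatten(x_new_flat, arg)
    return tuple(args_new)

With something along these lines, data stays untouched while params receives the natural-gradient step, so the (5,) versus (3,) shape mismatch in the traceback does not arise.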
Expected behavior
I expect to be able to run QNGOptimizer on a cost function that takes both non-trainable data and trainable parameters.
Actual behavior
I get ValueError: operands could not be broadcast together with shapes (5,) (3,) when I run the optimization loop: the flattened (data, params) arguments have 2 + 3 = 5 entries, but the gradient only covers the 3 trainable parameters.
Additional information
This originated from this user question.
System information
Name: PennyLane
Version: 0.36.0
Summary: PennyLane is a cross-platform Python library for quantum computing, quantum machine learning, and quantum chemistry. Train a quantum computer the same way as a neural network.
Home-page: https://github.com/PennyLaneAI/pennylane
Author:
Author-email:
License: Apache License 2.0
Location: /usr/local/lib/python3.10/dist-packages
Requires: appdirs, autograd, autoray, cachetools, networkx, numpy, pennylane-lightning, requests, rustworkx, scipy, semantic-version, toml, typing-extensions
Required-by: PennyLane_Lightning

Platform info:           Linux-6.1.85+-x86_64-with-glibc2.35
Python version:          3.10.12
Numpy version:           1.25.2
Scipy version:           1.11.4
Installed devices:
- default.clifford (PennyLane-0.36.0)
- default.gaussian (PennyLane-0.36.0)
- default.mixed (PennyLane-0.36.0)
- default.qubit (PennyLane-0.36.0)
- default.qubit.autograd (PennyLane-0.36.0)
- default.qubit.jax (PennyLane-0.36.0)
- default.qubit.legacy (PennyLane-0.36.0)
- default.qubit.tf (PennyLane-0.36.0)
- default.qubit.torch (PennyLane-0.36.0)
- default.qutrit (PennyLane-0.36.0)
- default.qutrit.mixed (PennyLane-0.36.0)
- null.qubit (PennyLane-0.36.0)
- lightning.qubit (PennyLane_Lightning-0.36.0)
Existing GitHub issues