Update docstring examples of some optimizers. (#6303)
Updates several examples in optimizer docstrings to
- use `pnp` for PennyLane's `numpy` version,
- import TensorFlow explicitly and use `tf.constant`/`tf.Variable` with
  the TensorFlow interface, instead of
  `np.array(...requires_grad=False/True)`, and
- use `adjoint_metric_tensor` correctly, namely without passing a device
  as the second argument.


[sc-74479]
dwierichs authored Oct 1, 2024
1 parent 6b340fa commit 8f91b8a
Showing 5 changed files with 20 additions and 16 deletions.
3 changes: 3 additions & 0 deletions doc/releases/changelog-dev.md
@@ -162,6 +162,9 @@

<h3>Documentation 📝</h3>

+* Fixed examples in the documentation of a few optimizers.
+  [(#6303)](https://github.com/PennyLaneAI/pennylane/pull/6303)
+
* Corrected examples in the documentation of `qml.jacobian`.
[(#6283)](https://github.com/PennyLaneAI/pennylane/pull/6283)

4 changes: 2 additions & 2 deletions pennylane/optimize/qng.py
@@ -116,7 +116,7 @@ class QNGOptimizer(GradientDescentOptimizer):
optimizer's :meth:`~.step` function:
>>> eta = 0.01
->>> init_params = np.array([0.011, 0.012])
+>>> init_params = pnp.array([0.011, 0.012])
>>> opt = qml.QNGOptimizer(eta)
>>> theta_new = opt.step(circuit, init_params)
>>> theta_new
@@ -126,7 +126,7 @@ class QNGOptimizer(GradientDescentOptimizer):
via the ``metric_tensor_fn`` keyword argument. For example, we can provide a function
to calculate the metric tensor via the adjoint method.
->>> adj_metric_tensor = qml.adjoint_metric_tensor(circuit, circuit.device)
+>>> adj_metric_tensor = qml.adjoint_metric_tensor(circuit)
>>> opt.step(circuit, init_params, metric_tensor_fn=adj_metric_tensor)
tensor([ 0.01100528, -0.02799954], requires_grad=True)
18 changes: 9 additions & 9 deletions pennylane/optimize/rotosolve.py
@@ -236,12 +236,12 @@ def cost_function(rot_param, layer_par, crot_param, rot_weights=None, crot_weigh
.. code-block :: python
init_param = (
-    np.array([0.3, 0.2, 0.67], requires_grad=True),
-    np.array(1.1, requires_grad=True),
-    np.array([-0.2, 0.1, -2.5], requires_grad=True),
+    pnp.array([0.3, 0.2, 0.67], requires_grad=True),
+    pnp.array(1.1, requires_grad=True),
+    pnp.array([-0.2, 0.1, -2.5], requires_grad=True),
)
-rot_weights = np.ones(3)
-crot_weights = np.ones(3)
+rot_weights = pnp.ones(3)
+crot_weights = pnp.ones(3)
nums_frequency = {
"rot_param": {(0,): 1, (1,): 1, (2,): 1},
@@ -271,7 +271,7 @@ def cost_function(rot_param, layer_par, crot_param, rot_weights=None, crot_weigh
... crot_weights=crot_weights,
... )
... print(f"Cost before step: {cost}")
-... print(f"Minimization substeps: {np.round(sub_cost, 6)}")
+... print(f"Minimization substeps: {pnp.round(sub_cost, 6)}")
... cost_rotosolve.extend(sub_cost)
Cost before step: 0.04200821039253547
Minimization substeps: [-0.230905 -0.863336 -0.980072 -0.980072 -1. -1. -1. ]
@@ -290,8 +290,8 @@ def cost_function(rot_param, layer_par, crot_param, rot_weights=None, crot_weigh
but their concrete values. For the example QNode above, this happens if the
weights are no longer one:
->>> rot_weights = np.array([0.4, 0.8, 1.2], requires_grad=False)
->>> crot_weights = np.array([0.5, 1.0, 1.5], requires_grad=False)
+>>> rot_weights = pnp.array([0.4, 0.8, 1.2], requires_grad=False)
+>>> crot_weights = pnp.array([0.5, 1.0, 1.5], requires_grad=False)
>>> spectrum_fn = qml.fourier.qnode_spectrum(cost_function)
>>> spectra = spectrum_fn(*param, rot_weights=rot_weights, crot_weights=crot_weights)
>>> spectra["rot_param"]
@@ -313,7 +313,7 @@ def cost_function(rot_param, layer_par, crot_param, rot_weights=None, crot_weigh
... crot_weights = crot_weights,
... )
... print(f"Cost before step: {cost}")
-... print(f"Minimization substeps: {np.round(sub_cost, 6)}")
+... print(f"Minimization substeps: {pnp.round(sub_cost, 6)}")
Cost before step: 0.09299359486191039
Minimization substeps: [-0.268008 -0.713209 -0.24993 -0.871989 -0.907672 -0.907892 -0.940474]
Cost before step: -0.9404742138557066
2 changes: 1 addition & 1 deletion pennylane/optimize/shot_adaptive.py
@@ -86,7 +86,7 @@ class ShotAdaptiveOptimizer(GradientDescentOptimizer):
iteration, and across the life of the optimizer, respectively.
>>> shape = qml.templates.StronglyEntanglingLayers.shape(n_layers=2, n_wires=2)
->>> params = np.random.random(shape)
+>>> params = pnp.random.random(shape)
>>> opt = qml.ShotAdaptiveOptimizer(min_shots=10, term_sampling="weighted_random_sampling")
>>> for i in range(60):
... params = opt.step(cost, params)
9 changes: 5 additions & 4 deletions pennylane/optimize/spsa.py
@@ -89,15 +89,15 @@ class SPSAOptimizer:
>>> dev = qml.device("default.qubit", wires=num_qubits)
>>> @qml.qnode(dev)
... def cost(params, num_qubits=1):
-... qml.BasisState(np.array([1, 1, 0, 0]), wires=range(num_qubits))
+... qml.BasisState(pnp.array([1, 1, 0, 0]), wires=range(num_qubits))
... for i in range(num_qubits):
... qml.Rot(*params[i], wires=0)
... qml.CNOT(wires=[2, 3])
... qml.CNOT(wires=[2, 0])
... qml.CNOT(wires=[3, 1])
... return qml.expval(H)
...
->>> params = np.random.normal(0, np.pi, (num_qubits, 3), requires_grad=True)
+>>> params = pnp.random.normal(0, pnp.pi, (num_qubits, 3), requires_grad=True)
Once constructed, the cost function can be passed directly to the
``step`` or ``step_and_cost`` function of the optimizer:
@@ -112,6 +112,7 @@ class SPSAOptimizer:
The algorithm provided by SPSA does not rely on built-in automatic differentiation capabilities of the interface being used
and therefore the optimizer can be used in more complex hybrid classical-quantum workflow with any of the interfaces:
+>>> import tensorflow as tf
>>> n_qubits = 1
>>> max_iterations = 20
>>> dev = qml.device("default.qubit", wires=n_qubits)
@@ -127,8 +128,8 @@ class SPSAOptimizer:
... for _ in range(max_iterations):
... # Some classical steps before the quantum computation
... params_a, layer_res = opt.step_and_cost(layer_fn_spsa,
-... np.tensor(tensor_in, requires_grad=False),
-... np.tensor(params))
+... tf.constant(tensor_in),
+... tf.Variable(params))
... params = params_a[1]
... tensor_out = layer_res
... # Some classical steps after the quantum computation