add qml.math.grad and qml.math.jacobian for differentiating any interface #6741

Merged · 17 commits · Dec 31, 2024
8 changes: 3 additions & 5 deletions doc/introduction/interfaces/numpy.rst
@@ -3,8 +3,6 @@
NumPy interface
===============

.. note:: This interface is the default interface supported by PennyLane's :class:`QNode <pennylane.QNode>`.


Using the NumPy interface
-------------------------
@@ -275,18 +273,18 @@ we would get an error message. This is because the `gradient <https://en.wikiped
only defined for scalar functions, i.e., functions which return a single value. In the case where the QNode
returns multiple expectation values, the correct differential operator to use is
the `Jacobian matrix <https://en.wikipedia.org/wiki/Jacobian_matrix_and_determinant>`_.
This can be accessed in PennyLane as :func:`~.jacobian`.
This can be accessed in PennyLane as :func:`~pennylane.jacobian`.

As ``circuit5`` returns a tuple of NumPy arrays instead of a single NumPy array, the results need
to be stacked into a single array before use with :func:`~.jacobian`.
to be stacked into a single array before use with :func:`~pennylane.jacobian`.

>>> j1 = qml.jacobian(lambda x: np.stack(circuit5(x)))
>>> j1(params)
array([[ 0. , -0.98006658],
[-0.98006658, 0. ]])


The output of :func:`~.jacobian` is a two-dimensional vector, with the first/second element being
The output of :func:`~pennylane.jacobian` is a two-dimensional vector, with the first/second element being
the partial derivative of the first/second expectation value with respect to the input parameter.
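
For readers following along, here is a minimal sketch of a circuit and parameters consistent with this discussion; the device, gates, and parameter values are illustrative assumptions, not taken from the original page.

```python
import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=2)

@qml.qnode(dev)
def circuit5(x):
    # a hypothetical circuit returning two expectation values
    qml.RX(x[0], wires=0)
    qml.RY(x[1], wires=1)
    qml.CNOT(wires=[0, 1])
    return qml.expval(qml.PauliZ(0)), qml.expval(qml.PauliZ(1))

params = np.array([0.1, 0.2], requires_grad=True)

# stack the tuple of arrays into a single array so qml.jacobian can differentiate it
j1 = qml.jacobian(lambda x: np.stack(circuit5(x)))
print(j1(params))  # 2x2 array: d<Z_i>/dx_j
```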


4 changes: 4 additions & 0 deletions doc/releases/changelog-dev.md
@@ -340,6 +340,10 @@ such as `shots`, `rng` and `prng_key`.

<h4>Other Improvements</h4>

* `qml.math.grad` and `qml.math.jacobian` added to differentiate a function with inputs of any
interface in a jax-like manner.
[(#6741)](https://github.com/PennyLaneAI/pennylane/pull/6741)

* `qml.GroverOperator` now has a `work_wires` property.
[(#6738)](https://github.com/PennyLaneAI/pennylane/pull/6738)

17 changes: 16 additions & 1 deletion pennylane/_grad.py
@@ -251,6 +251,21 @@
return grad_value, ans


def _error_if_not_array(f):
"""A function decorator that raises an error if the function output is not an autograd, pennylane, or numpy array."""

@wraps(f)
def new_f(*args, **kwargs):
output = f(*args, **kwargs)
if output.__class__.__module__.split(".")[0] not in {"autograd", "pennylane", "numpy"}:
raise ValueError(
f"autograd can only differentiate with respect to arrays, not {type(output)}. Ensure the output class is an autograd array."
)
return output

return new_f
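
A quick illustrative sketch of what this decorator catches; the example functions below are hypothetical and not part of the diff.

```python
import numpy as np

@_error_if_not_array
def returns_array(x):
    return np.array([x, 2 * x])   # ndarray -> module "numpy", passes through unchanged

@_error_if_not_array
def returns_tuple(x):
    return (x**2, x**3)           # tuple -> module "builtins", raises ValueError

returns_array(1.0)
# returns_tuple(1.0)  # ValueError: autograd can only differentiate with respect to arrays, ...
```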


def jacobian(func, argnum=None, method=None, h=None):
"""Returns the Jacobian as a callable function of vector-valued (functions of) QNodes.
This function is compatible with Autograd and :func:`~.qjit`.
@@ -514,7 +529,7 @@
"If this is unintended, please add trainable parameters via the "
"'requires_grad' attribute or 'argnum' keyword."
)
jac = tuple(_jacobian(func, arg)(*args, **kwargs) for arg in _argnum)
jac = tuple(_jacobian(_error_if_not_array(func), arg)(*args, **kwargs) for arg in _argnum)

return jac[0] if unpack else jac

@@ -522,7 +537,7 @@


# pylint: disable=too-many-arguments
def vjp(f, params, cotangents, method=None, h=None, argnum=None):

"""A :func:`~.qjit` compatible Vector-Jacobian product of PennyLane programs.

This function allows the Vector-Jacobian Product of a hybrid quantum-classical function to be
@@ -588,7 +603,7 @@


# pylint: disable=too-many-arguments
def jvp(f, params, tangents, method=None, h=None, argnum=None):

"""A :func:`~.qjit` compatible Jacobian-vector product of PennyLane programs.

This function allows the Jacobian-vector Product of a hybrid quantum-classical function to be
11 changes: 7 additions & 4 deletions pennylane/gradients/classical_jacobian.py
@@ -113,7 +113,7 @@ def classical_jacobian(qnode, argnum=None, expand_fn=None, trainable_only=True):
- ``tuple(array)``

[1] If there only is one trainable QNode argument, the tuple is unpacked to a
single ``array``, as is the case for :func:`.jacobian`.
single ``array``, as is the case for :func:`pennylane.jacobian`.

[2] For JAX, ``argnum=None`` defaults to ``argnum=0`` in contrast to all other
interfaces. This means that only the classical Jacobian with respect to the first
@@ -158,7 +158,7 @@ def qnode_wrapper(*args, **kwargs): # pylint: disable=inconsistent-return-state
if qnode.interface == "autograd":
jac = qml.jacobian(classical_preprocessing, argnum=wrapper_argnum)(*args, **kwargs)

if qnode.interface == "torch":
elif qnode.interface == "torch":
import torch

def _jacobian(*args, **kwargs): # pylint: disable=unused-argument
@@ -177,7 +177,7 @@ def _jacobian(*args, **kwargs): # pylint: disable=unused-argument

jac = _jacobian(*args, **kwargs)

if qnode.interface in ["jax", "jax-jit"]:
elif qnode.interface in ["jax", "jax-jit"]:
import jax

argnum = 0 if wrapper_argnum is None else wrapper_argnum
@@ -187,7 +187,7 @@ def _jacobian(*args, **kwargs):

jac = _jacobian(*args, **kwargs)

if qnode.interface == "tf":
elif qnode.interface == "tf":
import tensorflow as tf

def _jacobian(*args, **kwargs):
@@ -206,6 +206,9 @@ def _jacobian(*args, **kwargs):

jac = _jacobian(*args, **kwargs)

else:
raise ValueError(f"Undifferentiable interface {qnode.interface}.")

if old_interface == "auto":
qnode.interface = "auto"
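
For context, a brief sketch of how ``classical_jacobian`` is typically called; the device and gate arguments here are illustrative assumptions. With the ``elif``/``else`` structure above, an unrecognized interface now fails with a clear ``ValueError``.

```python
import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=1)

@qml.qnode(dev)
def circuit(weights):
    qml.RX(2 * weights[0], wires=0)    # gate arguments are classical functions of the input
    qml.RY(weights[1] ** 2, wires=0)
    return qml.expval(qml.PauliZ(0))

weights = np.array([0.3, 0.5], requires_grad=True)

# Jacobian of the classical preprocessing (QNode arguments -> gate arguments);
# expected here: [[2., 0.], [0., 2 * 0.5]] = [[2., 0.], [0., 1.]]
cjac = qml.gradients.classical_jacobian(circuit)(weights)
```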

3 changes: 3 additions & 0 deletions pennylane/math/__init__.py
@@ -101,6 +101,7 @@
get_interface,
Interface,
)
from .grad import grad, jacobian

sum = ar.numpy.sum
toarray = ar.numpy.to_numpy
@@ -168,10 +169,12 @@ def __getattr__(name):
"get_canonical_interface_name",
"get_deep_interface",
"get_trainable_indices",
"grad",
"in_backprop",
"is_abstract",
"is_independent",
"iscomplex",
"jacobian",
"marginal_prob",
"max_entropy",
"min_entropy",
246 changes: 246 additions & 0 deletions pennylane/math/grad.py
@@ -0,0 +1,246 @@
# Copyright 2024 Xanadu Quantum Technologies Inc.

# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at

# http://www.apache.org/licenses/LICENSE-2.0

# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
This submodule defines grad and jacobian for differentiating circuits in an interface-independent way.
"""

from typing import Callable, Sequence, Union

from pennylane._grad import grad as _autograd_grad
from pennylane._grad import jacobian as _autograd_jacobian

from .interface_utils import get_interface


# pylint: disable=import-outside-toplevel
def grad(f: Callable, argnums: Union[Sequence[int], int] = 0) -> Callable:
"""Compute the gradient in a jax-like manner for any interface.

Args:
f (Callable): a function with a single 0-D scalar output
argnums (Sequence[int] | int): which arguments to differentiate; defaults to ``0``

Returns:
Callable: a function with the same signature as ``f`` that returns the gradient.

.. seealso:: :func:`pennylane.math.jacobian`

Note that this function follows the same design as jax. By default, the function returns the gradient
with respect to the first argument, whether or not other arguments are trainable.

>>> import jax, torch, tensorflow as tf
>>> def f(x, y):
... return x * y
>>> qml.math.grad(f)(qml.numpy.array(2.0), qml.numpy.array(3.0))
tensor(3., requires_grad=True)
>>> qml.math.grad(f)(jax.numpy.array(2.0), jax.numpy.array(3.0))
Array(3., dtype=float32, weak_type=True)
>>> qml.math.grad(f)(torch.tensor(2.0, requires_grad=True), torch.tensor(3.0, requires_grad=True))
tensor(3.)
>>> qml.math.grad(f)(tf.Variable(2.0), tf.Variable(3.0))
<tf.Tensor: shape=(), dtype=float32, numpy=3.0>

``argnums`` can be provided to differentiate multiple arguments.

>>> qml.math.grad(f, argnums=(0,1))(torch.tensor(2.0, requires_grad=True), torch.tensor(3.0, requires_grad=True))
(tensor(3.), tensor(2.))

Note that the selected arguments *must* be of an appropriately trainable datatype, or an error may occur.

>>> qml.math.grad(f)(torch.tensor(1.0), torch.tensor(2.))
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn

"""

argnums_integer = False
if isinstance(argnums, int):
argnums = (argnums,)
argnums_integer = True

def compute_grad(*args, **kwargs):
interface = get_interface(*args)

if interface == "autograd":
g = _autograd_grad(f, argnum=argnums)(*args, **kwargs)
return g[0] if argnums_integer else g

if interface == "jax":
import jax

g = jax.grad(f, argnums=argnums)(*args, **kwargs)
return g[0] if argnums_integer else g

if interface == "torch":
y = f(*args, **kwargs)
y.backward()
g = tuple(args[i].grad for i in argnums)
return g[0] if argnums_integer else g

if interface == "tensorflow":
import tensorflow as tf

with tf.GradientTape() as tape:
y = f(*args, **kwargs)

g = tape.gradient(y, tuple(args[i] for i in argnums))
return g[0] if argnums_integer else g

raise ValueError(f"Interface {interface} is not differentiable.")

return compute_grad
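
Tying this back to the PR's goal, a hedged sketch of differentiating a QNode with ``qml.math.grad``, where the backend is picked from the input's interface; the device and circuit below are illustrative assumptions.

```python
import pennylane as qml
import torch

dev = qml.device("default.qubit", wires=1)

@qml.qnode(dev)
def circuit(x):
    qml.RX(x, wires=0)
    return qml.expval(qml.PauliZ(0))   # <Z> = cos(x), so d<Z>/dx = -sin(x)

# torch input -> torch backend (y.backward() and .grad, as in compute_grad above)
g_torch = qml.math.grad(circuit)(torch.tensor(0.5, requires_grad=True))

# autograd input -> autograd backend
g_autograd = qml.math.grad(circuit)(qml.numpy.array(0.5, requires_grad=True))
```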


# pylint: disable=import-outside-toplevel
def _torch_jac(f, argnums, args, kwargs):
"""Calculate a jacobian via torch."""
from torch.autograd.functional import jacobian as _torch_jac

argnums_torch = (argnums,) if isinstance(argnums, int) else argnums
trainable_args = tuple(args[i] for i in argnums_torch)

# keep track of output type to know how to unpack
output_type_cache = []

def partial_f(*_trainables):
full_args = list(args)
for argnum, value in zip(argnums_torch, _trainables, strict=True):
full_args[argnum] = value
result = f(*full_args, **kwargs)
output_type_cache.append(type(result))
return result

jac = _torch_jac(partial_f, trainable_args)
if output_type_cache[-1] is tuple:
return tuple(j[0] for j in jac) if isinstance(argnums, int) else jac
# else array
return jac[0] if isinstance(argnums, int) else jac
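
The unpacking above mirrors how ``torch.autograd.functional.jacobian`` structures its result: with a tuple of inputs it returns one entry per input (and, for tuple outputs, one entry per output as well), whereas jax returns a bare array for an integer ``argnums``. A small sketch of the underlying torch call, for illustration only:

```python
import torch
from torch.autograd.functional import jacobian

def f(x, y):
    return x * y

x = torch.tensor([2.0, 3.0])
y = torch.tensor(3.0)

# one Jacobian per input in the tuple: (df/dx, df/dy)
jac = jacobian(f, (x, y))
jac_x = jac[0]   # shape (2, 2); keeping only this entry mimics jax for integer argnums
```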


# pylint: disable=import-outside-toplevel
def _tensorflow_jac(f, argnums, args, kwargs):
"""Calculate a jacobian via tensorflow"""
import tensorflow as tf

with tf.GradientTape() as tape:
y = f(*args, **kwargs)

if get_interface(y) != "tensorflow":
raise ValueError(
f"qml.math.jacobian does not work with tensorflow and non-tensor outputs. Got {y} of type {type(y)}."
)

argnums_integer = False
if isinstance(argnums, int):
argnums_tf = (argnums,)
argnums_integer = True
else:
argnums_tf = argnums

g = tape.jacobian(y, tuple(args[i] for i in argnums_tf))
return g[0] if argnums_integer else g
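
Similarly, a minimal sketch of the tape pattern used here, for illustration only: the traced output must itself be a TensorFlow tensor, and ``tape.jacobian`` returns one Jacobian per source.

```python
import tensorflow as tf

x = tf.Variable([2.0, 3.0])
y = tf.Variable(3.0)

with tf.GradientTape() as tape:
    out = x * y                      # must be a tf tensor, or the check above raises

# one Jacobian per source in the tuple
jac_x, jac_y = tape.jacobian(out, (x, y))
```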


# pylint: disable=import-outside-toplevel
def jacobian(f: Callable, argnums: Union[Sequence[int], int] = 0) -> Callable:
"""Compute the Jacobian in a jax-like manner for any interface.

Args:
f (Callable): a function with a vector-valued output
argnums (Sequence[int] | int): which arguments to differentiate; defaults to ``0``

Returns:
Callable: a function with the same signature as ``f`` that returns the Jacobian

.. seealso:: :func:`pennylane.math.grad`

Note that this function follows the same design as jax. By default, the function returns the Jacobian
with respect to the first argument, whether or not other arguments are trainable.

>>> import jax, torch, tensorflow as tf
>>> def f(x, y):
... return x * y
>>> qml.math.jacobian(f)(qml.numpy.array([2.0, 3.0]), qml.numpy.array(3.0))
array([[3., 0.],
[0., 3.]])
>>> qml.math.jacobian(f)(jax.numpy.array([2.0, 3.0]), jax.numpy.array(3.0))
Array([[3., 0.],
[0., 3.]], dtype=float32)
>>> x_torch = torch.tensor([2.0, 3.0], requires_grad=True)
>>> y_torch = torch.tensor(3.0, requires_grad=True)
>>> qml.math.jacobian(f)(x_torch, y_torch)
tensor([[3., 0.],
[0., 3.]])
>>> qml.math.jacobian(f)(tf.Variable([2.0, 3.0]), tf.Variable(3.0))
<tf.Tensor: shape=(2, 2), dtype=float32, numpy=
array([[3., 0.],
[0., 3.]], dtype=float32)>

``argnums`` can be provided to differentiate multiple arguments.

>>> qml.math.jacobian(f, argnums=(0,1))(x_torch, y_torch)
(tensor([[3., 0.],
[0., 3.]]),
tensor([2., 3.]))

While jax can handle taking jacobians of outputs with any pytree shape:

>>> def pytree_f(x):
... return {"a": 2*x, "b": 3*x}
>>> qml.math.jacobian(pytree_f)(jax.numpy.array(2.0))
{'a': Array(2., dtype=float32, weak_type=True),
'b': Array(3., dtype=float32, weak_type=True)}

Torch can only handle outputs that are tensors or tuples of tensors:

>>> def tuple_f(x):
... return x**2, x**3
>>> qml.math.jacobian(tuple_f)(torch.tensor(2.0))
(tensor(4.), tensor(12.))
>>> qml.math.jacobian(pytree_f)(torch.tensor(2.0))
TypeError: The outputs of the user-provided function given to jacobian must be
either a Tensor or a tuple of Tensors but the given outputs of the user-provided
function has type <class 'dict'>.


But tensorflow and autograd can only handle array-valued outputs:

>>> qml.math.jacobian(tuple_f)(qml.numpy.array(2.0))
ValueError: autograd can only differentiate with respect to arrays, not <class 'tuple'>
>>> qml.math.jacobian(tuple_f)(tf.Variable(2.0))
ValueError: qml.math.jacobian does not work with tensorflow and non-tensor outputs.
Got (<tf.Tensor: shape=(), dtype=float32, numpy=4.0>,
<tf.Tensor: shape=(), dtype=float32, numpy=8.0>) of type <class 'tuple'>.

"""

def compute_jacobian(*args, **kwargs):
interface = get_interface(*args)

if interface == "autograd":
return _autograd_jacobian(f, argnum=argnums)(*args, **kwargs)

if interface == "jax":
import jax

return jax.jacobian(f, argnums=argnums)(*args, **kwargs)

if interface == "torch":
return _torch_jac(f, argnums, args, kwargs)

if interface == "tensorflow":
return _tensorflow_jac(f, argnums, args, kwargs)

raise ValueError(f"Interface {interface} is not differentiable.")

return compute_jacobian
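
Finally, a hedged sketch of ``qml.math.jacobian`` applied to a QNode with a vector-valued (probability) output; the device and circuit are illustrative assumptions.

```python
import pennylane as qml
import jax

dev = qml.device("default.qubit", wires=2)

@qml.qnode(dev)
def circuit(x):
    qml.RX(x[0], wires=0)
    qml.RY(x[1], wires=1)
    return qml.probs(wires=[0, 1])

# jax input -> jax.jacobian under the hood; result has shape (4, 2)
jac = qml.math.jacobian(circuit)(jax.numpy.array([0.4, 0.6]))
```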