
Defer calculating classical cotransform and argnums till needed #6716

Merged
merged 287 commits into master from transform-program-caching
Jan 3, 2025

Conversation

@albi3ro (Contributor) commented Dec 13, 2024

Context:

Currently, we do preprocessing and transform program setup in the qnode. We have to, because we cache the classical cotransforms and argnums at the qnode layer; if we tried to set up the full transform program later, we would no longer have access to the classical cotransform information.

But by caching the qnode, args, and kwargs instead of the calculated cotransform information, we can keep adding to and modifying the transform program after we have moved on from the qnode level. This lets us set up the workflow inside qml.execute. Now qml.execute only uses the qnode-like kwargs; we no longer use "developer-like" kwargs such as inner_transform_program and config. We therefore no longer need two "modes" for qml.execute, one mimicking the qnode and one more developer-focused: qml.execute can strictly mirror the interface of the QNode. This also allows us to run our resolution and setup in qml.execute. A rough sketch of the caching idea follows.
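The deferred-computation idea can be sketched roughly as below. This is a minimal illustration, not the PR's actual implementation; apart from the CotransformCache name (which the PR introduces, see the review comment further down), the fields and the classical_jacobian/compute names are hypothetical.

from dataclasses import dataclass, field

@dataclass
class CotransformCache:
    # Cache the ingredients, not the results: the qnode and its call
    # signature are cheap to store and still available after we leave
    # the qnode layer.
    qnode: object
    args: tuple
    kwargs: dict
    _jacobians: dict = field(default_factory=dict)

    def classical_jacobian(self, argnums, compute):
        # Computed lazily, at most once per argnums, and only if a
        # later transform actually asks for it.
        key = tuple(argnums)
        if key not in self._jacobians:
            self._jacobians[key] = compute(self.qnode, self.args, self.kwargs, argnums)
        return self._jacobians[key]

Because the transform program only carries this cache rather than precomputed cotransform data, it can still be extended and modified after the qnode call returns.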

This also provides some user-facing improvements. Previously we got:

import jax
import pennylane as qml

x = jax.numpy.array(0.5)

tape = qml.tape.QuantumScript([qml.RX(x, 0)], [qml.expval(qml.Z(0))], shots=100)
dev = qml.device("lightning.qubit", wires=2)
jax.jacobian(qml.execute)((tape,), device=dev, diff_method="backprop")
Invoked with: <pennylane_lightning.lightning_qubit_ops.StateVectorC128 object at 0x156749cf0>, [0], False, [Traced<ConcreteArray(0.5, dtype=float32, weak_type=True)>with<JVPTrace(level=2/0)> with
  primal = Array(0.5, dtype=float32, weak_type=True)
  tangent = Traced<ShapedArray(float32[], weak_type=True)>with<JaxprTrace(level=1/0)> with
    pval = (ShapedArray(float32[], weak_type=True), None)
    recipe = LambdaBinding()]

We now get:

QuantumFunctionError: Device <lightning.qubit device (wires=2) at 0x1275a7ce0> does not support backprop with requested circuit.

We can also specify diff_method="best" in qml.execute.
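For example (a sketch; which method "best" resolves to depends on the device and circuit):

import pennylane as qml

dev = qml.device("lightning.qubit", wires=2)
tape = qml.tape.QuantumScript([qml.RX(0.5, 0)], [qml.expval(qml.Z(0))])

# diff_method resolution now happens inside qml.execute, mirroring the QNode.
(result,) = qml.execute((tape,), device=dev, diff_method="best")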

Description of the Change:

Benefits:

Possible Drawbacks:

Related GitHub Issues:

andrijapau and others added 30 commits November 18, 2024 13:20
Base automatically changed from add-dev-run-fxn to master December 13, 2024 17:34
@albi3ro marked this pull request as draft December 13, 2024 21:29
@albi3ro changed the title from "Defer calculating classical cotransform and argnums till needed" to "[DRAFT] Defer calculating classical cotransform and argnums till needed" Dec 13, 2024
@albi3ro marked this pull request as ready for review December 13, 2024 22:20
Hello. You may have forgotten to update the changelog!
Please edit doc/releases/changelog-dev.md with:

  • A one-to-two sentence description of the change. You may include a small working example for new features.
  • A link back to this PR.
  • Your name (or GitHub username) in the contributors section.

@albi3ro changed the title from "[DRAFT] Defer calculating classical cotransform and argnums till needed" to "Defer calculating classical cotransform and argnums till needed" Dec 31, 2024
codecov bot commented Dec 31, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 99.60%. Comparing base (447f850) to head (a65dcea).
Report is 1 commits behind head on master.

Additional details and impacted files
@@           Coverage Diff           @@
##           master    #6716   +/-   ##
=======================================
  Coverage   99.60%   99.60%           
=======================================
  Files         476      476           
  Lines       45210    45224   +14     
=======================================
+ Hits        45033    45047   +14     
  Misses        177      177           


@andrijapau (Contributor) left a comment:


Very nice! Always found this behaviour rather odd. 😄

Just a few minor comments.

Review threads:

  • pennylane/math/interface_utils.py (outdated, resolved)
  • pennylane/transforms/core/transform_program.py (outdated, resolved)
  • pennylane/workflow/resolution.py (resolved)
@mudit2812 (Contributor) left a comment:


No blockers other than the CotransformCache typo.

@albi3ro enabled auto-merge (squash) January 3, 2025 19:55
@albi3ro merged commit 70e5195 into master Jan 3, 2025
46 checks passed
@albi3ro deleted the transform-program-caching branch January 3, 2025 20:11
@PietropaoloFrisoni added this to the v0.40 milestone Jan 3, 2025