
Fix objective.compile in Benchmarks #1483

Open · wants to merge 14 commits into master
Conversation

@YigitElma (Collaborator) commented Dec 20, 2024

Some benchmarks were using obj.compile and jac_scaled_error together, but obj.compile only compiles jac_scaled and compute_scaled. This caused some benchmarks to have very different min/max values:

[screenshot: benchmark timings before the change]

With the change, the results are more consistent:

[screenshot: benchmark timings after the change]
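As a toy illustration (plain Python, not the DESC API), the mismatch can be sketched like this: if compile only warms some lazily-compiled methods, the first call to an un-warmed method inside the benchmarked region still pays the compilation cost there.

```python
import time

class FakeObjective:
    """Hypothetical stand-in for an object whose methods are compiled
    lazily on first call (names mirror the DESC methods discussed here,
    but the behavior is simulated with a sleep)."""

    def __init__(self):
        self._compiled = set()

    def _call(self, name):
        if name not in self._compiled:
            time.sleep(0.01)          # stands in for JIT compilation cost
            self._compiled.add(name)
        return 0.0                    # stands in for the actual computation

    def compute_scaled(self, x):
        return self._call("compute_scaled")

    def jac_scaled(self, x):
        return self._call("jac_scaled")

    def jac_scaled_error(self, x):
        return self._call("jac_scaled_error")

    def compile(self):
        # Old behavior: warms only compute_scaled and jac_scaled, so a
        # benchmark that later calls jac_scaled_error compiles it mid-run.
        self.compute_scaled(0)
        self.jac_scaled(0)

obj = FakeObjective()
obj.compile()
print("jac_scaled_error" in obj._compiled)  # False
```

The fix in this PR amounts to also warming the error variants during compile, so the timed region never includes a first-call compilation.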


@YigitElma added the `easy` (Short and simple to code or review) and `skip_changelog` (No need to update changelog on this PR) labels on Dec 20, 2024
Contributor

github-actions bot commented Dec 20, 2024

|             benchmark_name            |         dt(%)          |         dt(s)          |        t_new(s)        |        t_old(s)        |
| -------------------------------------- | ---------------------- | ---------------------- | ---------------------- | ---------------------- |
| test_build_transform_fft_lowres        |     -1.75 +/- 10.22    | -9.57e-03 +/- 5.60e-02 |  5.38e-01 +/- 3.2e-02  |  5.48e-01 +/- 4.6e-02  |
| test_equilibrium_init_medres           |     -2.07 +/- 2.92     | -8.91e-02 +/- 1.26e-01 |  4.21e+00 +/- 4.4e-02  |  4.30e+00 +/- 1.2e-01  |
| test_equilibrium_init_highres          |     -0.01 +/- 5.06     | -3.34e-04 +/- 2.79e-01 |  5.52e+00 +/- 1.3e-01  |  5.52e+00 +/- 2.5e-01  |
| test_objective_compile_dshape_current  |     +3.20 +/- 6.97     | +1.37e-01 +/- 2.99e-01 |  4.42e+00 +/- 1.7e-01  |  4.29e+00 +/- 2.5e-01  |
| test_objective_compute_dshape_current  |     -3.05 +/- 6.92     | -1.73e-04 +/- 3.93e-04 |  5.50e-03 +/- 2.1e-04  |  5.67e-03 +/- 3.3e-04  |
| test_objective_jac_dshape_current      |     -2.59 +/- 7.60     | -1.15e-03 +/- 3.38e-03 |  4.34e-02 +/- 2.2e-03  |  4.45e-02 +/- 2.5e-03  |
| test_perturb_2                         |     -0.17 +/- 2.25     | -3.39e-02 +/- 4.52e-01 |  2.00e+01 +/- 2.2e-01  |  2.01e+01 +/- 4.0e-01  |
| test_proximal_freeb_jac                |     -2.01 +/- 1.11     | -1.52e-01 +/- 8.43e-02 |  7.42e+00 +/- 3.5e-02  |  7.57e+00 +/- 7.7e-02  |
| test_solve_fixed_iter                  |     +0.07 +/- 2.00     | +2.29e-02 +/- 6.43e-01 |  3.22e+01 +/- 3.2e-01  |  3.22e+01 +/- 5.6e-01  |
| test_LinearConstraintProjection_build  |     +1.01 +/- 2.69     | +1.42e-01 +/- 3.78e-01 |  1.42e+01 +/- 3.6e-01  |  1.41e+01 +/- 1.1e-01  |
| test_build_transform_fft_midres        |     +2.07 +/- 7.80     | +1.32e-02 +/- 4.97e-02 |  6.50e-01 +/- 3.8e-02  |  6.37e-01 +/- 3.3e-02  |
| test_build_transform_fft_highres       |     +1.60 +/- 7.03     | +1.59e-02 +/- 7.00e-02 |  1.01e+00 +/- 6.1e-02  |  9.96e-01 +/- 3.5e-02  |
| test_equilibrium_init_lowres           |     +1.00 +/- 8.94     | +4.02e-02 +/- 3.59e-01 |  4.06e+00 +/- 2.5e-01  |  4.02e+00 +/- 2.6e-01  |
| test_objective_compile_atf             |     +1.41 +/- 6.28     | +1.16e-01 +/- 5.16e-01 |  8.33e+00 +/- 4.9e-01  |  8.22e+00 +/- 1.8e-01  |
| test_objective_compute_atf             |     -2.00 +/- 3.56     | -3.17e-04 +/- 5.65e-04 |  1.56e-02 +/- 1.7e-04  |  1.59e-02 +/- 5.4e-04  |
| test_objective_jac_atf                 |     +1.00 +/- 2.24     | +1.92e-02 +/- 4.32e-02 |  1.95e+00 +/- 3.1e-02  |  1.93e+00 +/- 3.0e-02  |
| test_perturb_1                         |     +0.68 +/- 2.62     | +9.87e-02 +/- 3.79e-01 |  1.46e+01 +/- 3.5e-01  |  1.45e+01 +/- 1.6e-01  |
| test_proximal_jac_atf                  |     +0.81 +/- 1.80     | +6.60e-02 +/- 1.47e-01 |  8.24e+00 +/- 1.4e-01  |  8.18e+00 +/- 4.8e-02  |
| test_proximal_freeb_compute            |     +0.96 +/- 1.09     | +1.91e-03 +/- 2.18e-03 |  2.01e-01 +/- 1.9e-03  |  1.99e-01 +/- 1.1e-03  |
| test_solve_fixed_iter_compiled         |     +0.26 +/- 2.73     | +5.27e-02 +/- 5.50e-01 |  2.03e+01 +/- 3.8e-01  |  2.02e+01 +/- 4.0e-01  |


codecov bot commented Dec 20, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 95.64%. Comparing base (f8c66c7) to head (cb34cdd).

Additional details and impacted files
@@           Coverage Diff           @@
##           master    #1483   +/-   ##
=======================================
  Coverage   95.63%   95.64%           
=======================================
  Files         101      101           
  Lines       25542    25542           
=======================================
+ Hits        24428    24430    +2     
+ Misses       1114     1112    -2     
| Files with missing lines | Coverage Δ |
| ------------------------ | ---------- |
| desc/objectives/objective_funs.py | 94.74% <100.00%> (ø) |

... and 2 files with indirect coverage changes


@YigitElma requested review from a team, rahulgaur104, f0uriest, ddudt, dpanici, kianorr, sinaatalay, and unalmis (and removed the request for a team) on December 20, 2024 at 20:34
sinaatalay previously approved these changes Dec 20, 2024

@sinaatalay (Member) left a comment

This PR looks good to me, although others should review it as well: I am judging it with only Python and GitHub knowledge, not DESC knowledge.

  • The workflows (benchmark.yaml, notebook_tests.yaml, and regression_tests.yaml) work the same way as their previous versions, except that a new step, Action Details, is added, which moves all the debugging-related commands to a separate step. It makes sense.
  • There is a change in the desc.objectives.objective_funs.ObjectiveFunction.compile method, which now uses the compute_scaled_error method instead of compute_scaled in the lsq and all modes. This hasn't been explained in the PR or commit messages. Maybe @YigitElma should explain it, but I am sure it's okay.
  • The documentation is updated and seems okay.
  • Tests haven't been changed algorithmically (except for changing the number of rounds in benchmarks) but have been cleaned up. They look better; I don't see any errors.

docs/performance_tips.rst (review thread, outdated and resolved)
```diff
@@ -131,67 +131,53 @@ def build():
         N = 25
         _ = Equilibrium(L=L, M=M, N=N)

-    benchmark.pedantic(build, setup=setup, iterations=1, rounds=50)
+    benchmark.pedantic(build, setup=setup, iterations=1, rounds=10)
```
Member

reducing the number of rounds will make the statistics more noisy, and may lead to more false positives

Collaborator Author

I agree. For example, in the last benchmark run, this PR shows a speed improvement for perturb, but it shouldn't. We can decide on the exact number of rounds, but my intention is to balance the time spent on tests. Previously, the equilibrium initialization tests took more time than the fixed_iter_solve and perturb tests. Given that the benchmark workflow started to take around 50 minutes, I wanted to reduce the rounds from 50, which is a bit overkill.
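The tradeoff being discussed can be quantified with a back-of-the-envelope sketch (plain Python, independent of pytest-benchmark): assuming the standard error of the mean scales as 1/sqrt(rounds), cutting rounds from 50 to 10 widens the +/- bands by a predictable factor.

```python
import math

def sem_scale(rounds_old, rounds_new):
    """Factor by which the +/- error bands widen when reducing rounds,
    assuming standard error of the mean scales as 1/sqrt(rounds)."""
    return math.sqrt(rounds_old / rounds_new)

# Going from 50 rounds to 10 widens the bands by about sqrt(5) ≈ 2.24x,
# which is the "more noisy, more false positives" concern above.
print(round(sem_scale(50, 10), 2))  # 2.24
```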



@pytest.mark.slow
@pytest.mark.benchmark
def test_proximal_freeb_compute(benchmark):
    """Benchmark computing free boundary objective with proximal constraint."""
    jax.clear_caches()
Member

why remove this?

Collaborator Author

This doesn't cause much difference (I can re-add it), but technically only run is benchmarked, so clearing the cache here doesn't serve much purpose.
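For context, the setup/run split that makes this true can be mimicked with a minimal stand-in (a sketch, not the real pytest-benchmark implementation): setup runs before each round but is excluded from the timed region, so work done there, like clearing caches, never shows up in the measurement.

```python
import time

def pedantic(run, setup, rounds):
    """Minimal stand-in for benchmark.pedantic: setup runs before each
    round but only the run callable is timed."""
    times = []
    for _ in range(rounds):
        setup()                          # untimed: build inputs, clear caches, etc.
        t0 = time.perf_counter()
        run()                            # only this region is measured
        times.append(time.perf_counter() - t0)
    return min(times)

# The setup cost (a 10 ms sleep) does not appear in the measured time.
best = pedantic(run=lambda: None, setup=lambda: time.sleep(0.01), rounds=3)
print(best < 0.01)  # True
```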


def setup():
def run():
Member

moving everything to the run function changes what it's actually profiling; this now includes a bunch of other stuff besides building the linear constraints. Is that what we want?

Collaborator Author

The extra stuff is just building the individual constraints and objectives, right? Previously, this was effectively just benchmarking factorize_linear_constraints. Do we usually pass built constraints to the LinearConstraintProjection? If so, I can revert it.

Collaborator

I think we usually pass built constraints to it, yes

Collaborator

if linear_constraint is not None and not linear_constraint.built:


Caching the Compiled (Jitted) Code
----------------------------------
Although compiled code is fast, it still takes time to compile. If you are running the same optimization, or a similar one, multiple times, you can save time by caching the compiled code. This happens automatically within a single session (for example, until you restart your kernel in a Jupyter Notebook), but once you start another session, the code will need to be recompiled. Fortunately, there is a way to bypass this. First create a cache directory (e.g. ``jax-caches``), and put the following code at the beginning of your script:
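The snippet itself is not included in this excerpt of the docs; a typical JAX persistent-compilation-cache configuration looks roughly like the following (an assumption based on JAX's documented config options, not necessarily the exact snippet from performance_tips.rst):

```python
import jax

# Point JAX's persistent compilation cache at the directory created above,
# so compiled code survives across sessions.
jax.config.update("jax_compilation_cache_dir", "./jax-caches")
# Cache even fast-to-compile functions (by default very cheap compilations
# are skipped).
jax.config.update("jax_persistent_cache_min_compile_time_secs", 0)
```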
Collaborator

you have this locally in your backend.py, right? So if it is there, you would not usually need it in every script you run?

Collaborator Author

Yeah, if you put it in backend.py, you shouldn't need it every time. But that can be messy for users, so I didn't mention it.


Successfully merging this pull request may close these issues.

Export compiled objectives for common equilibrium resolutions?
4 participants