
GradientGrassmann and LazySum performance issue #121

Closed
lkdvos opened this issue Feb 21, 2024 · 4 comments · Fixed by #199
Labels
invalid This doesn't seem right

Comments

@lkdvos
Member

lkdvos commented Feb 21, 2024

Looking at the test output, there seems to be a performance issue with the combination of LazySum and GradientGrassmann. My best guess is that the gradient is not actually computed entirely correctly, but the algorithm still converges because Hlazy = [0.5*H - H + 5.553H] is not that good of a test case.

Test Summary:        | Pass  Total     Time
find_groundstate     |   40     40  1m16.4s
  Infinite 1         |    2      2     0.2s
  Infinite 2         |    2      2     0.8s
  Infinite 3         |    2      2     3.2s
  Infinite 4         |    2      2     7.1s
  Infinite 5         |    2      2     0.2s
  LazySum Infinite 1 |    3      3     2.2s
  LazySum Infinite 2 |    3      3     3.1s
  LazySum Infinite 3 |    3      3     2.9s
  LazySum Infinite 4 |    3      3     5.3s
  LazySum Infinite 5 |    3      3     0.9s
  Finite 1           |    2      2     1.3s
  Finite 2           |    2      2     2.6s
  Finite 3           |    2      2     7.2s
  LazySum Finite 1   |    3      3     2.9s
  LazySum Finite 2   |    3      3     3.0s
  LazySum Finite 3   |    3      3    33.4s
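The weakness of the test case is easy to see in isolation: since all three terms in the lazy sum are multiples of the same operator H, the sum collapses to a single scalar multiple of H. A minimal sketch (plain NumPy, with an arbitrary random Hermitian matrix standing in for H):

```python
import numpy as np

# The test Hamiltonian from the issue is the lazy sum 0.5*H - H + 5.553*H
# of one and the same operator H, so it reduces to a single term:
# (0.5 - 1 + 5.553) * H = 5.053 * H.
rng = np.random.default_rng(0)
H = rng.standard_normal((4, 4))
H = H + H.T  # make it Hermitian, like a Hamiltonian

H_lazy = 0.5 * H - H + 5.553 * H
assert np.allclose(H_lazy, 5.053 * H)
```

So a gradient that mishandles the individual terms of the sum can still point in the right direction for this particular test, masking the bug.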
@lkdvos lkdvos added the invalid This doesn't seem right label Feb 21, 2024
@Gertian
Collaborator

Gertian commented Feb 21, 2024

Does this mean that addition of MPOs is expected to give the wrong result at this point in time, or am I misinterpreting what LazySum does?

@maartenvd
Collaborator

maartenvd commented Feb 21, 2024 via email

@lkdvos
Member Author

lkdvos commented Feb 21, 2024

I think the only reason GradientGrassmann does not deadlock is that the sum in the test case is not actually a sum of different terms: it just reduces to factor * H. Thus, even if the gradient is wrong, it is probably only wrong by a factor, so the optimization still converges, just very slowly, because it cannot reason correctly about the norm of the gradient. This is just my intuition though; I did not do any checks.
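The intuition above can be illustrated with a toy example (a hypothetical sketch, not the actual Grassmann optimizer): if the reported gradient is off by an unknown constant factor, fixed-step gradient descent still converges on a convex problem, but the effective step size is miscalibrated and convergence can become much slower.

```python
def iterations_to_converge(c, lr=0.2, tol=1e-8):
    """Fixed-step gradient descent on f(x) = x^2 / 2, where the
    optimizer only sees the gradient scaled by an unknown factor c
    (mimicking a gradient that is 'only wrong by a factor')."""
    x, steps = 1.0, 0
    while abs(x) > tol and steps < 10_000:
        x -= lr * (c * x)  # true gradient is x; we only see c * x
        steps += 1
    return steps

exact = iterations_to_converge(1.0)   # correctly scaled gradient
scaled = iterations_to_converge(0.1)  # gradient underestimated 10x
assert scaled > exact  # miscalibrated gradient norm -> slower convergence
```

The same direction is followed in both cases, so the iteration still reaches the minimum; only the step-size calibration (and hence the speed) suffers, which matches the observed slow-but-convergent behaviour.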

@Gertian
Collaborator

Gertian commented Feb 21, 2024

@maartenvd , thanks for this information. This stressed me out for a second...

@lkdvos lkdvos linked a pull request Dec 12, 2024 that will close this issue
3 tasks
3 participants