GradientGrassmann and LazySum performance issue #121
Does this mean addition of MPOs is expected to give the wrong result at this point in time, or am I misinterpreting what `LazySum` does?
While it's possible that `LazySum` gives an incorrect gradient, I would've expected `GradientGrassmann` to then completely deadlock, as OptimKit's linesearch is quite picky about the correctness of the gradient.
`LazySum` is not used for adding two MPOs together (though comparing the two would be a great test for the gradient).
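To make that suggested comparison concrete: the check amounts to verifying linearity of the gradient, i.e. the gradient of the summed operator must equal the sum of the per-term gradients. A minimal toy sketch of that idea (plain symmetric matrices and a quadratic "energy" standing in for MPOs; nothing here is MPSKit API):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for two "operators" (not MPOs): symmetric matrices.
n = 6
H1 = rng.standard_normal((n, n)); H1 = H1 + H1.T
H2 = rng.standard_normal((n, n)); H2 = H2 + H2.T

def energy_grad(H, x):
    """Gradient of the quadratic 'energy' x' H x, which is 2 H x."""
    return 2.0 * H @ x

x = rng.standard_normal(n)

# Gradient of the summed operator vs. sum of per-term gradients:
# these must agree, which is the check suggested for LazySum vs. MPO addition.
g_sum = energy_grad(H1 + H2, x)
g_lazy = energy_grad(H1, x) + energy_grad(H2, x)
print(np.allclose(g_sum, g_lazy))  # True
```

The same comparison against a finite-difference gradient would catch errors that linearity alone cannot.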
I think the only reason `GradientGrassmann` does not deadlock is that the sum in the test case is not actually a sum of different terms: it just reduces to `factor * H`. Thus, even if the gradient is wrong, it's probably only wrong by a factor, which means it still converges, just very slowly, because the algorithm cannot reason correctly about the norm of the gradient. This is just my intuition though; I did not do any checks.
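As a toy illustration of this intuition (ordinary gradient descent on f(x) = ½‖x‖², not Grassmann optimization or MPSKit): a gradient that is off by a positive scalar factor still points in a descent direction, so the iteration keeps converging, even though any step-size or stopping logic based on the gradient norm is misled.

```python
import numpy as np

def descend(grad_scale, steps=200, eta=0.05):
    """Plain gradient descent on f(x) = 0.5 * ||x||^2, with the gradient
    deliberately multiplied by a wrong (but positive) factor."""
    x = np.ones(4)
    for _ in range(steps):
        g = grad_scale * x   # true gradient is x; grad_scale mimics a
        x = x - eta * g      # gradient that is wrong only by a factor
    return np.linalg.norm(x)

# A positive scale factor keeps the descent direction correct, so both
# runs converge; only the effective step size (and any norm-based
# convergence criterion) is distorted.
print(descend(1.0) < 1e-3)    # True: correct gradient
print(descend(5.053) < 1e-3)  # True: still converges despite the wrong factor
```

A large enough wrong factor would of course destabilize a fixed step size; a linesearch, as in OptimKit, compensates for the magnitude but still wastes evaluations when the reported gradient is inconsistent with the objective.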
@maartenvd, thanks for this information. This stressed me out for a second...
Having a look at the output of the tests, it seems like there is some performance issue going on with the combination of `LazySum` and `GradientGrassmann`. My best guess is that the gradient is actually not computed entirely correctly, but the algorithm still converges because `Hlazy = [0.5*H - H + 5.553H]` is actually not that good of a test case.
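For reference, assuming the coefficients quoted in that comment, the test operator is degenerate because every term is a multiple of the same H, so the "sum" collapses to a single rescaled operator (here sketched with a plain matrix rather than an MPO):

```python
import numpy as np

rng = np.random.default_rng(1)
H = rng.standard_normal((5, 5))  # toy stand-in for the Hamiltonian MPO

# The test-case coefficients: 0.5*H - H + 5.553*H.
coeffs = [0.5, -1.0, 5.553]
H_lazy = sum(c * H for c in coeffs)

# The "sum" is just 5.053 * H, so a gradient wrong by an overall factor
# would go unnoticed by this test case.
print(np.allclose(H_lazy, sum(coeffs) * H))  # True
```

A sum of genuinely different terms (e.g. non-commuting operators) would expose a factor-level gradient error that this collapsed case hides.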