
Pymanopt stiefel #4

Open · wants to merge 12 commits into master
Conversation

@mencia (Collaborator) commented May 9, 2018

It seems to be working for h_Ising at criticality. Conjugate gradient works better than steepest descent. In the end there were no problems with tf.while_loop. Why do you define learning_rate? It seems to be doing nothing.
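For intuition on why conjugate gradient can beat steepest descent here, a minimal NumPy sketch on a toy ill-conditioned quadratic (an illustration only, not the MPS energy; the matrix, vector, and iteration counts are made up): CG minimizes an n-dimensional quadratic in at most n steps, while steepest descent zig-zags when the condition number is large.

```python
import numpy as np

# Toy quadratic f(x) = 0.5 x^T A x - b^T x with condition number 100.
A = np.diag([1.0, 100.0])
b = np.array([1.0, 1.0])
x_star = np.linalg.solve(A, b)  # exact minimizer

def steepest_descent(iters):
    x = np.zeros(2)
    for _ in range(iters):
        r = b - A @ x                        # residual = negative gradient
        x = x + (r @ r) / (r @ (A @ r)) * r  # exact line search
    return x

def conjugate_gradient(iters):
    x = np.zeros(2)
    r = b - A @ x
    p = r.copy()
    for _ in range(iters):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap
        p = r_new + (r_new @ r_new) / (r @ r) * p
        r = r_new
    return x

# CG is exact after 2 steps on a 2-D quadratic; steepest descent is still
# far off after 50 steps because the ill-conditioning makes it zig-zag.
err_sd = np.linalg.norm(steepest_descent(50) - x_star)
err_cg = np.linalg.norm(conjugate_gradient(2) - x_star)
```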

mencia added 8 commits May 3, 2018 15:23
…into entangle

# Conflicts:
#	test_tfimps.py
#	tfimps.py
…nce is better with Conjugate Gradient than with Steepest Descent. In the end, there were no difficulties with tf.while_loop; it stays as it was. I only needed to include the Stiefel variable to fix the dimension mismatch. Is there any reason why you define learning_rate? It seems to not be doing anything.
2) I introduced r_prec, which sets the convergence condition for the right eigenvector.
3) I changed the definition of X1 to an equivalent, simpler one.
4) I introduced two new methods that express the energy as a sum of an onsite and a NN term (instead of using the identity trick that you use in the definition of X1). This gives the same result, but I am not using it; I stuck to the identity trick in X1.
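For reference, the Stiefel constraint mentioned above keeps the variable's columns orthonormal. A minimal NumPy sketch of a QR-based retraction onto the Stiefel manifold (an illustration of the constraint only, not Pymanopt's implementation):

```python
import numpy as np

def stiefel_retract(A):
    """Map an arbitrary n x p matrix (n >= p) to a matrix with orthonormal
    columns via a QR decomposition. Flipping columns so diag(R) >= 0 makes
    the result unique."""
    Q, R = np.linalg.qr(A)
    signs = np.sign(np.diag(R))
    signs[signs == 0] = 1.0
    return Q * signs  # broadcast the sign flip over columns

rng = np.random.default_rng(0)
Q = stiefel_retract(rng.standard_normal((6, 3)))
# Orthonormal columns: Q.T @ Q is the 3x3 identity (up to float tolerance)
```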
@AustenLamacraft (Owner)
All tests are failing for me at the moment. Seems to be because of this new r_prec argument. It should really be a keyword argument with a default, placed after the existing parameters; then the existing tests won't fail without it.
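A sketch of the suggested fix, with a toy fixed-point solver standing in for the real code (the function name and the default tolerance are hypothetical): giving r_prec a default and placing it after the existing parameters keeps old call sites valid.

```python
import math

# r_prec is a keyword argument with a default, added after the existing
# parameters, so call sites written before it existed keep working unchanged.
def fixed_point(f, x0, max_iter=1000, r_prec=1e-10):
    x = x0
    for _ in range(max_iter):
        x_new = f(x)
        if abs(x_new - x) < r_prec:  # convergence condition controlled by r_prec
            return x_new
        x = x_new
    return x

root_default = fixed_point(math.cos, 1.0)             # old call site, no r_prec
root_loose = fixed_point(math.cos, 1.0, r_prec=1e-3)  # new call site overrides it
```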

@AustenLamacraft (Owner)
What's the idea behind the altered convergence condition? I suppose it must have different scaling with bond_d...
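One plausible reading of the scaling question, sketched in NumPy (illustrative only; this is not the actual condition used in the code): if the convergence test is a norm of a D x D difference, with D the bond dimension, then for a fixed per-entry error eps the Frobenius norm grows roughly like eps * D while the largest entry stays near eps, so a fixed tolerance means different things under the two norms as D grows.

```python
import numpy as np

rng = np.random.default_rng(1)
eps = 1e-8  # assumed per-entry error scale, for illustration
fro = {}
for D in (2, 8, 32):
    diff = eps * rng.standard_normal((D, D))
    fro[D] = np.linalg.norm(diff)  # Frobenius norm ~ eps * D
    # np.abs(diff).max() stays near eps regardless of D
```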

@AustenLamacraft (Owner)
Probably should avoid `next` in the lambda functions for `tf.while_loop`, as `next` is a built-in function. I think that's my fault.
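A quick illustration of why the shadowing matters (plain Python, no TensorFlow needed): inside a lambda whose parameter is named `next`, that name refers to the argument, so the built-in `next()` is unreachable in the body.

```python
it = iter([10, 20])

bad_body = lambda next: next(it)   # `next` here is the argument, not the built-in
try:
    bad_body(0)                    # 0(it) raises TypeError: 'int' is not callable
    shadowed = False
except TypeError:
    shadowed = True

good_body = lambda nxt: next(it)   # renamed argument; the built-in is visible
value = good_body(0)
```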
