Pymanopt stiefel #4
base: master
Conversation
…into pymanopt (conflicts: tfimps.py)
…into entangle (conflicts: test_tfimps.py, tfimps.py)
It seems to be working for h_Ising at criticality: convergence is better with Conjugate-Gradient than with Steepest Descent. In the end there were no difficulties with tf.while_loop; it stays as it was. I only needed to include the Stiefel variable to fix the dimension mismatch. Is there any reason why you define learning_rate? It seems not to be doing anything.
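For readers outside the thread: a minimal sketch of how a Stiefel-constrained variable and pymanopt's solvers fit together, assuming the pymanopt 0.2-era TensorFlow interface (matching the TF1 style of tfimps.py). The quadratic cost and the sizes n, p are illustrative stand-ins, not the MPS energy from this PR.

```python
import numpy as np
import tensorflow as tf
from pymanopt import Problem
from pymanopt.manifolds import Stiefel
from pymanopt.solvers import ConjugateGradient

n, p = 8, 2                                # illustrative sizes
A = np.random.randn(n, n)
A = tf.constant((A + A.T) / 2)             # symmetric stand-in cost matrix

# The variable lives on the Stiefel manifold: n x p matrices with
# orthonormal columns, so the solver never leaves the constraint set.
X = tf.Variable(tf.zeros((n, p), dtype=tf.float64))
cost = tf.trace(tf.matmul(tf.transpose(X), tf.matmul(A, X)))

problem = Problem(manifold=Stiefel(n, p), cost=cost, arg=X)
Xopt = ConjugateGradient().solve(problem)  # or SteepestDescent().solve(...)
```

One relevant design point: pymanopt's solvers pick their step sizes by line search, so no externally supplied learning rate is consumed, which may be why the learning_rate variable appears to do nothing.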
2) I introduced r_prec, which sets the convergence condition for the right eigenvector.
3) I changed the definition of X1 to an equivalent, simpler one.
4) I introduced two new methods that express the energy as a sum of an on-site and a nearest-neighbour term, instead of the identity trick that you use in the definition of X1. This gives the same result, but I am not using them; I stuck with the identity trick in X1. (A toy illustration of the equivalence is sketched below.)
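A toy illustration of the equivalence claimed in point 4), with transverse-field Ising operators as stand-ins; h1 and h2 here are assumptions for illustration, not the repo's actual definitions.

```python
import numpy as np

sx = np.array([[0., 1.], [1., 0.]])
sz = np.array([[1., 0.], [0., -1.]])
I = np.eye(2)

h1 = -sz               # stand-in on-site (field) term
h2 = -np.kron(sx, sx)  # stand-in nearest-neighbour coupling

# "Identity trick": absorb the on-site term into a single two-site density
# by tensoring with the identity. Each site sits in two bonds of a
# translation-invariant chain, so h1 is split symmetrically to avoid
# double counting.
h_bond = h2 + 0.5 * (np.kron(h1, I) + np.kron(I, h1))

# The per-bond expectation of h_bond then equals the per-site expectation
# of h1 plus the per-bond expectation of h2, i.e. the separate
# on-site + NN evaluation of point 4).
```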
All tests are failing for me at the moment. Seems to be because of this …
What's the idea behind the altered convergence condition? I suppose it must have different scaling with …
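For concreteness, a hedged sketch of the kind of r_prec-controlled iteration under discussion: power iteration for the dominant right eigenvector of a transfer matrix T, written TF1-style with tf.while_loop to match tfimps.py. Only the name r_prec and the right-eigenvector role come from the thread; everything else is assumed.

```python
import tensorflow as tf

def right_fixed_point(T, r_prec=1e-12, max_iter=10000):
    # Assumes the dominant eigenvalue of T is positive (true for an MPS
    # transfer matrix), so the normalized iterates do not oscillate in sign.
    def norm(x):
        return tf.sqrt(tf.reduce_sum(tf.square(x)))

    r0 = tf.ones_like(T[:, :1])       # initial guess, shape (D, 1)
    r0 = r0 / norm(r0)

    def cond(r, delta, i):
        # Iterate until the last update fell below the tolerance r_prec.
        return tf.logical_and(delta > r_prec, i < max_iter)

    def body(r, delta, i):
        r_new = tf.matmul(T, r)
        r_new = r_new / norm(r_new)   # renormalize every step
        return r_new, norm(r_new - r), i + 1

    r, _, _ = tf.while_loop(
        cond, body, (r0, tf.constant(1.0, dtype=T.dtype), tf.constant(0)))
    return r
```

On the scaling question: a tolerance on the norm of the whole update vector behaves differently as the vector's dimension grows than a per-component test would, so a suitable r_prec may need to depend on the bond dimension.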
Probably should avoid …