Cache CG coefficients #71
Conversation
The anger of `@functools.cache`:

```python
import functools
import numpy as np

@functools.cache
def f(n: int) -> np.ndarray:
    return np.array(n)

x = f(1)
x += 41
print(f(1))
# prints 42
```

This is why I did not cache the function.

```python
@functools.cache
def _f(n: int) -> np.ndarray:
    return np.array(n)

def f(n: int) -> np.ndarray:
    return _f(n).copy()

x = f(1)
x += 41
print(f(1))
# prints 1
```

Please do something like that.
Thanks for the quick reply!
This can happen accidentally. It happened to me on the pytorch version of e3nn, and that's why I did not cache this code. (Oops, closed by misclick.)
I am also getting stalled by the CG computing code (one of the asserts gets angry) when running lmax > 11. My workaround is explicitly calling coefficients cached in BasisLib. The interface looks something like this: `e3nn_jax.clebsch_gordan_basislib(l1, l2, l3)`. Can put in a PR if it makes sense? @mariogeiger I think for lower Ls the latency of loading from disk might be more than just computing it on the fly as is done currently, so maybe have some kind of if condition that only calls this for higher L? The same machinery can be used for spherical harmonics as well, so I can also file a PR there.
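A minimal sketch of the dispatch idea above, for concreteness. Everything here is illustrative: the cutoff, the `.npz` file and its key layout, and the function names are assumptions, and `e3nn_jax.clebsch_gordan` stands in for the existing on-the-fly routine.

```python
import functools

import numpy as np
import e3nn_jax

_LMAX_ON_THE_FLY = 11  # hypothetical cutoff: above this, on-the-fly computation is too slow


@functools.cache
def _clebsch_gordan_cached(l1: int, l2: int, l3: int) -> np.ndarray:
    if max(l1, l2, l3) <= _LMAX_ON_THE_FLY:
        # cheap enough to compute directly for small L
        return np.asarray(e3nn_jax.clebsch_gordan(l1, l2, l3))
    # hypothetical precomputed table shipped on disk, keyed by (l1, l2, l3)
    with np.load("cg_coefficients.npz") as data:
        return data[f"{l1}_{l2}_{l3}"]


def clebsch_gordan_basislib(l1: int, l2: int, l3: int) -> np.ndarray:
    # return a copy so the cached array can never be mutated in place
    return _clebsch_gordan_cached(l1, l2, l3).copy()
```

The public wrapper returns a copy, following the copy-safe pattern requested earlier in the thread.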
Either we completely switch to BasisLib or we keep it local. Why have two different implementations of the same thing?
Btw I'm waiting for the copy-safe cache, and then we can already merge this.
@mariogeiger Is this what you meant? mitkotak#1
I mean what I said in my first comment. We need to systematically copy the cached data to avoid it getting modified in place by accident. This causes bugs that are extremely hard to debug.
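One way to make this systematic, sketched here only as an illustration (`copy_safe_cache` is not an existing e3nn-jax helper): wrap `functools.cache` in a decorator that copies on the way out.

```python
import functools

import numpy as np


def copy_safe_cache(fn):
    """Cache fn like functools.cache, but hand every caller a fresh copy,
    so in-place edits can never corrupt the cached array."""
    cached = functools.cache(fn)

    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        return cached(*args, **kwargs).copy()

    return wrapper


@copy_safe_cache
def f(n: int) -> np.ndarray:
    return np.array(n)


x = f(1)
x += 41
print(f(1))  # still prints 1: the cached array was never touched
```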
@mariogeiger Sorry, it seems like my push did not update GitHub before. I have the safe-copy change that you were requesting. Can you have a look at the PR again?
I don't see any changes
Sorry for the confusion. Can you see this?
This is a PR in your own fork repo
Did not have write permissions to this one
I guess it's because we are on @olivier-peltre's own branch. Maybe you can create a new PR, but this time select this repo as the target, not your own fork.
Right now, Clebsch-Gordan coefficients are recomputed at every tensor product call. This means they are also recomputed e.g. every time N scalars are used to gate N irreps with `IrrepsArray.__mul__`, as it calls `elementwise_tensor_product`, i.e. at every MACE message-passing step, symmetric contraction step, etc.

The torch version of e3nn wraps CG coefficients in `functools.cache`:
https://github.com/e3nn/e3nn/blob/ac3528f7fb5fe1a8838f1df087d2eefe60d91ea8/e3nn/o3/_wigner.py#L149

NB: `functools.lru_cache(maxsize=None)` is an alias for `functools.cache`.

NB: it might also be more efficient to use a dense/sparse array format than relying on a generic hashtable.
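For illustration, a minimal sketch of precomputing the coefficients once up front instead of caching lazily. It assumes `e3nn_jax.clebsch_gordan(l1, l2, l3)` is the existing on-the-fly routine; the cutoff is arbitrary, and a prefilled lookup table stands in here for the dense/sparse layout mentioned above.

```python
import numpy as np
import e3nn_jax

LMAX = 11  # illustrative cutoff

# Precompute every allowed (l1, l2, l3) block once at import time; the selection
# rule |l1 - l2| <= l3 <= l1 + l2 keeps the table small.
_CG_TABLE = {
    (l1, l2, l3): np.asarray(e3nn_jax.clebsch_gordan(l1, l2, l3))
    for l1 in range(LMAX + 1)
    for l2 in range(LMAX + 1)
    for l3 in range(abs(l1 - l2), min(l1 + l2, LMAX) + 1)
}


def clebsch_gordan(l1: int, l2: int, l3: int) -> np.ndarray:
    # Copy so callers cannot mutate the precomputed block in place.
    return _CG_TABLE[(l1, l2, l3)].copy()
```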