Supervision loss in code and paper #4
Hi, thanks for your work, it's very interesting. I noticed that in the supervised part, the loss function is different between the code and the paper, right? In the paper, the supervised loss is simply the negative log-likelihood of the ground-truth rotation under the predicted distribution. Why is the loss function in the code different? Is the loss function in the code mentioned in the paper?
Thanks!
Hi, thanks for your interest. The supervised loss in the code is the negative log-likelihood, the same as in the paper. It is defined at https://github.com/yd-yin/FisherMatch/blob/main/fisher/fisher_utils.py#L19. Why do you think it is different from the paper?
Hi, I think I am confused because the code looks a bit different from Eq. (6) in the paper. What do log_exponent, overreg, and log_normalizer represent, respectively? I thought the loss function would look like -log(MF(y, A)). I also ran the code using this loss function, and the returned loss is around -5. Is this normal? Loss is usually positive (negative is fine as well); I just want to double-check. It might be because I am not familiar with the matrix Fisher distribution. I hope you can provide some explanation of the code here: https://github.com/yd-yin/FisherMatch/blob/main/fisher/fisher_utils.py#L19 Thanks!
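For readers puzzling over the same three names, here is a minimal sketch of what they plausibly correspond to, assuming the standard matrix Fisher form p(R | A) = exp(tr(A^T R)) / c(A). The NLL then splits as -log p(R | A) = log c(A) - tr(A^T R), i.e. a log_normalizer term plus a log_exponent term; overreg is presumably a weight (slightly above 1) on the normalizer, the over-regularization trick from [1]. The function names below (fisher_nll, log_normalizer_mc, random_rotations) are hypothetical, and the Monte Carlo normalizer is only a stand-in for the repo's actual computation of log c(A):

```python
import math
import torch

def random_rotations(n):
    """Haar-uniform random rotations via QR decomposition of Gaussian matrices."""
    M = torch.randn(n, 3, 3)
    Q, R = torch.linalg.qr(M)
    # Standardize column signs so Q is uniform on O(3)...
    Q = Q * torch.diagonal(R, dim1=-2, dim2=-1).sign().unsqueeze(-2)
    # ...then flip one column wherever det(Q) = -1 to land in SO(3).
    neg = torch.linalg.det(Q) < 0
    Q[neg, :, 0] *= -1.0
    return Q

def log_normalizer_mc(A, n_samples=200_000):
    """Monte Carlo estimate of log c(A), where c(A) = E_{R ~ Haar}[exp(tr(A^T R))]."""
    R = random_rotations(n_samples)
    t = torch.einsum('ij,nij->n', A, R)               # tr(A^T R) per sample
    return torch.logsumexp(t, dim=0) - math.log(n_samples)

def fisher_nll(A, R_gt, overreg=1.0):
    """-log p(R_gt | A) = overreg * log c(A) - tr(A^T R_gt)."""
    log_exponent = -torch.einsum('ij,ij->', A, R_gt)  # negated exponent term
    log_normalizer = log_normalizer_mc(A)             # log c(A)
    return log_exponent + overreg * log_normalizer
```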
It's normal: the matrix Fisher distribution is a continuous density, so its value at a rotation can exceed 1, and the negative log-likelihood can therefore be negative [1].
[1] Probabilistic orientation estimation with matrix Fisher distributions. NeurIPS 2020
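A quick check of this with the hypothetical sketch above (A = 10 * I is an arbitrary "confident" prediction chosen for illustration, not a value from the repo):

```python
# A concentrated distribution puts density well above 1 at its mode, so the
# NLL evaluated at a ground truth near the mode comes out negative.
torch.manual_seed(0)
A = 10.0 * torch.eye(3)      # confident prediction, mode at the identity
R_gt = torch.eye(3)          # ground-truth rotation at the mode
print(fisher_nll(A, R_gt))   # roughly -6: negative, as reported in the thread
```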