All candidate pair scores? #4
Hi,
I was trying to understand the code. I found that the abstract is fed through the model separately for each candidate entity pair, whereas the paper says it is encoded once for all candidate pairs. Am I missing something?
Thanks
Comments
Currently the code does some redundant computation, re-encoding the abstract for each entity pair. However, it computes the full pairwise score tensor without any entity-pair-specific features, which you can see here: https://github.com/patverga/bran/blob/master/src/models/transformer.py#L468 . You could compute that tensor once and then aggregate scores for each entity pair. This could be done efficiently with a gather/scatter, but it is not currently implemented.
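For anyone reading this later, here is a minimal NumPy sketch of that idea. The shapes, variable names (`pairwise_scores`, `entity_mentions`, `aggregate_pair_score`), and the log-sum-exp pooling are assumptions for illustration, not the bran API: compute the token-by-token score tensor once per abstract, then gather the head-mention x tail-mention block for each candidate pair and pool it into a single score.

```python
import numpy as np

# Toy sizes: one abstract with 8 tokens (hypothetical, not bran's real shapes).
batch, seq_len = 1, 8
rng = np.random.default_rng(0)

# Full token-by-token score tensor, computed ONCE per abstract
# (this stands in for the tensor at transformer.py#L468).
pairwise_scores = rng.normal(size=(batch, seq_len, seq_len))

# Hypothetical mention spans: token indices where each entity is mentioned.
entity_mentions = {
    "e1": [1, 2],   # entity 1 mentioned at tokens 1 and 2
    "e2": [5],      # entity 2 mentioned at token 5
    "e3": [6, 7],
}
candidate_pairs = [("e1", "e2"), ("e1", "e3")]

def aggregate_pair_score(scores, head_idx, tail_idx):
    """Gather the head-token x tail-token block and pool it into one score
    (log-sum-exp pooling is used here as an example aggregation)."""
    block = scores[np.ix_(head_idx, tail_idx)]   # gather the relevant entries
    return np.log(np.exp(block).sum())           # pool over mention pairs

for head, tail in candidate_pairs:
    s = aggregate_pair_score(pairwise_scores[0],
                             entity_mentions[head], entity_mentions[tail])
    print(head, tail, s)
```

The point is that `pairwise_scores` is built once, and each candidate pair only needs a cheap gather-and-pool over its mention indices rather than a fresh pass through the encoder.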
Hi, a few lines above the linked line, it seems e1_mask and e2_mask are not used in any way. How does the model know the token positions of the current entity pair? Thanks!
Ah, never mind, I just saw the ep_dist list.
Did you find out what exactly it does? Thanks!