
How to train models with contrastive losses, masks, etc. #158

Open
18liumin opened this issue Nov 7, 2023 · 1 comment

Comments

@18liumin

18liumin commented Nov 7, 2023

No description provided.

@n1o

n1o commented May 29, 2024

I would like to see the actual pretraining code of CodeT5. I am reading up on the Identifier Tagging objective, where the contextual representations from the encoder are transformed into a vector of probabilities. Unfortunately there is no discussion of how; I assume there is a projection layer (with an L2 norm) whose output gets fed into a sigmoid. I came here to look up the code, but I either struggle to find it, or I am just blind.
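For reference, the Identifier Tagging head described in the CodeT5 paper is a per-token binary classifier: each final encoder hidden state is mapped to a probability that the token is an identifier, trained with binary cross-entropy. The sketch below is a minimal NumPy illustration of that idea, assuming a simple linear projection followed by a sigmoid; the actual layer shapes and any normalization in the official code may differ.

```python
import numpy as np

def sigmoid(x):
    """Elementwise logistic function."""
    return 1.0 / (1.0 + np.exp(-x))

def identifier_tagging_probs(hidden_states, W, b):
    """Map encoder hidden states to per-token identifier probabilities.

    hidden_states: (seq_len, d_model) final encoder representations
    W: (d_model,), b: scalar -- a hypothetical linear projection head
    """
    logits = hidden_states @ W + b
    return sigmoid(logits)

def bce_loss(probs, labels):
    """Binary cross-entropy over the per-token predictions."""
    eps = 1e-9
    return -np.mean(labels * np.log(probs + eps)
                    + (1 - labels) * np.log(1.0 - probs + eps))

# Toy example: 6 tokens, 8-dim hidden states, random head weights.
rng = np.random.default_rng(0)
seq_len, d_model = 6, 8
h = rng.normal(size=(seq_len, d_model))
W = rng.normal(size=(d_model,))
probs = identifier_tagging_probs(h, W, 0.0)
labels = np.array([1, 0, 0, 1, 0, 1], dtype=float)  # 1 = token is an identifier
loss = bce_loss(probs, labels)
```

In the paper this loss is summed with the other pretraining objectives; the head itself is only used during pretraining, not at inference time.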
