
Question about continuous prompt initialization #2

Open
menglin0320 opened this issue May 27, 2023 · 1 comment

Comments

@menglin0320

I wonder if you tried simply averaging the embeddings of the title's tokens to initialize the continuous prompts. That seems like a simpler solution to the problem you mention in the paper. I want to try out the paper's ideas, but the two-stage training somewhat scares me away (I only have limited time for a project).
Do you think "average the title's token embeddings to initialize the continuous prompts" is a valid idea?

@lileipisces
Owner

You can give it a shot. I think titles, which consist of real words, are more compatible with the model than random embeddings. This could help the model converge faster and shorten the training time.
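The initialization being discussed can be sketched roughly as follows. This is a minimal, framework-agnostic illustration in NumPy, not the repository's actual code: the embedding table, token ids, and prompt length are all toy placeholders, and a real implementation would look up the title's ids with the model's tokenizer and copy the result into the trainable prompt parameters.

```python
import numpy as np

def init_prompt_from_title(title_token_ids, embedding_table, prompt_len):
    """Initialize continuous prompt vectors by averaging title token embeddings.

    title_token_ids: list of int ids for the title's tokens (hypothetical)
    embedding_table: (vocab_size, d) word-embedding matrix
    prompt_len:      number of continuous prompt positions to fill
    """
    title_embs = embedding_table[title_token_ids]   # (n_tokens, d)
    mean_emb = title_embs.mean(axis=0)              # (d,) average embedding
    # Repeat the averaged vector for every prompt position.
    return np.tile(mean_emb, (prompt_len, 1))       # (prompt_len, d)

# Toy example: vocabulary of 10 tokens, embedding dimension 4.
rng = np.random.default_rng(0)
embedding_table = rng.normal(size=(10, 4))
title_ids = [2, 5, 7]  # hypothetical token ids of one item's title
prompt = init_prompt_from_title(title_ids, embedding_table, prompt_len=3)
print(prompt.shape)  # (3, 4)
```

The resulting `prompt` matrix would then be used as the starting value of the trainable prompt parameters instead of a random initialization; training proceeds unchanged from there.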
