IncompatibleKeys Error when load pre-trained model to do the fine-tuning. #14
Comments
It's fine, there is an extra head during pretraining, which is not used for finetuning.
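For context, here is a minimal sketch of why such warnings are usually harmless, assuming a plain PyTorch model and checkpoint (the module and key names are illustrative, not the ones from this repository):

```python
import torch
import torch.nn as nn

# Hypothetical fine-tuning model: same backbone as pretraining,
# but without the pretraining head.
class Backbone(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Linear(16, 32)

    def forward(self, x):
        return self.encoder(x)

# A pretraining checkpoint typically contains the backbone weights plus an
# extra head (here "pretrain_head.*") that fine-tuning does not use.
pretrain_state = {
    "encoder.weight": torch.zeros(32, 16),
    "encoder.bias": torch.zeros(32),
    "pretrain_head.weight": torch.zeros(4, 32),  # extra head, unused downstream
    "pretrain_head.bias": torch.zeros(4),
}

model = Backbone()
# strict=False lets the load succeed; the returned object reports what did not match.
result = model.load_state_dict(pretrain_state, strict=False)
print("missing keys:", result.missing_keys)        # weights the model expected but the checkpoint lacks
print("unexpected keys:", result.unexpected_keys)  # checkpoint weights the model has no place for
# If only the pretraining head shows up as "unexpected", the load is fine.
```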
I am not so sure. The loss looks pretty large. Have you tried only doing prompt tuning with the pretrained model? If it doesn't reach reasonable performance, it means the pretraining is not working well.
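A minimal sketch of that kind of sanity check, under the assumption of a frozen backbone plus a trainable prompt embedding (function and parameter names are placeholders, not this repo's API):

```python
import torch
import torch.nn as nn

def prompt_tuning_setup(model: nn.Module, prompt_len: int = 16, dim: int = 32):
    """Freeze the pretrained model; train only a small prompt embedding.

    If even this cheap setup cannot reach reasonable performance on the
    downstream data, the pretrained weights themselves are likely the issue.
    """
    for p in model.parameters():
        p.requires_grad = False  # keep pretrained weights fixed
    prompt = nn.Parameter(torch.randn(prompt_len, dim) * 0.02)  # only trainable tensor
    optimizer = torch.optim.Adam([prompt], lr=1e-3)
    return prompt, optimizer

# Usage with any pretrained backbone (placeholder model shown here):
pretrained_model = nn.Linear(32, 32)
prompt, optimizer = prompt_tuning_setup(pretrained_model)
```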
Thank you again for your suggestion. My data is very different from the datasets in your paper. My input is a sequence of unnormalized integers (i.e., …
Nice work and I have a question here:
I am trying to pre-train and fine-tune a model on my own datasets. However, some warnings were raised when loading the pre-trained model during fine-tuning:
Is everything correct here?
Thank you for your help!