Issues: xlang-ai/instructor-embedding
Some weights of the model checkpoint at /home/rnd/wmj/instructor-large/instructor-embedding/output/checkpoint-6500/ were not used when initializing T5EncoderModel: ['2.linear.weight']
- This IS expected if you are initializing T5EncoderModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing T5EncoderModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
max_seq_length 512
#117 opened May 11, 2024 by EricPaul03
Inconsistency between the suggested instruction template and the one used in the training data
#96 opened Nov 9, 2023 by debraj135
Your script to quantize the instructor models simply doesn't work.
#85 opened Sep 17, 2023 by BBC-Esq