Now multiple checkpoints will be saved after using -stopk option in train_model.py #4978
base: main
Conversation
Hi @hamjam! Thank you for your pull request and welcome to our community.

Action Required: In order to merge any pull request (code, docs, etc.), we require contributors to sign our Contributor License Agreement, and we don't seem to have one on file for you.

Process: In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g. your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA.

Once the CLA is signed, our tooling will perform checks and validations. Afterwards, the pull request will be tagged accordingly. If you have received this in error or have any questions, please contact us at [email protected]. Thanks!
Thank you for signing our Contributor License Agreement. We can now accept your code for this (and any) Meta Open Source project. Thanks!
Thanks for adding this! Could you please include a test?
if (
    opt['validation_metric_mode'] == 'max'
    and self.best_k_models[-1][1] >= opt['validation_cutoff']
) or (
    opt['validation_metric_mode'] == 'min'
    and self.best_k_models[-1][1] <= opt['validation_cutoff']
):
    logging.info('task solved! stopping.')
    return True
I think this check requires that we look at self.best_k_models[0], right? Since we're looking at the best metric.
My idea here was that we want the last saved model's metric to be better than validation_cutoff, so I checked the last one with self.best_k_models[-1].
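To make the distinction in this thread concrete, here is a minimal sketch of the two readings, assuming best_k_models is kept sorted best-first with the metric at index 1 (as the [1] indexing in the diff suggests); the function and data below are illustrative, not the PR's actual code.

# Hedged sketch: assumes best_k_models holds (checkpoint_path, metric)
# pairs sorted best-first; names and data are hypothetical.
def task_solved(best_k_models, mode, cutoff):
    """Return True once the kept checkpoints clear the validation cutoff.

    Checking best_k_models[-1] (the worst of the k kept models) means
    *all* k checkpoints must beat the cutoff; checking best_k_models[0]
    would stop as soon as the single best one does.
    """
    metric = best_k_models[-1][1]
    if mode == 'max':
        return metric >= cutoff
    return metric <= cutoff

# With mode='max' and cutoff=0.9: the best model already clears the
# cutoff, but the worst kept one does not, so training continues.
models = [('ckpt_a', 0.95), ('ckpt_b', 0.92), ('ckpt_c', 0.88)]
assert task_solved(models, 'max', 0.9) is False  # [-1] -> 0.88 < 0.9
assert models[0][1] >= 0.9                       # [0]  -> would stop now

Under this reading, indexing with [-1] is the stricter stopping condition, which matches the reply above.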
This PR has not had activity in 30 days. Closing due to staleness.
@jxmsML any interest in taking over this PR?
Hi Kurt,
Sorry for the late reply. I've been very busy lately and forgot to check my GitHub notifications. I can write the tests you mentioned within two weeks if no one else is interested.
Patch description
In this PR, the train_model.py script gains the option to keep multiple checkpoints based on validation metrics. The checkpoints with the best validation metrics seen during training are kept in the directory specified by --model-file. Resolves #4970.
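As a rough illustration of the retention scheme this description outlines, here is a minimal keep-top-k sketch; the class name, tuple layout, and eviction handling are assumptions for this example, not ParlAI's actual implementation.

# Hedged sketch of keep-top-k checkpoint retention; all names here
# are hypothetical.
class TopKCheckpoints:
    def __init__(self, k, mode='max'):
        self.k = k
        self.mode = mode  # 'max' means a higher metric is better
        self.best = []    # (path, metric) pairs, kept sorted best-first

    def report(self, path, metric):
        """Record a new checkpoint; return evicted paths, if any."""
        self.best.append((path, metric))
        self.best.sort(key=lambda pm: pm[1], reverse=(self.mode == 'max'))
        evicted = self.best[self.k:]
        self.best = self.best[:self.k]
        return [p for p, _ in evicted]

keeper = TopKCheckpoints(k=2, mode='max')
for path, metric in [('c1', 0.5), ('c2', 0.8), ('c3', 0.7)]:
    for stale in keeper.report(path, metric):
        pass  # os.remove(stale) would go here in real use; 'c1' is evicted
print(keeper.best)  # [('c2', 0.8), ('c3', 0.7)]

The design choice worth noting is that eviction happens eagerly on every report, so disk usage stays bounded at k checkpoints throughout training.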
Testing steps
To test this PR, you can run something like:
parlai train_model --task babi:task10k:1 --model seq2seq --model-file seq2seq/babi_task10k --batchsize 32 --validation-every-n-secs 30 -stopk 5 -vstep 50
Note that the -vstep option must be set to save and keep the top 5 checkpoints in this example. For me, the list of files in the --model-file directory after 1350 steps of training looks like this: