During fine-tuning, checkpoints are saved automatically; the best one is, say,
\lag-llama-main\lightning_logs\version_12\checkpoints\epoch=34-step=5678.ckpt
(this path is printed on the console).
For subsequent zero-shot prediction, should I pass epoch=34-step=5678.ckpt to LagLlamaEstimator(...) instead of lag-llama.ckpt? Thank you.
Hi, if you're fine-tuning, then the evaluation is no longer "zero-shot". And yes, you should provide the right (latest) checkpoint to evaluate results from the fine-tuned model.
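If it helps, here is a minimal sketch of how one might locate the most recent checkpoint under the `lightning_logs` directory programmatically, rather than copying the path from the console. The helper name `latest_checkpoint` is illustrative; only the `lightning_logs/version_*/checkpoints/*.ckpt` layout comes from the discussion above, and the returned path would then be passed wherever `LagLlamaEstimator(...)` expects its checkpoint.

```python
from pathlib import Path
from typing import Optional


def latest_checkpoint(log_dir: str) -> Optional[Path]:
    """Return the most recently modified .ckpt file under a
    PyTorch Lightning logs directory, or None if none exist.

    Assumes the standard layout: <log_dir>/version_*/checkpoints/*.ckpt
    """
    ckpts = sorted(
        Path(log_dir).glob("version_*/checkpoints/*.ckpt"),
        key=lambda p: p.stat().st_mtime,
    )
    return ckpts[-1] if ckpts else None
```

Sorting by modification time picks the checkpoint written last across all `version_*` runs; if you instead want the best checkpoint by validation loss, you would still need the filename Lightning reports (as in the console message above).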