Improvements to the loggers
Extract Logger and LoggerMixin from the loggers file, to avoid loading the dependencies of every logging framework when only the abstract base class or the mixin is needed. Alternatively, move the imports of the logging frameworks (e.g., mlflow) inside itwinai's wrapper classes.
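A minimal sketch of the second option, deferring the framework import to the wrapper's constructor so that importing the loggers module itself stays cheap (class and method names below are illustrative, not itwinai's exact API):

```python
from abc import ABC, abstractmethod


class Logger(ABC):
    """Framework-agnostic base class; importing it pulls in no logging backend."""

    @abstractmethod
    def log(self, item, identifier, kind="metric", **kwargs): ...


class MLFlowLogger(Logger):
    """Wrapper that only imports mlflow when it is actually instantiated."""

    def __init__(self, experiment_name: str):
        import mlflow  # deferred import: paid for only by users of this logger

        self._mlflow = mlflow
        self._mlflow.set_experiment(experiment_name)

    def log(self, item, identifier, kind="metric", **kwargs):
        if kind == "metric":
            self._mlflow.log_metric(identifier, item)
```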
Make Prov4ML an optional dependency #278 is related to the above. Prov4ML (now yProvML) has many dependencies, which slow down both installation and dynamic import at runtime.
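One possible guard for an optional backend, assuming a hypothetical `prov4ml` extra in the package metadata and that the library is importable as `prov4ml`:

```python
def _require_prov4ml():
    """Import prov4ml lazily, failing with an actionable message if absent."""
    try:
        import prov4ml  # heavy, optional provenance-logging backend
    except ImportError as err:
        raise ImportError(
            "Prov4ML support is optional; install it with e.g. "
            "`pip install itwinai[prov4ml]` (extra name hypothetical)."
        ) from err
    return prov4ml


class Prov4MLLogger:
    """Provenance logger whose backend import cost is paid only on construction."""

    def __init__(self, **prov_kwargs):
        self._prov4ml = _require_prov4ml()
```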
Is there a way to integrate Ray's reporting mechanism into the call to self.log(...) in the trainer? For example: if I am in a Ray trial, then also report the values I am logging with kind="metric". There may be better ways to do it.
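A rough sketch of one way to do it; the `log(...)` signature and the mixin are assumptions, and whether `ray.train.report` warns or raises when called outside a session depends on the Ray version:

```python
import ray.train


class RayReportMixin:
    """Mix into a trainer so that metric logs are also reported to Ray."""

    def log(self, item, identifier, kind="metric", step=None, **kwargs):
        # Keep the normal itwinai logging path (signature assumed).
        super().log(item, identifier, kind=kind, step=step, **kwargs)
        if kind != "metric":
            return
        try:
            # Forward the value to the current Ray Train/Tune session; outside
            # a Ray trial this call fails, so the error is swallowed to keep
            # non-Ray runs unaffected.
            ray.train.report({identifier: item})
        except Exception:
            pass
```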
Simplify log_freq by splitting it into log_freq_epoch (int) and log_freq_batch (int). In both cases, freq=0 means no logging.
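A sketch of the proposed semantics (the helper name and the commented usage are illustrative):

```python
def should_log(freq: int, index: int) -> bool:
    """Return True when logging should happen at this batch/epoch index.

    freq == 0 means "never log"; freq == n means "log every n-th index".
    """
    return freq > 0 and index % freq == 0


# Usage inside a (hypothetical) training loop:
# if should_log(self.log_freq_batch, batch_idx):
#     self.log(loss.item(), "train_loss_batch", kind="metric")
# if should_log(self.log_freq_epoch, epoch):
#     self.log(epoch_loss, "train_loss_epoch", kind="metric")
```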
Checkpoint and resume the full training state from a checkpoint (epoch number, model weights, optimizer, LR scheduler). Some of this is already in place, but it is not clear whether we are currently able to resume training from a checkpoint.
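A minimal PyTorch sketch of what saving and restoring the full training state could look like (key names and file layout are assumptions, not itwinai's current checkpoint format):

```python
import torch


def save_checkpoint(path, epoch, model, optimizer, lr_scheduler):
    """Persist everything needed to resume training exactly where it stopped."""
    torch.save(
        {
            "epoch": epoch,
            "model_state": model.state_dict(),
            "optimizer_state": optimizer.state_dict(),
            "lr_scheduler_state": lr_scheduler.state_dict(),
        },
        path,
    )


def load_checkpoint(path, model, optimizer, lr_scheduler, device="cpu"):
    """Restore model, optimizer and scheduler; return the epoch to resume from."""
    ckpt = torch.load(path, map_location=device)
    model.load_state_dict(ckpt["model_state"])
    optimizer.load_state_dict(ckpt["optimizer_state"])
    lr_scheduler.load_state_dict(ckpt["lr_scheduler_state"])
    return ckpt["epoch"] + 1
```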
@annaelisalappe please add other things you had in mind.