Monitors always write to stdout #36
Comments
Should we maybe move to a full-blown logging integration? That way we'd get all the configuration options for writing to (rotating) files, filtering the messages, sending via HTTP/SMTP and so on for free.
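For a sense of what that would give us, here is a minimal sketch using only the standard library's `logging` module (the logger name `brainstorm` and the handler choices are illustrative, not an existing API):

```python
import logging
import logging.handlers

# Hypothetical: brainstorm would emit everything through a named logger...
log = logging.getLogger('brainstorm')
log.setLevel(logging.DEBUG)

# ...and users would attach whatever handlers they need "for free":
# a rotating file for high-frequency per-update output,
rotating = logging.handlers.RotatingFileHandler(
    'training.log', maxBytes=10 * 1024 * 1024, backupCount=3)
rotating.setLevel(logging.DEBUG)
log.addHandler(rotating)

# a console handler filtered down to the coarser per-epoch messages,
console = logging.StreamHandler()
console.setLevel(logging.INFO)
log.addHandler(console)

# and, if wanted, delivery via logging.handlers.SMTPHandler / HTTPHandler.
log.info('epoch finished')      # shows up on console and in the file
log.debug('update statistics')  # file only
```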
In my opinion full blown logging (à la …)
I think a basic implementation could just add a …
We agree, it does not look like we need full blown logging. @untom: there are three alternatives we can suggest for monitoring a large amount of logs, say update-wise logs: …

Is there a reason why having a write-to-file option directly in the monitors/trainer would be preferable over these?
1 and 3 aren't options because they don't allow you to monitor things while training happens (e.g. it's frustrating to wait for 2 days for a run to finish only to see that there were problems 2 iterations after the start). Option 2 would work, but is a much more roundabout approach, IMO.
… Regarding adding a direct option to dump logs to a file, I'm in favor of this if it seems useful. I think it should be an option for the trainer though, not the hooks, for simplicity. However, the reason I thought it should be a hook is that hooks already provide control over the timescale and interval of their execution. If we put this as an option in the Trainer, we'll need to provide these options separately (which would be a sort of duplication).
The problem with having this option is that I might want to have some output displayed on screen (e.g. per-epoch accuracy or even per-epoch parameter statistics) and other output in a file (e.g. per-update parameter statistics).
In that case you would have verbosity for … As I noted earlier though, we'll either need to provide additional args to …
That solution seems a bit hackish to me; I don't think it's very intuitive for new users. It also makes it impossible to log different hooks into different files.
To be clear, the verbosity behavior is how the hooks work already. It does seem like a bit of flexibility that may not be needed. We have this so that you don't have to specify the verbosity for each hook separately (it just saves some typing). Edit: I agree it's non-intuitive for new users, especially without any docs ;)

Firstly, is there a use case where it'd be required to log different hooks into different files? Secondly, the hooks currently don't print the logs or maintain them. They simply return the data to the trainer, whose job is to aggregate all the logs. This makes some things less complicated, e.g. …
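A toy sketch of that division of labour (all names invented for illustration; this shows the pattern described above, not brainstorm's actual code):

```python
# Hooks compute and return values; the trainer aggregates them into
# one log structure and is the only component that prints anything.
class AccuracyHook(object):
    def __call__(self, net):
        return {'accuracy': 0.97}  # dummy value for illustration

class Trainer(object):
    def __init__(self, hooks):
        self.hooks = hooks
        self.logs = {}  # the single place where all log data ends up

    def run_hooks(self, net):
        for name, hook in self.hooks.items():
            result = hook(net)
            self.logs.setdefault(name, []).append(result)
            print(name, result)  # printing is the trainer's job alone
```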
@untom, any ideas for tackling this issue?
This sounds like a perfect introductory exercise for me. Several years ago, I used … If you guys agree on what options should be available, I could draft a PR soon.
@ntfrgl, nice to see you paying attention to the issues already! Here's my current take on this issue. I don't feel that we gain much from incorporating full blown logging currently (in agreement with @untom). Gains from not using logging in the short term: …
In my experience, having used Pylearn2, Caffe, Pybrain, Pylstm etc., I have not had a proper use case for a logging mechanism with very fine controls. Caffe does use glog, but only provides a couple of logging levels for debug mode etc. In addition, one can easily combine …

We can incorporate proper logging in the future if this becomes a common feature request and the use cases become more evident. Additionally, we might make slight changes to how hooks work in the next month, so we should hold off on incorporating logging at least until then.
I've added the …
I'd strongly suggest that whatever the end result may be, it should directly write to Python's logging, so that when brainstorm is used from other code, together with other libraries (which is maybe not the norm, but is definitely our use case), you end up with all the log output in one place rather than having several different logging facilities.

Edit: I've added a small pull request to that end that simply allows for passing any function to trainer or hooks, which will then be called instead of print (with print() being the default, so if print is fine, no code using trainer or hooks needs to be changed).
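Under the mechanism that pull request describes, routing all of brainstorm's output into Python's `logging` could look roughly like this (the keyword name `printer` is a guess for illustration; see the PR for the actual interface):

```python
import logging

logging.basicConfig(filename='training.log', level=logging.INFO)
log = logging.getLogger('brainstorm')

def emit(message):
    """Drop-in replacement for print(): forwards to Python's logging."""
    log.info(message)

# Hypothetical usage -- the actual keyword argument may differ:
# trainer = Trainer(stepper, printer=emit)
emit('Epoch 1: accuracy 0.97')  # demo call
```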
So after giving this a bit more thought, we agree that having a way to direct the printing makes sense for using brainstorm as part of some application. We can discuss how to make that happen in #70. Note, though, that all the important information is kept by the trainer in …
Currently, all monitors write to stdout. If brainstorm is used from an IPython notebook and some monitor has an `update` interval, this will inevitably lead to a completely frozen browser session, as IPython notebooks usually cannot deal with the massive amount of output produced by many monitors.

It would be nice if the user could set where each monitor writes its output. `sys.stdout` is a sensible default, but for many applications it makes more sense to log to a file instead. Ideally this setting could be changed on a per-monitor basis, where the `destination` (or whatever one wants to call it) parameter could be either a file-like object or a string denoting a filename. (Optionally, we could still print to stdout and a file if `verbose==True`, and just to the file if `verbose==False`.)
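A minimal sketch of how the proposed `destination` parameter could be resolved (the helper name `resolve_destination` is made up for illustration, not brainstorm's API):

```python
import sys

def resolve_destination(destination=None):
    # Accept a file-like object or a filename string; default to stdout.
    if destination is None:
        return sys.stdout
    if isinstance(destination, str):
        return open(destination, 'a')  # filename: append to that file
    return destination  # assume it is already a writable file-like object

# Example: a monitor sending its per-update output to a file instead
out = resolve_destination('monitor_updates.log')
print('update 100: loss 0.42', file=out)
```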