Describing and serializing objects #23
Why not implement the pickle serialization methods?
The reason for not using pickle is security, since saved networks will probably be shared among users. I recommend that we use HDF5 as much as possible. Brainstorm already supports Describable objects, so we can generate a description from them.
Is pickle insecure?
Indeed: http://www.benfrederickson.com/dont-pickle-your-data/
Thanks for the link!
We can still provide the pickle methods. None of this should be difficult once I've fixed the serialization issue. The network architecture is JSON serializable, and so will be the initializers, weight-modifiers, and gradient-modifiers. All that remains to be done then is to save the weights (and maybe the state), and those are just big arrays.
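To make the JSON point concrete, here is a minimal sketch of what such an architecture description could look like; the layer names and keys are illustrative only, not Brainstorm's actual schema:

```python
import json

# Hypothetical description -- the keys and layer types below are made up
# for illustration and do not reflect Brainstorm's actual schema.
architecture = {
    "Input": {"@type": "Input", "out_shapes": {"default": [784]}},
    "Hidden": {"@type": "FullyConnected", "size": 100},
    "Output": {"@type": "SoftmaxCE", "size": 10},
}

# Plain dicts of numbers and strings round-trip through JSON losslessly.
assert json.loads(json.dumps(architecture)) == architecture
```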
Ok, I implemented describing the network and the handlers. From here it should only be a small step to making them pickleable, and once we've fixed the format for HDF5 that should be rather easy too.
### Format suggestion for HDF5

The simplest format for storing a network as HDF5 would have two groups (sketched below):

- the JSON description of the network
- the weights of the network (one big array)
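A minimal sketch of that two-group layout; the group names `description` and `parameters` are assumptions, not a fixed format:

```python
import h5py
import numpy as np

# Sketch only -- the group names "description" and "parameters" are
# assumptions; this discussion does not fix them.
with h5py.File("network.h5", "w") as f:
    f.create_group("description")            # will hold the JSON description
    f.create_dataset("parameters/weights",   # the flat weight buffer
                     data=np.zeros(1000))
```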
We could unfold both of these more, to make the network easier to inspect by just looking at the HDF5 file.

### Buffer structure

HDF5 supports links (hard and soft) to pieces of the data, so we could actually mirror the whole buffer structure in the file (see the sketch below).
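A rough illustration of HDF5 links in h5py; all paths here are made up, and note that linking into *slices* of a flat buffer would need HDF5 region references rather than plain links:

```python
import h5py
import numpy as np

# Illustrative paths only. A hard link exposes the same dataset under a
# second name without copying data; a soft link stores just the path.
with h5py.File("buffers.h5", "w") as f:
    w = f.create_dataset("parameters/Hidden/W", data=np.zeros((784, 100)))
    f["layers/Hidden/W"] = w                                            # hard link
    f["layers/Hidden/W_alias"] = h5py.SoftLink("/parameters/Hidden/W")  # soft link
```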
I do like this idea, and I think it should not be difficult. The only drawback is that it might be confusing if you want to write a network file without using brainstorm.

### Network Description

We could unfold the JSON description of the network into the HDF5 file by (mis)using its internal structure: dictionaries would become groups; integers, floats, and strings would become attributes; numerical lists would become datasets; and other lists would have to be shoehorned into something. This would allow browsing the architecture of the network using just a normal HDF5 viewer. On the con side it is quite a bit of work, and it feels a bit artificial. I do not think this is worth the effort.

### Optional Full State

We could optionally allow saving the full internal state of the network to the HDF5 file. Not too much work, but I'm not sure about good use cases.
Overall that sounds good. But how are you going to store the JSON? I didn't think HDF5 was meant for, or well suited to, storing large strings. OTOH, I don't like the idea of translating the JSON into HDF5 structure either; it sounds artificial/forced to me, too. Also, I'm not sure there's ever a need to store the full state, since it can usually be recomputed easily afterwards (and who needs the full state for one very specific input data point, anyhow?)
JSON + array in HDF5 is good. Full state/gradients are probably not needed. What would actually help is to have helper functions (which can be used as hooks) that allow saving features/gradients/states/deltas to HDF5 files. These would be used for feature extraction, introspection, etc.
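Such a helper could look roughly like the sketch below; the call signature and the `get_buffer()` accessor are assumptions, not the real hook API:

```python
import h5py

class SaveBufferHook(object):
    """Hypothetical sketch of a hook that appends a named internal buffer
    (features, deltas, gradients, ...) to an HDF5 file on every call.
    The call signature and get_buffer() are assumptions, not the real API."""

    def __init__(self, filename, buffer_name):
        self.filename = filename
        self.buffer_name = buffer_name
        self.calls = 0

    def __call__(self, net):
        with h5py.File(self.filename, "a") as f:  # append mode
            f.create_dataset("%s/%06d" % (self.buffer_name, self.calls),
                             data=net.get_buffer(self.buffer_name))
        self.calls += 1
```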
Ok, I'll just store the JSON, encoded as UTF-8, as a byte array.
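With h5py, that round trip could look like this (file and dataset names are arbitrary):

```python
import json
import h5py
import numpy as np

description = {"network": {}, "trainer": {}}  # any JSON-serializable dict

# Write: JSON -> UTF-8 bytes -> uint8 dataset.
with h5py.File("network.h5", "w") as f:
    raw = json.dumps(description).encode("utf-8")
    f.create_dataset("description", data=np.frombuffer(raw, dtype=np.uint8))

# Read: uint8 dataset -> bytes -> JSON.
with h5py.File("network.h5", "r") as f:
    restored = json.loads(f["description"][()].tobytes().decode("utf-8"))

assert restored == description
```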
Done for saving and loading the network.
Are we doing anything about this for the release? Currently we can:

- save and load networks
- generate descriptions of the network and the trainer
So the only thing that would be missing is a way to continue interrupted training somehow. |
We can continue training from a network and trainer description.
The trainer description is not enough, because it (currently) discards "fleeting" information like the stepper state and the epoch/update counters. And you are right of course: data iterators don't really allow for anything but restarting after a completed epoch.
Unless we can save stepper states, epoch/update info, and optionally some info about the iterator state (batch index?), there does not seem to be much point in continuing training. The batch index can actually be deduced from the update number and the batch size, so we could just 'skip' data up to that point when restarting (see the sketch below).
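For a deterministic (unshuffled) iterator, the arithmetic would be roughly:

```python
def resume_position(update_number, num_samples, batch_size):
    # Sketch: with a deterministic, unshuffled iterator the position in
    # the data stream follows directly from the update counter.
    batches_per_epoch = num_samples // batch_size
    completed_epochs = update_number // batches_per_epoch
    batches_to_skip = update_number % batches_per_epoch
    return completed_epochs, batches_to_skip

# e.g. after 2500 updates on 60000 samples with batch size 100:
# 600 batches per epoch -> 4 completed epochs, skip 100 batches
print(resume_position(2500, 60000, 100))  # (4, 100)
```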
From what I can see, the mechanism for generating descriptions and serializing networks etc. does not fully work yet. @Qwlouse, you were working on this. Any comments on what else is needed?
I think we should have network.save() and trainer.save() methods for dumping to disk, and load_network() and load_trainer() functions for reading the dumps. It shouldn't be more complicated than this, I think. Thoughts?
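A sketch of how that proposed interface would be used; none of these methods or functions exist yet, and the names simply follow the suggestion above:

```python
# Proposed usage only -- these methods/functions do not exist yet.
# Assumes `net` and `trainer` are existing Network/Trainer instances.
net.save("network.h5")        # Network.save(): description + weights to disk
trainer.save("trainer.h5")    # Trainer.save(): trainer description to disk

from brainstorm import load_network, load_trainer  # hypothetical loaders
net2 = load_network("network.h5")
trainer2 = load_trainer("trainer.h5")
```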