Retraining model with new atlas #18
for kwyk, we are redoing the model with nobrainer. i think @kaczmarj has already created the variational pieces, and i have the data with the full freesurfer parcellations. we can indeed retrain the existing model with the more extensive parcellation and then tune it further with the neuromorphometrics atlases. this should not be hard at this point, and an example would be good.
hi all, the variational model is now part of nobrainer (https://github.com/neuronets/nobrainer/blob/master/nobrainer/models/bayesian.py). i will work on creating a minimal example (jupyter notebook) of retraining it.
@armaneshaghi - i created a jupyter notebook guide for transfer learning using the kwyk model: https://github.com/neuronets/nobrainer/blob/add/bayesian-transfer/guide/transfer_learning-bayesian.ipynb
you can access this on google colab at https://colab.research.google.com/github/neuronets/nobrainer/blob/add%2Fbayesian-transfer/guide/transfer_learning-bayesian.ipynb
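For readers skimming the thread, here is a rough sketch of what transfer learning to a new label set can look like in tf.keras. It is illustrative only: the stand-in network, the class counts, and the plain Conv3D head with a cross-entropy loss are all assumptions made for brevity; the actual kwyk model is variational and uses the layers and loss defined in the notebook linked above.

```python
import tensorflow as tf

# Stand-in for the pretrained network: in practice it would be built and its
# weights loaded as shown in the notebook linked above. Layer types, sizes,
# and class counts here are placeholders.
def build_stand_in_base(n_old_classes=50, block_shape=(32, 32, 32)):
    inputs = tf.keras.Input(shape=(*block_shape, 1))
    x = tf.keras.layers.Conv3D(16, 3, padding="same", activation="relu")(inputs)
    x = tf.keras.layers.Conv3D(16, 3, padding="same", activation="relu")(x)
    outputs = tf.keras.layers.Conv3D(n_old_classes, 1, activation="softmax")(x)
    return tf.keras.Model(inputs, outputs)

base_model = build_stand_in_base()

n_new_classes = 62  # hypothetical label count for the new atlas

# Swap the classification head so the output matches the new atlas.
features = base_model.layers[-2].output
head = tf.keras.layers.Conv3D(n_new_classes, 1, activation="softmax",
                              name="new_atlas_head")(features)
model = tf.keras.Model(base_model.inputs, head)

# Freeze the pretrained body at first; only the new head trains.
for layer in model.layers[:-1]:
    layer.trainable = False

model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```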
@kaczmarj - the instructions are still off. there is no pretrained model in nobrainer-models, so how does one find this: https://dl.dropbox.com/s/rojjoio9jyyfejy/nobrainer_spikeslab_32iso_weights.h5 ? we should do what we had said and turn nobrainer-models into a datalad repo.
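As an interim way to fetch the file referenced above, tf.keras's download helper can be used; this is a sketch only, the URL is the one quoted in the comment, and it may move once nobrainer-models is reorganized (for example into a datalad repository).

```python
import tensorflow as tf

# Download the weights file quoted in the comment above.
weights_path = tf.keras.utils.get_file(
    "nobrainer_spikeslab_32iso_weights.h5",
    origin="https://dl.dropbox.com/s/rojjoio9jyyfejy/nobrainer_spikeslab_32iso_weights.h5",
)

# The weights are architecture-specific: a model with the matching
# architecture (see the notebook linked earlier) has to be built first, then:
# model.load_weights(weights_path)
```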
also had to do:
also getting this error during training:
thanks @satra - i have to debug the model because i get nan loss sometimes, even with low learning rates. then i will save it to nobrainer-models.
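A few standard tf.keras guard rails for intermittent NaN losses, shown as a sketch only; `model`, `train_ds`, and `val_ds` are assumed to already exist, and the numeric values are placeholders rather than the settings used in this work.

```python
import tensorflow as tf

# Clip the gradient norm so a single bad batch cannot blow up the weights.
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-4, clipnorm=1.0)

callbacks = [
    tf.keras.callbacks.TerminateOnNaN(),           # stop the run as soon as loss goes NaN
    tf.keras.callbacks.ReduceLROnPlateau(          # back off the learning rate when stuck
        monitor="val_loss", factor=0.5, patience=2),
]

# model.compile(optimizer=optimizer, loss=..., metrics=[...])
# model.fit(train_ds, validation_data=val_ds, callbacks=callbacks)
```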
Hi all.
@cristidonos - thanks for the report. we have been retraining two new models with larger block sizes (128^3).
however, we are presently running into an out-of-memory error that happens partway through training and is somehow linked to tensorflow probability. it has been a difficult error to track down since it happens not at the initial model allocation but somewhere further down the road.
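For anyone hitting similar memory limits, two generic TensorFlow-side mitigations are sketched below; these are assumptions about common causes and do not address the tensorflow-probability issue itself.

```python
import tensorflow as tf

# 1. Allocate GPU memory on demand instead of reserving it all up front,
#    which makes actual usage easier to observe while debugging.
for gpu in tf.config.list_physical_devices("GPU"):
    tf.config.experimental.set_memory_growth(gpu, True)

# 2. Trade batch size against block size: a 128^3 block holds 64x the voxels
#    of a 32^3 block, so the batch size usually has to shrink to match.
block_shape = (128, 128, 128)
batch_size = 1  # placeholder; whatever fits on the card
```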
Thank you for the quick response, I am looking forward to the new models.
ideally i'd like to be at 256^3, as that integrates information over the entire brain. regarding learning rates and epochs, the defaults for this model are a good place to start; you can then monitor training and validation error to see whether any adjustments are needed.
you would likely not be able to go up to 128^3 (we are barely doing that on our v100s), but you can try it and see. the version of the code released here in this repo uses tensorflow 1, while the master version of nobrainer uses tensorflow 2.
i believe this would be ok if you start from the existing model and use transfer learning; however, this won't let you change the block size. results will also probably depend on the quality of the segmentations you give the model. the reason we want to make these larger-block-size models trained on larger datasets available is so that they can be used for transfer learning.
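A sketch of the "start from the defaults and monitor training and validation error" advice above; `model`, `train_ds`, and `val_ds` are assumed to be the transfer-learning model and tf.data pipelines built at the pretrained block size, and the callback settings and filenames are illustrative rather than part of nobrainer.

```python
import tensorflow as tf

callbacks = [
    tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=5,
                                     restore_best_weights=True),
    tf.keras.callbacks.ModelCheckpoint("transfer_best.h5", monitor="val_loss",
                                       save_best_only=True),
    tf.keras.callbacks.TensorBoard(log_dir="logs"),  # compare train vs. val curves
]

# history = model.fit(train_ds, validation_data=val_ds,
#                     epochs=50, callbacks=callbacks)
```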
Would it be possible to perform transfer learning and retrain the model with new parcellations?
If so, it would be great if you could provide a minimal example.
I would like to use Neuromorphometrics Atlases and believe retraining the model will perform better than training a new one from scratch.
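One preprocessing detail worth noting for a new parcellation: atlas label values are usually non-contiguous, while a network's output classes are indexed 0..n-1, so segmentation volumes typically need remapping before training. A minimal numpy sketch, with made-up label IDs:

```python
import numpy as np

# Hypothetical, non-contiguous atlas label IDs (for illustration only).
atlas_labels = np.array([0, 4, 11, 23, 30, 31, 32, 35])
label_to_class = {int(lab): i for i, lab in enumerate(np.unique(atlas_labels))}

def remap(volume):
    """Map raw atlas label values in a segmentation volume to 0..n-1."""
    lookup = np.zeros(int(volume.max()) + 1, dtype=np.int32)
    for lab, cls in label_to_class.items():
        if lab <= volume.max():
            lookup[lab] = cls
    return lookup[volume.astype(np.int64)]

seg = np.random.choice(atlas_labels, size=(32, 32, 32))
print(np.unique(remap(seg)))  # -> [0 1 2 3 4 5 6 7]
```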