uncertainty quantification #10
Comments
Hey @jyang68sh ! Sure, the whole method is designed to produce uncertainty for models' outputs, and to do that better than MC-Dropout (and on par with Ensembles). You can just drop in (or insert) Masksembles layers instead of Dropout and get uncertainties. Please refer to our examples for more information: link. Best,
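For readers who want a concrete picture of that drop-in replacement, here is a minimal PyTorch sketch. It is not taken from the repo's examples: it assumes the pip package `masksembles` exposes `Masksembles2D(channels, n, scale)` as shown in the project README, and the surrounding architecture is purely illustrative.

```python
import torch
import torch.nn as nn
from masksembles.torch import Masksembles2D  # pip install masksembles (assumed layout)

n = 4  # number of masks, i.e. implicit submodels in the ensemble

model = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    # was: nn.Dropout2d(p=0.5) -- swapped for a Masksembles layer
    Masksembles2D(channels=32, n=n, scale=2.0),
    nn.Conv2d(32, 64, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(64, 10),
)

# Assumption: the batch size must be divisible by n, since each chunk of the
# batch is routed through a different binary mask (submodel).
x = torch.randn(8, 3, 32, 32)
logits = model(x)  # shape: [8, 10]
```

Check the repo's examples for the exact constructor arguments and any dtype or batch-size constraints before relying on this.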
@nikitadurasov From the given example, I can see that Masksembles produces submodels which give similar predictions, but how can I get the uncertainty for the model's outputs?
For easy or in-distribution samples, Masksembles indeed should produce similar predictions (because our model is confident about them). E.g. in the notebook above you could run something like the snippet below after the last cell to get the final uncertainty for your image. For more information about how to turn the predictions of several models into an uncertainty estimate, please refer to this paper. Hope that helps!
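The original snippet is not reproduced in this thread; the sketch below shows one common way to aggregate the n submodel predictions into a single number. The names `model`, `image`, and `n`, and the classification setting, are assumptions rather than details from the notebook.

```python
import torch
import torch.nn.functional as F

model.eval()
with torch.no_grad():
    # One copy of the image per mask, so each copy passes through a different submodel.
    batch = image.unsqueeze(0).repeat(n, 1, 1, 1)    # [n, C, H, W]
    probs = F.softmax(model(batch), dim=-1)          # [n, num_classes]

mean_probs = probs.mean(dim=0)

# Predictive entropy of the averaged distribution: a standard scalar
# uncertainty measure (higher = less certain).
predictive_entropy = -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum()

# Disagreement between submodels (e.g. summed per-class variance) is another
# option, as discussed in the referenced ensembles paper.
disagreement = probs.var(dim=0).sum()

print(float(predictive_entropy), float(disagreement))
```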
@nikitadurasov I just have a quick question. Masksembles can work in segmentation methods too, right? But some segmentation methods, e.g. STDC, have a classification head and a detail head. Can Masksembles be used in a non-classification head? Thanks!
Hi, nice paper.
I was wondering, from the methods, can you get a quantified number for the uncertainty of the model output?