We know that the accuracy of Vamb differs quite a lot from run to run. When benchmarking, we run each setting three to five times in order to average out the run variability.
But where does this variability come from? Almost certainly from the randomness in the neural-network training or in the clustering algorithm.
It would be nice to get a breakdown of how much of the variance is explained by these two steps.
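One way to get that breakdown is a crossed-seed experiment: control the training seed and the clustering seed independently, run every combination, and compare the variance across rows (training) with the variance across columns (clustering). The sketch below simulates this with a stand-in `run_vamb` function and made-up noise magnitudes; in a real experiment you would replace it with an actual Vamb run where the two stages are seeded separately and the benchmark score is measured.

```python
import numpy as np

def run_vamb(train_seed: int, cluster_seed: int) -> float:
    """Hypothetical stand-in for one Vamb run with the two stochastic
    stages seeded independently. Returns a simulated benchmark score:
    a fixed baseline plus noise contributed by each stage.
    (The noise magnitudes here are invented for illustration.)"""
    train_noise = np.random.default_rng(train_seed).normal(0.0, 0.02)
    cluster_noise = np.random.default_rng(cluster_seed).normal(0.0, 0.01)
    return 0.80 + train_noise + cluster_noise

# Crossed design: every training seed paired with every clustering seed.
train_seeds = range(10)
cluster_seeds = range(100, 110)
scores = np.array(
    [[run_vamb(t, c) for c in cluster_seeds] for t in train_seeds]
)

# Variance of the row means isolates the training contribution;
# variance of the column means isolates the clustering contribution.
var_train = scores.mean(axis=1).var(ddof=1)
var_cluster = scores.mean(axis=0).var(ddof=1)
print(f"variance from training:   {var_train:.2e}")
print(f"variance from clustering: {var_cluster:.2e}")
```

Even a modest grid (e.g. 5×5 real Vamb runs) would show whether training or clustering dominates the run-to-run spread, and which stage is worth stabilising first.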