In the notebook, I noticed that the accuracy is calculated as follows:
def evaluate_accuracy(data_iterator, net):
    acc = mx.metric.Accuracy()
    for i, (data, label) in enumerate(data_iterator):
        data = data.as_in_context(ctx).reshape((-1, 784))
        label = label.as_in_context(ctx)
        output = net(data)
        predictions = nd.argmax(output, axis=1)
        acc.update(preds=predictions, labels=label)
    return acc.get()[1]
I am a little confused: it looks as if, during training, the test (or validation) accuracy is evaluated with dropout still set to 0.5.
I can understand the simplification for training purposes, but shouldn't the training accuracy be evaluated with dropout enabled, while the validation and test accuracy are computed with a dropout probability of 0?
Would a solution be an additional parameter include_dropout, as below:
def evaluate_accuracy(data_iterator, net, include_dropout=True):
    acc = mx.metric.Accuracy()
    with autograd.record(train_mode=include_dropout):
        for i, (data, label) in enumerate(data_iterator):
            data = data.as_in_context(ctx).reshape((-1, 784))
            label = label.as_in_context(ctx)
            output = net(data)
            predictions = nd.argmax(output, axis=1)
            acc.update(preds=predictions, labels=label)
    return acc.get()[1]
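For context, the usual way frameworks reconcile train-time and test-time behavior is "inverted dropout": units are zeroed with probability p during training and the survivors are scaled by 1/(1-p), so at evaluation time the layer can simply act as the identity while expected activations still match. A minimal NumPy sketch of that idea (dropout_layer and the variable names here are illustrative, not the MXNet API):

```python
import numpy as np

def dropout_layer(x, p, train=True):
    """Inverted dropout: during training, zero each unit with
    probability p and scale survivors by 1/(1-p) so the expected
    activation equals the evaluation-mode activation."""
    if not train or p == 0.0:
        return x  # evaluation mode: identity, no rescaling needed
    mask = (np.random.rand(*x.shape) > p) / (1.0 - p)
    return x * mask

x = np.ones((1000, 100))
train_out = dropout_layer(x, p=0.5, train=True)
eval_out = dropout_layer(x, p=0.5, train=False)
# eval_out is exactly x; train_out has mean ≈ 1 despite half the units being zeroed
```

This is why, if the layer is implemented this way, evaluating with dropout disabled is the intended behavior rather than a simplification: the train-time scaling already makes the two modes agree in expectation.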
Regards
Wojciech