Hi,
I realize this repo is no longer actively maintained, but I would like to understand your reasoning behind these variables in the RPN loss calculation.
bbox_inside_weights
Purpose:
Used as a mask to select which of the generated anchors contribute to the regression loss.
Issue/Question:
http://www.telesens.co/2018/03/11/object-detection-and-classification-using-r-cnns/
states that negative examples (background) should not be used in the box-regression loss calculation, but you do use them. Is this a mistake, or did you find that this yielded better results?
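For reference, this is roughly how I read the masking, as a minimal sketch (the variable names and the smooth-L1 form are my assumptions, not your exact code):

```python
import torch

def rpn_smooth_l1(bbox_pred, bbox_targets, bbox_inside_weights, sigma=3.0):
    # Sketch of the usual RPN box term: bbox_inside_weights acts as a 0/1 mask,
    # so anchors it zeroes out contribute nothing to the regression loss.
    sigma2 = sigma ** 2
    diff = bbox_inside_weights * (bbox_pred - bbox_targets)
    abs_diff = diff.abs()
    in_quadratic = (abs_diff < 1.0 / sigma2).float()
    loss = in_quadratic * 0.5 * sigma2 * diff ** 2 \
         + (1.0 - in_quadratic) * (abs_diff - 0.5 / sigma2)
    return loss  # per-element values; weighting/normalization still to be applied
```

With a strict 0/1 mask, only the positive anchors would contribute, which is why including negatives in the loss surprised me.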
bbox_outside_weights
Purpose:
A weighting mechanism added to bias the loss towards positive or negative examples.
Issue/Question:
This weighting also divides the regression loss values by the number of samples, but you then divide a second time with:
loss_box = loss_box.mean()
So the loss ends up divided by the total number of samples twice. Is this a mistake, or is it intended as a way to stabilize the loss? You don't appear to do this with the second-stage loss (there, bbox_outside_weights is a duplicate of the bbox_inside_weights tensor).
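To make the double division concrete, here is a toy example (made-up sizes, not your code) showing that once 1/N_samples is baked into bbox_outside_weights, calling .mean() divides by the element count again:

```python
import torch

num_anchors, num_samples = 6, 4                              # hypothetical counts
per_elem = torch.ones(num_anchors, 4)                        # pretend smooth-L1 values
outside_w = torch.full((num_anchors, 4), 1.0 / num_samples)  # 1/N baked into the weights

weighted = outside_w * per_elem      # first division, by num_samples
loss_sum = weighted.sum()            # sum after 1/N weighting -> 6.0
loss_mean = weighted.mean()          # divides again by numel() -> 0.25

print(loss_sum.item(), loss_mean.item())
```

In other words, summing after the 1/N weighting already gives one normalization; .mean() adds a second one on top.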
Thanks!