bbox loss increases when using compute_ciou #7
Please give a fuller description of your problem, along with your terminal output.
@Zzh-tju Thanks. More description: here is an example of the output; the bbox loss keeps growing even after more than 1500 iterations.
2020-05-16 13:58:04,031 | callback.py | line 40 : Batch [1120] Speed: 2.09 samples/sec Train-rpn_cls_loss=0.113447, rpn_bbox_loss=0.103411, rcnn_accuracy=0.965366, cls_loss=0.145091, bbox_loss=0.013169, mask_loss=0.421295
It seems that you are using another detection repository.
Thanks. Yes, I am using a MaskRCNN repository. When I train for more iterations (currently 30k), the loss no longer increases but stays fixed at 0.019; however, it does not decrease either. The loss weight for the regression is set to 1.
In our experiment, the regression loss weight is set to 12 to balance it against the classification loss. Judging from the terminal output above, the classification and regression losses are clearly very imbalanced.
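A minimal sketch of what such a balancing weight means when the losses are summed (this is not code from either repository; the function and variable names are illustrative, and 12 is simply the value quoted above):

def total_loss(cls_loss, ciou_loss, reg_weight=12.0):
    """Combine classification and CIoU regression losses with a balancing weight."""
    return cls_loss + reg_weight * ciou_loss

# With the terminal output above (cls_loss ~0.145, bbox_loss ~0.013), an
# unweighted sum lets the classification term dominate the gradients.
print(total_loss(0.145, 0.013))        # weighted:   ~0.301
print(total_loss(0.145, 0.013, 1.0))   # unweighted: ~0.158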
@Zzh-tju Thanks. I'll try to address the balance issue.
Hi, I have tried different weights for the CIoU loss, but the performance decreased in every case. Do I need to pay attention to some other hyper-parameters? Thanks.
More details would help.
Thanks for your great work.
I have called the compute_ciou function to generate the bbox loss:
self.bbox_loss = compute_ciou
_, bbox_loss = self.bbox_loss(bbox_pred, bbox_target, bbox_inside_weight, bbox_outside_weight, transform_weights=config.network.bbox_reg_weights)
However, I found that the bbox_loss increases during training. I have checked compute_ciou, and I think it returns the loss rather than the CIoU value itself. Can you please provide some comments?
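For reference, here is a minimal standalone sketch of the CIoU loss on a single pair of boxes (this is not the repository's compute_ciou, which as shown above also takes inside/outside weights and transform weights); it illustrates that the returned value is 1 − CIoU, a quantity that should decrease as the predicted box improves:

import math

def ciou_loss(pred, target, eps=1e-7):
    """Return 1 - CIoU for one pair of boxes in (x1, y1, x2, y2) format."""
    px1, py1, px2, py2 = pred
    tx1, ty1, tx2, ty2 = target

    # Plain IoU term.
    iw = max(0.0, min(px2, tx2) - max(px1, tx1))
    ih = max(0.0, min(py2, ty2) - max(py1, ty1))
    inter = iw * ih
    union = (px2 - px1) * (py2 - py1) + (tx2 - tx1) * (ty2 - ty1) - inter + eps
    iou = inter / union

    # Squared centre distance normalised by the diagonal of the smallest
    # enclosing box (the DIoU penalty).
    rho2 = ((px1 + px2 - tx1 - tx2) ** 2 + (py1 + py2 - ty1 - ty2) ** 2) / 4
    cw = max(px2, tx2) - min(px1, tx1)
    ch = max(py2, ty2) - min(py1, ty1)
    c2 = cw ** 2 + ch ** 2 + eps

    # Aspect-ratio consistency term (the extra "C" in CIoU).
    v = (4 / math.pi ** 2) * (math.atan((tx2 - tx1) / (ty2 - ty1 + eps))
                              - math.atan((px2 - px1) / (py2 - py1 + eps))) ** 2
    alpha = v / (1 - iou + v + eps)

    ciou = iou - rho2 / c2 - alpha * v
    return 1 - ciou  # 0 for a perfect match, larger for worse overlap

# A perfectly matching box gives ~0 loss; a shifted box gives a positive loss,
# so during training this value should go down, not up.
print(ciou_loss((0, 0, 10, 10), (0, 0, 10, 10)))  # ~0.0
print(ciou_loss((2, 2, 12, 12), (0, 0, 10, 10)))  # > 0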