
YOLOv2: Weight quantization per-layer vs per-channel #82

Open
kojiwoow opened this issue Apr 29, 2020 · 1 comment

kojiwoow commented Apr 29, 2020

Has anyone tested per-channel quantization on YOLOv2?

The report http://cs231n.stanford.edu/reports/2017/pdfs/808.pdf shows a drop of more than 15% mAP on the YOLOv2 PASCAL VOC 2007 test set when per-layer INT8 quantization is applied to all convolutional layers.

Since the drop was significant, we decided to use per-channel quantization for the weights, as described in https://arxiv.org/pdf/1806.08342.pdf#page=30&zoom=100,0,621

For the experiment, we stored the calculated scale factor of l->input_quant_multipler for each channel, and then de-quantized the output activation values accordingly.
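
Roughly, the per-channel weight quantization we mean is the scheme from the whitepaper above: a symmetric scale per output channel, taken from that channel's absolute maximum. A minimal sketch is below; the function and variable names are only illustrative and are not taken from this repository.

```c
#include <math.h>
#include <stddef.h>
#include <stdint.h>

/* Sketch of symmetric per-channel INT8 weight quantization.
 * Each output channel (filter) gets its own scale, computed from the
 * absolute maximum of that filter's weights, so that [-absmax, +absmax]
 * maps onto [-127, +127]. Names are hypothetical, for illustration only. */
void quantize_weights_per_channel(const float *weights, int8_t *q_weights,
                                  float *channel_scale,
                                  int out_channels, int weights_per_channel)
{
    for (int c = 0; c < out_channels; ++c) {
        const float *w  = weights   + (size_t)c * weights_per_channel;
        int8_t      *qw = q_weights + (size_t)c * weights_per_channel;

        /* Scale factor of weights: absolute maximum of this channel. */
        float absmax = 0.f;
        for (int i = 0; i < weights_per_channel; ++i) {
            float a = fabsf(w[i]);
            if (a > absmax) absmax = a;
        }
        channel_scale[c] = (absmax > 0.f) ? 127.f / absmax : 1.f;

        /* Quantize: q = round(w * scale), clamped to the INT8 range. */
        for (int i = 0; i < weights_per_channel; ++i) {
            float q = roundf(w[i] * channel_scale[c]);
            if (q >  127.f) q =  127.f;
            if (q < -127.f) q = -127.f;
            qw[i] = (int8_t)q;
        }
    }
}
```

At inference time, the INT32 accumulator of channel c is then divided by channel_scale[c] * input_scale to recover the float output activations, which is the de-quantization step mentioned above.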

However, compared to per-layer quantization, the improvement was only 0.1% mAP.

I wonder whether our experiment was flawed, or whether other people have seen similar results, since the second paper I linked shows clearly better results for per-channel than for per-layer quantization on classification networks (ResNet, MobileNet).

The test environment:

  • Symmetric quantization
  • Scale factor of weights: absolute maximum
  • Scale factor of input activation: KL-divergence (rough calibration sketch after this list)
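
For reference, the KL-divergence calibration of the activation scale is usually done along these lines: collect a histogram of absolute activation values on calibration images, then pick the saturation threshold that minimizes the KL divergence between the clipped reference distribution and its 128-level quantized version. The sketch below uses assumed bin counts and names and is not the exact code we ran.

```c
#include <float.h>
#include <math.h>

#define NBINS   2048   /* fine histogram bins for |activation| (assumed) */
#define NLEVELS  128   /* positive INT8 levels */

/* Discrete KL divergence D(P || Q); bins where either side is zero are
 * skipped, which is a simplification of the usual handling. */
static double kl_divergence(const double *p, const double *q, int n)
{
    double d = 0.0;
    for (int i = 0; i < n; ++i)
        if (p[i] > 0.0 && q[i] > 0.0)
            d += p[i] * log(p[i] / q[i]);
    return d;
}

/* Pick the saturation threshold T that minimizes KL(P || Q); the activation
 * scale factor is then 127 / T. */
float calibrate_activation_threshold(const double hist[NBINS], float bin_width)
{
    int best_i = NLEVELS;
    double best_kl = DBL_MAX;

    for (int i = NLEVELS; i <= NBINS; ++i) {
        double p[NBINS] = {0}, q[NBINS] = {0};

        /* Reference P: first i bins, clipped tail folded into the last bin. */
        for (int j = 0; j < i; ++j) p[j] = hist[j];
        for (int j = i; j < NBINS; ++j) p[i - 1] += hist[j];

        /* Candidate Q: the same i bins collapsed into NLEVELS coarse bins,
         * then expanded back by spreading each coarse bin's mass uniformly
         * over its non-empty fine bins. */
        for (int k = 0; k < NLEVELS; ++k) {
            int start = k * i / NLEVELS, stop = (k + 1) * i / NLEVELS;
            double mass = 0.0;
            int nonzero = 0;
            for (int j = start; j < stop; ++j) {
                mass += hist[j];
                if (hist[j] > 0.0) ++nonzero;
            }
            if (nonzero > 0)
                for (int j = start; j < stop; ++j)
                    if (hist[j] > 0.0) q[j] = mass / nonzero;
        }

        /* Normalize both distributions and score the candidate. */
        double sp = 0.0, sq = 0.0;
        for (int j = 0; j < i; ++j) { sp += p[j]; sq += q[j]; }
        if (sp <= 0.0 || sq <= 0.0) continue;
        for (int j = 0; j < i; ++j) { p[j] /= sp; q[j] /= sq; }

        double kl = kl_divergence(p, q, i);
        if (kl < best_kl) { best_kl = kl; best_i = i; }
    }

    return (best_i + 0.5f) * bin_width;   /* threshold T in activation units */
}
```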

Thank you.

@gr-rahimi

Hi @kojiwoow,

Were you able to find the reason for the small improvement with per-channel quantization?
Did you find any other quantization technique that helps improve the accuracy?

I would appreciate it if you could share some of your experience.

@AlexeyAB, do you have any suggestions we could try to improve the accuracy? I see that in YOLOv3 you skip quantization for some of the layers (such as the first layer). When we quantized them, we saw a huge accuracy drop. Do you have any suggestions for compensating the accuracy loss?

Thanks.
