[scripts] implement max-change within customized SGD optimizer #4032

Open
wants to merge 1 commit into base: pybind11

Conversation

@aadps (Contributor) commented Apr 8, 2020

Needs further tests and reviews. The total change per minibatch is logged; it should be very easy to add to a TensorBoard plot at a later point.
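
As a hedged illustration of the TensorBoard point (not part of this PR): if the optimizer already computes the total change for the current minibatch, logging it could look roughly like the sketch below. The writer, the tag name 'train/total_change', and the total_change/step arguments are all assumptions.

    from torch.utils.tensorboard import SummaryWriter

    writer = SummaryWriter()  # default log dir (./runs); the path is an assumption

    def log_change(total_change, step):
        # One scalar per minibatch; shows up as a curve in TensorBoard.
        writer.add_scalar('train/total_change', total_change, step)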

from torch.optim.optimizer import Optimizer, required


class SGD_MC(Optimizer):
Inline review comment (Contributor):

I think SgdMaxChange might be clearer, and the file could be called sgd_max_change.py? That is probably closer to the Google style guide.
Please make sure the added parameters are documented.
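
For illustration only, a renamed skeleton along the lines suggested above might document the added parameters like this; the parameter names, defaults, and docstring wording are assumptions, not the PR's code.

    from torch.optim.optimizer import Optimizer, required


    class SgdMaxChange(Optimizer):
        """SGD variant that limits the size of each parameter update.

        Added parameters (illustrative):
            max_change_per_layer (float): cap on the 2-norm of the change
                applied to any single parameter tensor in one step.
            max_change (float): cap on the 2-norm of the change applied to
                the whole model in one step.
        """

        def __init__(self, params, lr=required, momentum=0, weight_decay=0,
                     max_change_per_layer=0.75, max_change=1.5):
            defaults = dict(lr=lr, momentum=momentum, weight_decay=weight_decay,
                            max_change_per_layer=max_change_per_layer,
                            max_change=max_change)
            super(SgdMaxChange, self).__init__(params, defaults)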

@danpovey (Contributor) commented Apr 8, 2020

... and how about the results? Does it actually perform better than Adam?

@megazone87 megazone87 self-requested a review April 10, 2020 02:18
@aadps (Contributor, Author) commented Apr 10, 2020

Want to double-check several things. The max_change and max_change_per_layer we are trying to implement are norms of the proposed tensor delta. But in the case of SGD, we first get the norm of the gradient, and the gradient and the tensor delta differ by a factor of the learning rate?

So for individual layers, it should be something like:

    if norm * group['lr'] > max_change_per_layer:
        d_p.mul_(max_change_per_layer / norm / group['lr'])

?

Then, when computing the norm for the entire model, should we use the norms of the individual layers before or after the adjustment by max_change_per_layer?

Lastly, if the max_change constraint works as intended, do we still need to apply PyTorch gradient clipping?
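
As a point of reference for the three questions above, here is a minimal sketch of one possible ordering: scale each layer's update first, then compute the whole-model norm from the already-adjusted updates. The function name and the deltas list are assumptions, not the PR's code or the maintainers' answer.

    import torch

    def apply_max_change(deltas, lr, max_change_per_layer, max_change):
        # `deltas` holds the d_p tensor for each parameter, so the actual
        # change applied to a parameter is lr * d_p.

        # Per-layer constraint on the norm of lr * d_p.
        for d_p in deltas:
            change = d_p.norm(2) * lr
            if change > max_change_per_layer:
                d_p.mul_(max_change_per_layer / change)

        # Whole-model constraint, computed from the adjusted per-layer updates.
        total = torch.sqrt(sum((d_p.norm(2) * lr) ** 2 for d_p in deltas))
        if total > max_change:
            for d_p in deltas:
                d_p.mul_(max_change / total)

        return total  # whole-model change before the global rescale, for logging

If the constraint is enforced this way, it already bounds the applied update, which is why one might expect a separate torch.nn.utils.clip_grad_norm_ call to become redundant; whether to drop it is a design choice, not something this sketch settles.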

@danpovey (Contributor) commented Apr 10, 2020 via email

@aadps (Contributor, Author) commented Apr 11, 2020

Some initial results (I am still working on the SgdMaxChange implementation):

Adam global average objf: [plot]

SgdMaxChange global average objf: [plot]

SgdMaxChange change for the whole model: [plot]

What other quantities would you like to see and compare?

@aadps (Contributor, Author) commented Apr 11, 2020

> although you mean a / (b / c), not a / b / c.

For this one I wasn't sure. norm is the norm of d_p (the gradient adjusted by weight_decay, momentum, etc.), so norm * group['lr'] should be the proposed change to the matrix?

If it is greater than max_change, we should limit it by multiplying by max_change / (norm * group['lr']), or equivalently max_change / norm / group['lr'], which would be a factor less than 1?
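
A quick numeric check with made-up values, just to make the factor concrete:

    # Illustrative numbers only: norm of d_p, learning rate, and max_change.
    norm, lr, max_change = 4.0, 0.5, 1.0
    proposed = norm * lr                    # 2.0: proposed change to the matrix
    scale = max_change / (norm * lr)        # 0.5; same value as max_change / norm / lr
    assert scale < 1.0
    assert proposed * scale == max_change   # the applied change is clamped to max_change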

@danpovey (Contributor) commented Apr 11, 2020 via email

@aadps (Contributor, Author) commented Apr 11, 2020

My bad, I just went for a quick fix, but this is indeed poor coding style.

@danpovey (Contributor) commented Apr 11, 2020 via email

@aadps (Contributor, Author) commented Apr 11, 2020

Adam:
==> exp/chain_pybind/tdnn_sp/train/decode_res/test/scoring_kaldi/best_cer <==
%WER 7.15 [ 7491 / 104765, 178 ins, 465 del, 6848 sub ] exp/chain_pybind/tdnn_sp/train/decode_res/test/cer_10_0.5

==> exp/chain_pybind/tdnn_sp/train/decode_res/test/scoring_kaldi/best_wer <==
%WER 15.47 [ 9968 / 64428, 918 ins, 1511 del, 7539 sub ] exp/chain_pybind/tdnn_sp/train/decode_res/test/wer_12_0.0

==> exp/chain_pybind/tdnn_sp/train/decode_res/dev/scoring_kaldi/best_cer <==
%WER 6.06 [ 12439 / 205341, 321 ins, 591 del, 11527 sub ] exp/chain_pybind/tdnn_sp/train/decode_res/dev/cer_10_0.0

==> exp/chain_pybind/tdnn_sp/train/decode_res/dev/scoring_kaldi/best_wer <==
%WER 13.79 [ 17608 / 127698, 1454 ins, 2772 del, 13382 sub ] exp/chain_pybind/tdnn_sp/train/decode_res/dev/wer_11_0.0

SgdMaxChange:
==> exp/chain_pybind/tdnn_sp/train/decode_res/test/scoring_kaldi/best_cer <==
%WER 7.36 [ 7715 / 104765, 187 ins, 474 del, 7054 sub ] exp/chain_pybind/tdnn_sp/train/decode_res/test/cer_10_0.5

==> exp/chain_pybind/tdnn_sp/train/decode_res/test/scoring_kaldi/best_wer <==
%WER 15.83 [ 10202 / 64428, 804 ins, 1685 del, 7713 sub ] exp/chain_pybind/tdnn_sp/train/decode_res/test/wer_11_0.5

==> exp/chain_pybind/tdnn_sp/train/decode_res/dev/scoring_kaldi/best_cer <==
%WER 6.29 [ 12908 / 205341, 296 ins, 555 del, 12057 sub ] exp/chain_pybind/tdnn_sp/train/decode_res/dev/cer_9_0.5

==> exp/chain_pybind/tdnn_sp/train/decode_res/dev/scoring_kaldi/best_wer <==
%WER 14.13 [ 18048 / 127698, 1583 ins, 2644 del, 13821 sub ] exp/chain_pybind/tdnn_sp/train/decode_res/dev/wer_10_0.0

@danpovey (Contributor) commented Apr 11, 2020 via email

@aadps (Contributor, Author) commented Apr 11, 2020

Learning rate schedule is 1e-3 * pow(0.4, epoch).
Btw, I have updated my commit.
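
For reference, that schedule can be expressed with a standard PyTorch scheduler. This is only an illustrative sketch; the stand-in model and the bare loop are assumptions, not the recipe's code.

    import torch

    # lr = 1e-3 * 0.4 ** epoch, stepped once per epoch.
    model = torch.nn.Linear(10, 10)                    # stand-in model
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
    scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.4)

    for epoch in range(3):
        # ... train this epoch's minibatches here, calling optimizer.step() ...
        scheduler.step()
        print(epoch, optimizer.param_groups[0]['lr'])  # 4e-4, 1.6e-4, 6.4e-5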

@danpovey (Contributor) commented:

Try with double the learning rate.

@aadps (Contributor, Author) commented Apr 12, 2020

Double the learning rate:
[plot: lr]

[plot: change]

==> exp/chain_pybind/tdnn_sp/train/decode_res/test/scoring_kaldi/best_cer <==
%WER 7.33 [ 7676 / 104765, 189 ins, 447 del, 7040 sub ] exp/chain_pybind/tdnn_sp/train/decode_res/test/cer_10_0.5

==> exp/chain_pybind/tdnn_sp/train/decode_res/test/scoring_kaldi/best_wer <==
%WER 15.79 [ 10172 / 64428, 947 ins, 1492 del, 7733 sub ] exp/chain_pybind/tdnn_sp/train/decode_res/test/wer_12_0.0

==> exp/chain_pybind/tdnn_sp/train/decode_res/dev/scoring_kaldi/best_cer <==
%WER 6.18 [ 12700 / 205341, 285 ins, 519 del, 11896 sub ] exp/chain_pybind/tdnn_sp/train/decode_res/dev/cer_9_0.5

==> exp/chain_pybind/tdnn_sp/train/decode_res/dev/scoring_kaldi/best_wer <==
%WER 14.03 [ 17917 / 127698, 1600 ins, 2626 del, 13691 sub ] exp/chain_pybind/tdnn_sp/train/decode_res/dev/wer_10_0.0

@danpovey (Contributor) commented:

OK. It looks like right now this isn't giving us an improvement over Adam: let's merge the code, but please change the top-level script so it still uses Adam, as I don't want to regress the results.
At some point we need to come up with a mechanism to run different-versioned experiments; but for now the way it is is OK, I think.

@aadps (Contributor, Author) commented Apr 13, 2020

Top-level script reverted to Adam.

@danpovey (Contributor) commented:

Thanks!! @songmeixu do you want to go through this? Or should I just merge?

@megazone87 (Contributor) commented:

> Thanks!! @songmeixu do you want to go through this? Or should I just merge?

Please give me two days to go through this. I am doing it now. Thanks @aadps for waiting!

stale bot commented Jun 19, 2020

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

@stale stale bot added the stale Stale bot on the loose label Jun 19, 2020
stale bot commented Jul 19, 2020

This issue has been automatically closed by a bot strictly because of inactivity. This does not mean that we think that this issue is not important! If you believe it has been closed hastily, add a comment to the issue and mention @kkm000, and I'll gladly reopen it.

@stale stale bot closed this Jul 19, 2020
@kkm000 kkm000 reopened this Jul 19, 2020
@stale stale bot removed the stale Stale bot on the loose label Jul 19, 2020
stale bot commented Sep 17, 2020

This issue has been automatically marked as stale by a bot solely because it has not had recent activity. Please add any comment (simply 'ping' is enough) to prevent the issue from being closed for 60 more days if you believe it should be kept open.

@stale stale bot added the stale Stale bot on the loose label Sep 17, 2020
@jtrmal (Contributor) commented Aug 16, 2022

@songmeixu ?

@stale stale bot removed the stale Stale bot on the loose label Aug 16, 2022
stale bot commented Oct 15, 2022

This issue has been automatically marked as stale by a bot solely because it has not had recent activity. Please add any comment (simply 'ping' is enough) to prevent the issue from being closed for 60 more days if you believe it should be kept open.

@stale stale bot added the stale Stale bot on the loose label Oct 15, 2022