Hi, thanks for releasing this reference implementation for stylegan!
I have a question about EqualLinear. In the style-based-gan-pytorch codebase (https://github.com/rosinality/style-based-gan-pytorch/blob/07fa60be77b093dd13a46597138df409ffc3b9bc/model.py#L203) there is an explicit `equal_lr` operation, but I can't find any similar code in this repository. Does that mean I don't need to adjust the learning rate dynamically during training, and that setting different learning rates for different layers at the very beginning is sufficient? Thanks!
Yeah, but as you can see, the weight actually used in the convolution is `self.weight * self.scale`, so the gradient used to update `self.weight` flows through that multiplication.
As you already know, the EqualXX layers exist because weight magnitudes differ from layer to layer, and the purpose of using `self.weight * self.scale` is precisely to scale the gradient. By the chain rule, ∂L/∂w = ∂L/∂(w·scale) · scale, so the update is w' = w − lr · ∂L/∂(w·scale) · scale, and the effective weight w·scale therefore moves with a per-layer learning rate of lr · scale². That's how I understand it, but I'm not sure about its correctness...
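For reference, here is a minimal sketch of how this implicit equalized-learning-rate trick can look in code. The class body and the `lr_mul` parameter are illustrative assumptions, not necessarily identical to this repo's exact implementation:

```python
import math

import torch
from torch import nn
from torch.nn import functional as F


class EqualLinear(nn.Module):
    """Minimal sketch of a linear layer with equalized learning rate.

    Instead of rescaling gradients with an explicit `equal_lr` hook, the
    weight is stored at roughly unit variance and multiplied by a constant
    `scale` in every forward pass. Autograd propagates that same factor
    into the gradient of `self.weight`, so the effective per-layer
    learning rate is scaled automatically.
    """

    def __init__(self, in_dim, out_dim, lr_mul=1.0):
        super().__init__()
        # Store weights drawn from N(0, 1); dividing by lr_mul keeps the
        # runtime weight (weight * scale) at the He-initialized magnitude.
        self.weight = nn.Parameter(torch.randn(out_dim, in_dim).div_(lr_mul))
        self.bias = nn.Parameter(torch.zeros(out_dim))
        # He constant 1 / sqrt(fan_in), folded together with lr_mul.
        self.scale = (1 / math.sqrt(in_dim)) * lr_mul
        self.lr_mul = lr_mul

    def forward(self, input):
        # The multiplication is part of the autograd graph, so
        # d(loss)/d(self.weight) = d(loss)/d(weight * scale) * scale.
        return F.linear(input, self.weight * self.scale, self.bias * self.lr_mul)
```

Because the multiplication by `scale` happens inside the graph, no hook or dynamic learning-rate adjustment is needed during training; the scaling is fixed once at construction time, which matches what you observed.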