In layers.py, there is a normalize function that has a constant of 127.5:
def normalize(layer):
    return layer / 127.5 - 1.
I'm a little confused as to where the 127.5 comes from. It's a very specific question, of course, but I'm interested in extending the regularization loss function with other types of transforms beyond the identity mapping used in the paper. If you have any tips or pointers on modifying that, I'd love to hear them. Great work, and thanks for doing this!
It looks like the purpose of this normalization is to bring the values into the range [-1.0, 1.0]. Since grayscale pixel values lie in [0, 255], dividing by 127.5 maps 255 to 2.0, and subtracting 1 shifts the range to [-1.0, 1.0].
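To make the mapping concrete, here is the function from layers.py along with a denormalize helper; denormalize is not in the repo, just a hypothetical inverse added for illustration:

```python
def normalize(layer):
    # map grayscale pixel values from [0, 255] to [-1.0, 1.0]
    return layer / 127.5 - 1.

def denormalize(layer):
    # hypothetical inverse (not in the repo): map [-1.0, 1.0] back to [0, 255]
    return (layer + 1.) * 127.5

# endpoints and midpoint of the pixel range
print(normalize(0.), normalize(127.5), normalize(255.))  # -1.0 0.0 1.0
```

The same function works elementwise on a NumPy image array, since the arithmetic broadcasts over every pixel.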
Looking over the code, it accepts images with any number of channels (only GPU memory limits you). Set the "input_channel" command-line parameter to 3, and change the refiner's output layer to emit "self.input_channel" channels instead of 1.
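The idea is to parameterize the refiner's final layer by the channel count rather than hard-coding 1. A minimal sketch of that pattern (the class and method names here are assumptions for illustration, not the repo's actual code):

```python
class Refiner:
    """Hypothetical stand-in for the repo's refiner network."""

    def __init__(self, input_channel=1):
        # mirrors the "input_channel" command-line parameter
        self.input_channel = input_channel

    def output_shape(self, height, width):
        # the final layer emits self.input_channel channels
        # instead of a hard-coded 1, so RGB works unchanged
        return (height, width, self.input_channel)

rgb_refiner = Refiner(input_channel=3)
print(rgb_refiner.output_shape(224, 224))  # (224, 224, 3)
```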