Hello, I cannot understand one sentence in your paper: "When the training error keeps unchanged in five sequential epochs, we merge the parameters of each batch normalization into the adjacent convolution filters." I tried to figure it out by reading your code, but I am not familiar with MATLAB. Thanks!
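Not the repository's MATLAB code, but a minimal numpy sketch of the standard "fold batch normalization into the preceding convolution" trick that the sentence describes: at inference, BN is a per-channel affine map, so its scale and shift can be absorbed into the conv weights and bias. A 1x1 convolution is used here purely to keep the demo short; the same per-output-channel rescaling applies to any kernel size.

```python
import numpy as np

rng = np.random.default_rng(0)
C_in, C_out, H, Wd = 3, 4, 5, 5

W = rng.standard_normal((C_out, C_in, 1, 1))  # 1x1 conv weights (demo only)
b = rng.standard_normal(C_out)                # conv bias
gamma = rng.standard_normal(C_out)            # BN scale
beta = rng.standard_normal(C_out)             # BN shift
mean = rng.standard_normal(C_out)             # BN running mean
var = rng.random(C_out) + 0.5                 # BN running variance
eps = 1e-5

x = rng.standard_normal((C_in, H, Wd))

def conv1x1(W, b, x):
    # 1x1 convolution = per-pixel channel-mixing matrix multiply
    return np.einsum('oi,ihw->ohw', W[:, :, 0, 0], x) + b[:, None, None]

# Reference: conv followed by batch normalization (inference mode)
y = conv1x1(W, b, x)
y_bn = gamma[:, None, None] * (y - mean[:, None, None]) \
       / np.sqrt(var + eps)[:, None, None] + beta[:, None, None]

# Fold BN into the conv: rescale weights per output channel, adjust bias
s = gamma / np.sqrt(var + eps)
W_f = W * s[:, None, None, None]
b_f = beta + s * (b - mean)
y_fold = conv1x1(W_f, b_f, x)

assert np.allclose(y_bn, y_fold)  # the folded conv reproduces conv+BN exactly
```

So "merging the parameters" just rewrites conv+BN as a single equivalent convolution; doing it once training has plateaued saves the BN computation at test time without changing the network's output.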
Hi, please help me out: what training dataset is used here, and where is the code for generating the kernels? I would be very thankful.
> Hi, please help me out: what training dataset is used here, and where is the code for generating the kernels? I would be very thankful.
You can find the code relevant to your questions in SRMD/TrainingCodes/generatepatches.m and SRMD/TrainingCodes/Demo_Get_PCA_matrix.m. The training dataset is described in the paper; it is a mixture of several datasets with some preprocessing.
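For readers who cannot run the MATLAB scripts, here is a hedged numpy sketch of what a Demo_Get_PCA_matrix-style step computes: a projection matrix that compresses vectorized blur kernels (15x15 = 225-dim in the paper) down to a small code of t dimensions. The random kernels, the sample count, and the centering choice are illustrative assumptions, not the repository's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
t = 15                                         # reduced dimension, as in SRMD

# Stand-in for a bank of sampled Gaussian blur kernels, vectorized row-wise
kernels = rng.random((10000, 225))
kernels /= kernels.sum(axis=1, keepdims=True)  # each kernel sums to 1

# PCA via SVD of the centered kernel matrix; the top-t right singular
# vectors form the projection matrix
U, S, Vt = np.linalg.svd(kernels - kernels.mean(axis=0), full_matrices=False)
P = Vt[:t]                                     # t x 225 projection matrix

code = P @ kernels[0]                          # t-dim code for one kernel
assert P.shape == (t, 225) and code.shape == (t,)
```

At test time the same fixed `P` projects whatever kernel you assume, so the network always sees a t-dimensional kernel code regardless of kernel size.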
I don't understand the training process. The blur kernel is a Gaussian kernel chosen from a given range (is this what the "non-blind" model means?), and its dimensionality is then stretched so it can be fed to the network together with the LR image. If the LR image is still the result of bicubic interpolation, why does adding the blur kernel give better results?
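To make the "dimensionality stretching" concrete, here is a minimal numpy sketch of how a PCA-reduced kernel code can be tiled over the spatial grid and concatenated with the LR image, in the spirit of the paper's degradation maps. The sizes and the random projection matrix are placeholders, not the repository's values.

```python
import numpy as np

rng = np.random.default_rng(0)
H, Wd, t = 32, 32, 15                       # LR patch size and code length (illustrative)

kernel = rng.random((15, 15))
kernel /= kernel.sum()                      # normalized like a blur kernel

P = rng.standard_normal((t, 15 * 15))       # stand-in for the learned PCA projection
k_code = P @ kernel.ravel()                 # t-dimensional kernel code

# "Dimensionality stretching": replicate the code at every spatial position,
# giving t constant feature maps of the same H x W size as the LR image
deg_map = np.broadcast_to(k_code[:, None, None], (t, H, Wd))

lr_image = rng.random((3, H, Wd))           # RGB LR input
net_input = np.concatenate([lr_image, deg_map], axis=0)
assert net_input.shape == (3 + t, H, Wd)
```

The point of the extra channels is that different blur kernels produce different LR images from the same HR image; conditioning every pixel on the kernel code lets one network invert the specific degradation instead of an averaged one.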