
Hi ! Any kinds of pictures can be fit for your net? #38

Open
wqz960 opened this issue May 31, 2019 · 12 comments

Comments

@wqz960

wqz960 commented May 31, 2019

Thank you for your wonderful code! I want to test it on another dataset of about 3,000+ pictures. Do I need to do any data augmentation? I also resized all the pictures to 450×450 pixels. Is that suitable?
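If I read the training script correctly, it already samples random patches of `--image_size` from each training image, which is itself a cheap form of augmentation for a few thousand pictures. A minimal sketch of such random cropping (the helper name is mine, not from the repo):

```python
import numpy as np

def random_crop(image, patch_size=128):
    """Randomly crop a square patch; assumes an HxWxC array at least patch_size on each side."""
    h, w = image.shape[:2]
    top = np.random.randint(0, h - patch_size + 1)
    left = np.random.randint(0, w - patch_size + 1)
    return image[top:top + patch_size, left:left + patch_size]

# A 450x450 image yields many distinct 128x128 training patches.
img = np.zeros((450, 450, 3), dtype=np.uint8)
patch = random_crop(img)
```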

@yu4u
Owner

yu4u commented May 31, 2019

Try it!
The models in this repo were basically trained on images with "ideal" (synthetic) noise.
Therefore, they might not work well on real noise.
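For context, "ideal" here means synthetically generated noise such as additive white Gaussian noise. A sketch of how such noise is typically synthesized (this helper is illustrative, not the repo's actual noise-model code):

```python
import numpy as np

def add_gaussian_noise(image, sigma=25):
    """Additive white Gaussian noise: the kind of 'ideal' synthetic noise used in training."""
    noise = np.random.randn(*image.shape) * sigma
    noisy = image.astype(np.float64) + noise
    # Clip back to the valid pixel range before converting to uint8.
    return np.clip(noisy, 0, 255).astype(np.uint8)

clean = np.full((32, 32, 3), 128, dtype=np.uint8)
noisy = add_gaussian_noise(clean)
```

Real camera noise is signal-dependent and spatially correlated, which is why a model trained only on this kind of synthetic noise may transfer poorly.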

@wqz960
Author

wqz960 commented Jun 2, 2019

@yu4u I used your script commands:
python3 train.py --image_dir dataset/291 --test_dir dataset/Set14 --image_size 128 --batch_size 8 --lr 0.001 --source_noise_model text,0,50 --target_noise_model text,0,50 --val_noise_model text,25,25 --loss mae --output_path text_noise
python3 train.py --image_dir dataset/291 --test_dir dataset/Set14 --image_size 128 --batch_size 8 --lr 0.001 --source_noise_model impulse,0,95 --target_noise_model impulse,0,95 --val_noise_model impulse,70,70 --loss l0 --output_path impulse_noise
and used --model unet for training, but when I test the results, why are they all corrupted by white Gaussian noise? Looking at the result image, the middle image is noised with white Gaussian noise. Why?

@yu4u
Owner

yu4u commented Jun 2, 2019

Please refer to the README.

python3 test_model.py --weight_file [trained_model_path] --image_dir dataset/Set14

optional arguments:
  -h, --help            show this help message and exit
  --image_dir IMAGE_DIR
                        test image dir (default: None)
  --model MODEL         model architecture ('srresnet' or 'unet') (default:
                        srresnet)
  --weight_file WEIGHT_FILE
                        trained weight file (default: None)
  --test_noise_model TEST_NOISE_MODEL
                        noise model for test images (default: gaussian,25,25)
  --output_dir OUTPUT_DIR
                        if set, save resulting images otherwise show result
                        using imshow (default: None)
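Note from the help text above that --test_noise_model defaults to gaussian,25,25, which would explain the Gaussian-looking middle (noisy) image: to evaluate the text or impulse models you need to pass the matching noise model explicitly. A sketch (the weight-file path is a placeholder, not a real file):

```shell
# Evaluate the text-noise model with text noise instead of the default Gaussian.
python3 test_model.py --weight_file text_noise/weights.hdf5 --model unet \
  --test_noise_model text,25,25 --image_dir dataset/Set14
```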

@wqz960
Author

wqz960 commented Jun 3, 2019

@yu4u I am new to denoising and image reconstruction. Is this function for calculating PSNR correct? def cal_PSNR(pred_img, gt_img): return 10.0 * np.log10((255.0 ** 2) / np.mean(np.square(pred_img - gt_img)))
Can you help me check it? I tested the model with random-impulse noise, and I don't think the denoised images are good, but the PSNR calculated by this function is almost 25. Is something wrong? Thank you!

@yu4u
Owner

yu4u commented Jun 3, 2019

PSNR = 25 indicates poor image quality, as you observed.
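One subtle bug worth ruling out in a PSNR function like the one above: if pred_img and gt_img are uint8 arrays, the subtraction wraps around modulo 256 before squaring, silently corrupting the MSE. A sketch that casts to float first (assumed fix, not the repo's implementation):

```python
import numpy as np

def cal_psnr(pred_img, gt_img):
    # Cast to float64 first: uint8 subtraction wraps modulo 256,
    # which would distort the squared-error term.
    diff = pred_img.astype(np.float64) - gt_img.astype(np.float64)
    mse = np.mean(np.square(diff))
    return 10.0 * np.log10((255.0 ** 2) / mse)

a = np.full((4, 4), 10, dtype=np.uint8)
b = np.full((4, 4), 20, dtype=np.uint8)
# Constant difference of 10 -> MSE = 100 -> 10*log10(65025/100) ≈ 28.13 dB
print(round(cal_psnr(a, b), 2))
```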

@wqz960
Author

wqz960 commented Jun 5, 2019

Hi! @yu4u I am sorry for interrupting you again, but there is a problem with evaluation. The validation PSNR can be read from the hdf5 filename. I used a PSNR formula from the Internet to calculate it again, but the two values are not the same. Why?
The PSNR formula from the Internet:
def cal_PSNR(pre, gt): return 10.0 * np.log10((255.0 ** 2) / np.mean(np.square(pre - gt)))

@yu4u
Owner

yu4u commented Jun 6, 2019

The two implementations are the same.
... Is this not what you meant?

    import numpy as np
    import tensorflow as tf  # TF 1.x API; PSNR is the Keras metric defined in this repo

    a = np.random.uniform(0, 255, (100, 100, 3))
    b = np.random.uniform(0, 255, (100, 100, 3))
    print(cal_PSNR(a, b))
    sess = tf.Session()
    r = PSNR(tf.constant(a), tf.constant(b))
    with sess.as_default():
        print(r.eval())

@wqz960
Author

wqz960 commented Jun 7, 2019

@yu4u After training, I can see the value in the filename, e.g. val_PSNR is 25, but when I test the validation images again, the PSNR calculated by the formula above is not the same as yours. I do not know why; that is my question. Hoping for your kind response.

@yu4u
Owner

yu4u commented Jun 10, 2019

How different? If the difference is not so large, it could be caused by the randomness of the noisy images. The noisy images are created online without fixing the random seed (yes, this should be fixed for reproducibility).
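Fixing the seed is straightforward; a sketch using a seeded NumPy generator (illustrative, not the repo's generator code):

```python
import numpy as np

rng = np.random.RandomState(42)   # fixed seed
noise1 = rng.normal(0, 25, size=(8, 8))

rng = np.random.RandomState(42)   # re-seeding reproduces the identical noise
noise2 = rng.normal(0, 25, size=(8, 8))
print(np.array_equal(noise1, noise2))  # True
```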

@wqz960
Author

wqz960 commented Jun 11, 2019

The difference is big. I used your code on my own dataset and the performance is good! The only thing that confuses me is that the val_PSNR and the PSNR I tested later are not the same: the val_PSNR is almost 27, but the test PSNR using the formula above is almost 32. I am sure the val folder and the test folder are the same... and for comparison, I cropped all my images to 128×128×3. The random seed? The val generator iterates in order; is that related to the randomness? @yu4u

@yu4u
Owner

yu4u commented Jun 11, 2019

Hmm, something is wrong.
As you indicated, the PSNR calculated in test_model.py is higher than the one in the training log...
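One plausible contributor to such a gap (an assumption on my part, not confirmed from the code): if test_model.py clips predictions to [0, 255] and rounds to uint8 before computing PSNR, while the training-time metric sees the network's raw float outputs, then clipping alone raises PSNR on any pixel that overshoots the valid range. A small demonstration:

```python
import numpy as np

def psnr(pred, gt):
    mse = np.mean((pred.astype(np.float64) - gt.astype(np.float64)) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)

gt = np.full((8, 8), 250.0)
raw = np.full((8, 8), 270.0)       # network overshoots past the valid range
clipped = np.clip(raw, 0, 255)     # what an image writer would actually store

print(psnr(raw, gt) < psnr(clipped, gt))  # True: clipping raises PSNR
```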

@wqz960
Author

wqz960 commented Jun 11, 2019

Yes! But I have debugged the validation input images and the input seems correct. As for the Keras API, I don't know how to fix it. Can you help me with this? I need the validation log to show something. Thank you!
