
The results in Table 2 are strange #22

Open
csjunxu opened this issue May 5, 2019 · 2 comments

csjunxu commented May 5, 2019

In Table 2, the PSNR and SSIM results on the 15 cropped images provided by Nam et al. in CVPR 2016 are not consistent with those reported by Nam et al. (CVPR 2016), MCWNNM, TWSC, or NI (the Neat Image software). How do you compute the PSNR and SSIM for Table 2?
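For reference, here is a minimal sketch of a standard way to compute both metrics on one image pair with scikit-image (the file names below are placeholders, not the actual Nam file names):

```python
from skimage.io import imread
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

# Placeholder names: one of the 15 cropped images and a denoised result.
clean = imread('nam_crop_01_mean.png')        # camera-mean image used as ground truth
denoised = imread('nam_crop_01_denoised.png')

# PSNR over the full RGB image, 8-bit range.
psnr = peak_signal_noise_ratio(clean, denoised, data_range=255)

# SSIM averaged over the color channels (scikit-image >= 0.19).
ssim = structural_similarity(clean, denoised, data_range=255, channel_axis=-1)

print(f'PSNR: {psnr:.2f} dB  SSIM: {ssim:.4f}')
```

Whether the metrics are computed per channel, on the luminance only, or with a different data range can easily shift the reported numbers.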

Here are my PSNR (dB) results on these 15 images:

| Image | NI | CC | MCWNNM | TWSC | DnCNN+ | FFDNet+ | CBDNet |
|---|---|---|---|---|---|---|---|
| 1 | 35.68 | 38.37 | 41.13 | 40.76 | 38.02 | 39.35 | 36.68 |
| 2 | 34.03 | 35.37 | 37.28 | 36.02 | 35.87 | 36.99 | 35.58 |
| 3 | 32.63 | 34.91 | 36.52 | 34.99 | 35.51 | 36.50 | 35.27 |
| 4 | 31.78 | 34.98 | 35.53 | 35.32 | 34.75 | 34.96 | 34.01 |
| 5 | 35.16 | 35.95 | 37.02 | 37.10 | 35.28 | 36.70 | 35.19 |
| 6 | 39.98 | 41.15 | 39.56 | 40.90 | 37.43 | 40.94 | 39.80 |
| 7 | 34.84 | 37.99 | 39.26 | 39.23 | 37.63 | 38.62 | 38.03 |
| 8 | 38.42 | 40.36 | 41.43 | 41.90 | 38.79 | 41.45 | 40.40 |
| 9 | 35.79 | 38.30 | 39.55 | 39.06 | 37.07 | 38.76 | 36.86 |
| 10 | 38.36 | 39.01 | 38.91 | 40.03 | 35.45 | 40.09 | 38.75 |
| 11 | 35.53 | 36.75 | 37.41 | 36.89 | 35.43 | 37.57 | 36.52 |
| 12 | 40.05 | 39.06 | 39.39 | 41.49 | 34.98 | 41.10 | 38.42 |
| 13 | 34.08 | 34.61 | 34.80 | 35.47 | 31.12 | 34.11 | 34.13 |
| 14 | 32.13 | 33.21 | 33.95 | 34.05 | 31.93 | 33.64 | 33.45 |
| 15 | 31.52 | 33.22 | 33.94 | 33.88 | 31.79 | 33.68 | 33.45 |
| Average | 35.33 | 36.88 | 37.71 | 37.81 | 35.40 | 37.63 | 36.44 |

csjunxu changed the title from "The results in Table 2 are weird, if not wrong~" to "The results in Table 2 are weird, if I am not wrong~" on May 5, 2019
csjunxu changed the title from "The results in Table 2 are weird, if I am not wrong~" to "The results in Table 2 are strange" on May 5, 2019
GuoShi28 (Owner) commented May 6, 2019

Hi,
(1) I test on 25 randomly chosen patches from Nam. These patches are provided in the testset folder (see the sketch after this list).
(2) The MCWNNM and TWSC code was downloaded directly from the authors' GitHub pages, and I ran it on these patches without changing any parameters.
(3) The CBDNet(JPEG) model is the one recommended for testing on JPEG images. The original CBDNet model may not perform at the state of the art here, since the noisy images in Nam are stored after JPEG compression, which the original model does not take into account.
(4) I am also surprised that WNNM and BM3D perform really well on Nam. I guess this is because the noise level in Nam is quite low, so even though they do not model the real-world noise distribution, they still obtain decent PSNR values.
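For concreteness, a minimal sketch of this kind of random-patch protocol (the 512×512 patch size and the file names are assumptions for illustration, not the exact settings used for the testset folder):

```python
import numpy as np
from skimage.io import imread, imsave

rng = np.random.default_rng(2019)

def crop_random_patch(noisy, mean, size=512):
    """Crop the same random window from a noisy image and its mean (ground-truth) image."""
    h, w = noisy.shape[:2]
    y = int(rng.integers(0, h - size + 1))
    x = int(rng.integers(0, w - size + 1))
    return noisy[y:y + size, x:x + size], mean[y:y + size, x:x + size]

# Placeholder names: one of the 11 full-size Nam scenes.
noisy = imread('nam_scene_01_noisy.png')
mean = imread('nam_scene_01_mean.png')

for i in range(3):  # a few patches per scene; 25 in total across all scenes
    noisy_patch, mean_patch = crop_random_patch(noisy, mean)
    imsave(f'scene01_patch{i:02d}_noisy.png', noisy_patch)
    imsave(f'scene01_patch{i:02d}_mean.png', mean_patch)
```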

csjunxu (Author) commented May 6, 2019

Thank you for your detailed answer!

The key difference behind the results is that you did not test on the default 15 cropped images provided by Nam et al. in CVPR 2016. Instead, you cropped 25 patches from their 11 large images and tested on those 25 patches. And yes, it is not your job to tune the parameters of the other methods.
