Mutual affine network for spatially variant kernel estimation in blind image super-resolution

J Liang, G Sun, K Zhang… - Proceedings of the …, 2021 - openaccess.thecvf.com
Abstract
Existing blind image super-resolution (SR) methods mostly assume that blur kernels are spatially invariant across the whole image. However, this assumption rarely holds for real images, whose blur kernels are usually spatially variant due to factors such as object motion and out-of-focus blur. Hence, existing blind SR methods inevitably perform poorly in real applications. To address this issue, this paper proposes a mutual affine network (MANet) for spatially variant kernel estimation. Specifically, MANet has two distinctive features. First, it has a moderate receptive field, so as to preserve the locality of the degradation. Second, it involves a new mutual affine convolution (MAConv) layer that enhances feature expressiveness without increasing the receptive field, model size, or computational burden. This is made possible by exploiting channel interdependence: each channel split is processed by an affine transformation module whose input is the remaining channel splits. Extensive experiments on synthetic and real images show that the proposed MANet not only performs favorably for both spatially variant and invariant kernel estimation, but also achieves state-of-the-art blind SR performance when combined with non-blind SR methods.
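The MAConv idea described above can be illustrated with a minimal sketch: the feature channels are split into groups, each group is modulated by a per-pixel scale and shift predicted from the *other* group, and each modulated group then passes through its own convolution. The sketch below is an assumption-laden simplification, not the paper's implementation: it uses two splits, pointwise (1x1) convolutions in place of the paper's learned layers, and random weights standing in for trained parameters; the exact module structure is given in the paper.

```python
import numpy as np

def conv1x1(x, w, b):
    """Pointwise (1x1) convolution: x (C_in, H, W) -> (C_out, H, W)."""
    return np.einsum('oc,chw->ohw', w, x) + b[:, None, None]

def maconv(x, params):
    """Simplified mutual affine convolution on a feature map x of shape (C, H, W).

    Each channel split is modulated by a spatially variant affine
    transform (scale, shift) predicted from the other split, then
    passed through its own convolution (1x1 here for simplicity).
    """
    c = x.shape[0] // 2
    x1, x2 = x[:c], x[c:]
    outs = []
    for i, (xi, xother) in enumerate([(x1, x2), (x2, x1)]):
        # affine module: per-pixel scale and shift from the other split
        a = conv1x1(xother, params[f'aff_w{i}'], params[f'aff_b{i}'])
        scale, shift = a[:c], a[c:]
        yi = xi * scale + shift
        outs.append(conv1x1(yi, params[f'conv_w{i}'], params[f'conv_b{i}']))
    return np.concatenate(outs, axis=0)

# Toy example with random stand-in weights (hypothetical, untrained).
rng = np.random.default_rng(0)
C, H, W = 8, 4, 4
c = C // 2
params = {}
for i in range(2):
    params[f'aff_w{i}'] = rng.standard_normal((2 * c, c)) * 0.1
    params[f'aff_b{i}'] = np.zeros(2 * c)
    params[f'conv_w{i}'] = rng.standard_normal((c, c)) * 0.1
    params[f'conv_b{i}'] = np.zeros(c)

x = rng.standard_normal((C, H, W))
y = maconv(x, params)
print(y.shape)  # (8, 4, 4)
```

Because all learned operators here are pointwise and the affine parameters are predicted per pixel, the layer adds cross-channel expressiveness without enlarging the spatial receptive field, which is the property the abstract emphasizes for keeping kernel estimation local.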