


Feature Denoising Diffusion Model for Blind Image Quality Assessment

Xudong Li1, Jingyuan Zheng2, Runze Hu3, Yan Zhang1,∗, Ke Li4, Yunhang Shen4, Xiawu Zheng1, Yutao Liu5, ShengChuan Zhang1, Pingyang Dai1, Rongrong Ji1

1 Key Laboratory of Multimedia Trusted Perception and Efficient Computing, Ministry of Education of China, Xiamen University
2 School of Medicine, Xiamen University
3 School of Information and Electronics, Beijing Institute of Technology
4 Tencent Youtu Lab
5 School of Computer Science and Technology, Ocean University of China

{lxd761050753, jyzheng0606, bzhy986, hrzlpk2015, shenyunhang01, tristanli.sh}@gmail.com, liuyutao@ouc.edu.cn, {zhengxiawu, zsc 2016, pydai, rrji}@xmu.edu.cn

∗ Corresponding author

Abstract

Blind Image Quality Assessment (BIQA) aims to evaluate image quality in line with human perception, without reference benchmarks. Currently, deep learning BIQA methods typically depend on using features from high-level tasks for transfer learning. However, the inherent differences between BIQA and these high-level tasks inevitably introduce noise into the quality-aware features. In this paper, we take an initial step towards exploring the diffusion model for feature denoising in BIQA, namely Perceptual Feature Diffusion for IQA (PFD-IQA), which aims to remove noise from quality-aware features. Specifically, (i) we propose a Perceptual Prior Discovery and Aggregation module that establishes two auxiliary tasks to discover potential low-level features in images, which are used to aggregate perceptual text conditions for the diffusion model. (ii) We propose a Perceptual Prior-based Feature Refinement strategy, which matches noisy features to predefined denoising trajectories and then performs exact feature denoising based on text conditions. Extensive experiments on eight standard BIQA datasets demonstrate superior performance over state-of-the-art BIQA methods, i.e., achieving PLCC values of 0.935 (↑ 3.0% vs. 0.905 on KADID) and 0.922 (↑ 2.8% vs. 0.894 on LIVEC).

Figure 1: Image on top: the sample image (MOS: 48.69). Images at bottom: before and after diffusion denoising, the feature map is significantly refined, effectively pinpointing areas with visible image quality degradation. The initial semantic focus is on "human", but after denoising, attention notably shifts to the fuzzy region (the orange region with the blurred crowd and arms), resulting in a closer alignment with the actual Mean Opinion Score (MOS).
1 Introduction

Image Quality Assessment (IQA) methods aim to match the human perception of image distortions [Wang et al., 2004]. Reliable IQA models are important for image-driven applications, while also serving as benchmarks for image processing. Objective IQA includes Full-Reference IQA (FR-IQA) [Shi and Lin, 2020], Reduced-Reference IQA (RR-IQA) [Tao et al., 2009], and no-reference or Blind IQA (BIQA) [Zhou et al., 2019]. As reference images are often unavailable, BIQA gains attention for tasks like image restoration [Banham and Katsaggelos, 1997] and super-resolution [Dong et al., 2015] without references.

Data-driven BIQA models based on deep neural networks [Bosse et al., 2017; Wu et al., 2020] have made significant progress. The quality score of distorted images is typically measured using the Mean Opinion Score (MOS), making BIQA a small-sample task. To address this, a promising strategy utilizes a pre-training and fine-tuning paradigm, transferring shared features from the large-scale ImageNet source domain to the IQA target domain to accomplish the IQA task. However, during the pre-training of large-scale classification tasks, synthetic distortions are typically used as a data augmentation method, inevitably reducing the sensitivity of the model to image distortion [Zhang et al., 2023; Hendrycks and Dietterich, 2019]. Consequently, the pre-trained features' insensitivity to distortion degradation can lead the model to concentrate excessively on high-level information during quality assessment, overlooking distortion information critical to quality perception [Zhao et al., 2023; Zhang et al., 2023]. We provide an example to illustrate this problem in Fig. 1. As we observe, the baseline focuses excessively on high-level information (such as semantic information, i.e., "human" in the red box) in the foreground of the distorted image, while neglecting the low-level quality-relevant information (such as the blur and geometric distortion in the yellow box), leading to inaccurate quality predictions.
Therefore, these pre-trained features are not always beneficial, and some may even be considered as noise in the quality-aware features. It is necessary to meticulously filter out the noisy features.

The diffusion model [Ho et al., 2020; Rombach et al., 2022] defines a Markov chain where noise is gradually added to the input samples (forward process) and then used to remove the corresponding noise from the noisy samples (reverse process), showcasing its effective noise removal ability. Inspired by this, a novel BIQA framework based on the diffusion model is first proposed, namely the Perceptual Feature Diffusion model for IQA (PFD-IQA). We formulate the feature noise filtering problem as a progressive feature denoising process, enabling effective enhancement of quality-aware features. However, there are two challenges in directly utilizing diffusion models for denoising in BIQA: (i) traditional diffusion models may offer limited control over quality-aware feature preservation and noise feature elimination, possibly leading to suboptimal denoising; (ii) in BIQA, explicit benchmarks or ground truths are often absent for denoising targets, which makes it challenging to define a clear denoising trajectory for the diffusion model.

To this end, our PFD-IQA consists of two main modules to overcome the above two challenges in the diffusion model for BIQA. To address problem (i), we introduce a Perceptual Prior Discovery and Aggregation module that merges text prompts representing various quality perceptions to guide the diffusion model. Specifically, we initially acquire potential distortion-aware and quality-level priors through auxiliary tasks. These are then combined based on their similarity to text prompts, creating perceptual text prompts, which serve as conditions to guide the model for more accurate feature denoising. We select these priors for text descriptions primarily for two reasons. Firstly, understanding the diversity in distortion enhances prediction accuracy and generalization in IQA [Song et al., 2023; Zhang et al., 2023]. Secondly, quality-level recognition categorizes distorted images into levels (e.g., high and bad) based on human-perceptible semantic features. This natural language-based, range-oriented approach helps minimize errors in absolute scoring across different subjects [Yang et al., 2020].

To address problem (ii), we introduce a novel Perceptual Prior-based Feature Refinement Strategy for BIQA. Initially, we use pre-trained teacher pseudo features to establish a quality-aware denoising process based on text-conditioned DDIM. We then consider student features as noisy versions of teacher pseudo features. Through an adaptive noise alignment mechanism, we adaptively assess the noise level in each student feature and apply corresponding Gaussian noise, aligning these features with the teacher's predefined denoising path. During the reverse denoising, cross-attention with text conditions is conducted to precisely refine the quality-aware features. Notably, our goal is to utilize the powerful denoising modeling capability of the diffusion model for feature refinement rather than learning the distribution of teacher pseudo features. Experiments show that we achieve superior performance with very few sampling steps (e.g., 5 iterations) and a lightweight diffusion model for denoising. We summarize the contributions of this work as follows:

• We make the first attempt to convert the challenges of BIQA to the diffusion problem for feature denoising. We introduce a novel PFD-IQA, which effectively filters out quality-irrelevant information from features.

• We propose a Perceptual Prior Discovery and Aggregation Module to identify perceptual features of distortion types and quality levels. By leveraging the correlation between perceptual priors and text embeddings, we adaptively aggregate perceptual text prompts to guide the diffusion denoising process, which ensures attention to quality-aware features during the denoising process.

• We introduce a novel Perceptual Prior-based Feature Refinement Strategy for BIQA. Particularly, we pre-define denoising trajectories of teacher pseudo-labels. Then, by employing an adaptive noise alignment module, we match the student noise features to predefined denoising trajectories and subsequently perform precise feature denoising based on the given prompt conditions.

2 Related Work

2.1 BIQA with Deep Learning

Early BIQA methods [Liu et al., 2017; Zhou et al., 2019; Li et al., 2009] were based on convolutional neural networks (CNNs) thanks to their powerful feature expression ability. The CNN-based BIQA methods [Zhang et al., 2018; Su et al., 2020] generally treated the IQA task as a downstream task of object recognition, following the standard pipeline of pre-training and fine-tuning. Such a strategy is useful as these pre-trained features share a certain degree of similarity with the quality-aware features of images [Su et al., 2020]. Recently, Vision Transformer (ViT) based methods for BIQA have become popular due to their strong capability in modeling non-local perceptual features in images. There are two main types of architectures used: the hybrid transformer and the pure transformer. Existing ViT-based methods typically rely on the CLS token for assessing image quality. Originally designed for describing image content, like object recognition, the CLS token focuses on higher-level visual abstractions, such as semantics and spatial relationships of objects. Therefore, it is still a challenge to fully adapt these methods from classification tasks to image quality assessment (IQA) tasks, due to the abstract nature of classification features.

2.2 Diffusion Models

Diffusion models [Rombach et al., 2022; Huang et al., 2023], generally comprising a forward process for adding noise and a reverse process for denoising, gained popularity with Ho et al.'s introduction of the denoising diffusion probabilistic model. Building on this, methods like [Rombach et al., 2022] have integrated attention mechanisms into diffusion models, stabilizing them and producing high-quality images.
Figure 2: The overview of PFD-IQA, which consists of a teacher model used for creating pseudo-labels and a student model equipped with the PDA and PDR modules: (a) Perceptual Prior Discovery and Aggregation; (b) Perceptual Prior-based Diffusion Refinement. Specifically, we begin by learning perceptual priors (Sec. 3.2) through a random mask reconstruction process. Subsequently, we use the prior knowledge to aggregate text information as the condition to guide the feature-denoising process of the diffusion model and refine the features (Sec. 3.3).

To further extend diffusion models (DMs) into mainstream computer vision tasks, latent representation learning methods based on DMs have been proposed, including DiffusionDet for object detection [Chen et al., 2023] and SegDiff for segmentation [Amit et al., 2021]. However, diffusion models are seldom used for specific feature denoising. In this study, we treat the feature optimization process in IQA as an inverse denoising approximation and iteratively use diffusion models to enhance representations for accurate quality awareness. To the best of our knowledge, ours is the first work to introduce diffusion models into IQA for feature denoising.

3 Methodology

In the context of BIQA, we introduce common notations. Bold formatting is used to denote vectors (e.g., x, y), matrices (e.g., X, Y), and tensors. The training data consists of D = {x, y_g, y_d, y_q}, where x is the labeled image with ground-truth score y_g, and y_d and y_q represent the distortion type and quality level pseudo-labels associated with the input image, respectively. Additionally, image embeddings are denoted by F and textual embeddings by G. The probability distribution of logits for the network is represented as p.

3.1 Overview

This paper introduces a model called the Perceptual Feature Diffusion model for Image Quality Assessment (PFD-IQA), which progressively refines quality-aware features. As depicted in Fig. 2, PFD-IQA seamlessly integrates two main components: a Perceptual Prior Discovery and Aggregation (PDA) module and a Perceptual Prior-based Diffusion Refinement (PDR) module. Initially, PFD-IQA feeds the given image x into a Vision Transformer (ViT) encoder [Dosovitskiy et al., 2021] to obtain a feature representation F_s. Under the supervision of pseudo-labels for distortion types and quality levels, we use the PDA module to discover potential distortion priors F̂_d and perceptual quality priors F̂_q, which then adaptively aggregate perceptual text embeddings as conditions for the diffusion process (Sec. 3.2). Next, in the PDR module, these prior features are used to modulate F_s for feature enhancement to obtain F̂_h. This is followed by matching it to a predefined noise level F̂_t through an adaptive noise matching module, and finally employing a lightweight feature denoising module to progressively denoise under the guidance of the perceptual text embeddings (Sec. 3.3). After the PDR module, a layer of transformer decoder is used to further interpret the denoised features for predicting the final quality score [Qin et al., 2023]. It is important to emphasize that pseudo-labels are only used for training.

3.2 Perceptual Prior Discovery and Aggregation

Considering the intricate nature of image distortions in the real world, the evaluation of image quality necessitates discriminative representations that can distinguish different types of distortions [Zhang et al., 2022], as well as the degrees of degradation. To achieve this, an auxiliary task involving the classification of distortion types is introduced, which is designed to refine the differentiation among diverse distortion types, thereby providing nuanced information. Additionally, a quality-level classification task is further employed to offer a generalized classification that compensates for the uncertainty and error inherent in predicting absolute image quality scores.

Perceptual Prior Discovery. In this context, two feature reconstructors denoted as R(·) are trained to reconstruct the two prior features mentioned above, respectively. These reconstructors consist of two components: (1) a stochastic channel mask module and (2) a module for feature reconstruction. Specifically, given an image x and its feature F_s generated by the ViT encoder, the first step involves applying a channel-wise random mask M_c to the channel dimension of this feature to obtain F_m:
M_c = \begin{cases} 0, & \text{if } R_c < \beta \\ 1, & \text{otherwise} \end{cases}, \qquad F_m = f_{align}(F_s) \cdot M_c,    (1)

where R_c is a random number in (0, 1) and c indexes the channels of the feature. β is a hyper-parameter that denotes the masked ratio, and f_align is an adaptation layer with a 1×1 convolution. The random mask helps to train a more robust feature reconstructor [Yang et al., 2022]. Subsequently, we utilize the two feature reconstruction modules R(·) to generate prior features. Each R(·) consists of a sequence of operations including a 1×1 convolution W_{l1}, a Batch Normalization (BN) layer, and another 1×1 convolutional layer W_{l2}:

\hat{F}_j = R(F_m) = W_{l2,j} \cdot \big(\mathrm{ReLU}(W_{l1,j}(F_m))\big),    (2)

where j ∈ {d, q}, and F̂_d and F̂_q correspond to the distortion and quality-level classification auxiliary tasks, respectively. These tasks are linked to the original image feature F_s and involve capturing different aspects of information.
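As a concrete illustration, the following PyTorch sketch shows one way the channel-wise mask and the reconstructor of Eqs. (1)-(2) could be implemented; the module and argument names (FeatureReconstructor, mask_ratio) are our own and the exact layer layout is an assumption, not the paper's released code.

```python
# A minimal sketch of the masked feature reconstructor R(.) from Eqs. (1)-(2).
import torch
import torch.nn as nn

class FeatureReconstructor(nn.Module):
    def __init__(self, dim: int, mask_ratio: float = 0.5):
        super().__init__()
        self.mask_ratio = mask_ratio                      # beta in Eq. (1)
        self.f_align = nn.Conv2d(dim, dim, kernel_size=1) # 1x1 adaptation layer f_align
        self.recon = nn.Sequential(                       # W_l1 -> BN -> ReLU -> W_l2, Eq. (2)
            nn.Conv2d(dim, dim, kernel_size=1),
            nn.BatchNorm2d(dim),
            nn.ReLU(inplace=True),
            nn.Conv2d(dim, dim, kernel_size=1),
        )

    def forward(self, f_s: torch.Tensor) -> torch.Tensor:
        # f_s: (B, C, H, W) encoder feature reshaped to a spatial map.
        b, c, _, _ = f_s.shape
        if self.training:
            # Keep a channel only when a uniform draw is at least the mask ratio (Eq. 1).
            keep = (torch.rand(b, c, 1, 1, device=f_s.device) >= self.mask_ratio).float()
        else:
            keep = torch.ones(b, c, 1, 1, device=f_s.device)
        f_m = self.f_align(f_s) * keep
        return self.recon(f_m)                            # prior feature F_hat_d or F_hat_q
```

Two such reconstructors are instantiated, one per auxiliary task, so the distortion and quality priors are learned independently from the shared encoder feature.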
To effectively supervise the auxiliary tasks related to quality-level classification Q and distortion classification D for the discovery of potential prior features, we divide the tasks into five quality levels and eleven types of distortions following a previous study [Zhang et al., 2023]. As illustrated in Fig. 2, let D denote the set of image distortions, i.e., D = {d_1, d_2, ..., d_K}, where d_i is an image distortion type, e.g., "noise". Let Q denote the set of image quality levels, i.e., Q = {q_1, q_2, ..., q_K}, where q_i is a quality level, e.g., "bad", and K is the number of distortions or quality levels we consider. The textual description prompt sets are T_d = {T_d | T_d = "a photo with {d} artifacts.", d ∈ D} and T_q = {T_q | T_q = "a photo with {q} quality.", q ∈ Q}.

Given an image x, we compute the cosine similarity between the image prior embedding F̂_j and each prompt embedding G_j = E(T_j) ∈ R^{K×C} from the text encoder E, resulting in the logits output for the auxiliary tasks, namely p̂_d and p̂_q, where the parameters of the text encoder are frozen:

\hat{p}_j(x) = \mathrm{logit}(j \mid x) = \frac{\hat{F}_j \cdot G_j^{T}}{\lVert \hat{F}_j \rVert_2 \, \lVert G_j \rVert_2}.    (3)

To supervise the feature reconstruction module, we utilize the soft pseudo-labels p_d for distortion and p_q for quality, which are generated by the pre-trained teacher model. This guidance is accomplished by applying the KL divergence as follows:

\mathcal{L}_{KL} = \sum_{j \in \{d,q\}} \mathcal{L}^{j}_{KL}\big(p_j, \hat{p}_j\big) = \sum_{j \in \{d,q\}} p_j(x) \log \frac{p_j(x)}{\hat{p}_j(x)}.    (4)
Perceptual Prompt Aggregation (PPA). Psychological research suggests that humans prefer using natural language for qualitative rather than quantitative evaluations [Hou et al., 2014]. In practice, this means qualitative descriptors like "excellent" or "bad" are often used to assess image quality. Building on this, we develop an approach to automatically aggregate natural language prompts that qualitatively represent image quality perception. Specifically, we compute the logits for the distortion and quality levels; for each prompt, we can further obtain its probability from logit(j_i | x) by:

\hat{p}(j_i \mid x) = \frac{\exp(\mathrm{logit}(j_i \mid x))}{\sum_{k=1}^{K} \exp(\mathrm{logit}(j_k \mid x))},    (5)

where j_i is the i-th element of T_j. Next, we obtain the adaptive perceptual text embedding ê_ada via the following weighted aggregation:

\hat{e}_{ada} = \sum_{i=1}^{K} \hat{p}(j_i \mid x)\, G^{i}_{j}, \qquad j \in \{d, q\}.    (6)

It is worth noting that ê_ada is capable of effectively representing the multi-distortion mixture information in real distorted images as soft label weightings. This approach is more informative compared to the hard-label method, which relies solely on a single image-text pair.
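The aggregation of Eqs. (3), (5) and (6) reduces to a cosine-similarity, softmax and weighted-sum pipeline; the sketch below illustrates it, assuming the K prompt embeddings have been pre-computed once with the frozen text encoder (function and variable names are illustrative only).

```python
# A minimal sketch of Perceptual Prompt Aggregation (Eqs. 3, 5, 6).
import torch
import torch.nn.functional as F

def aggregate_perceptual_prompt(prior_feat: torch.Tensor,
                                prompt_emb: torch.Tensor) -> torch.Tensor:
    """prior_feat: (B, C) pooled prior embedding F_hat_j; prompt_emb: (K, C) text embeddings G_j."""
    # Cosine-similarity logits between the image prior and each text prompt (Eq. 3).
    logits = F.normalize(prior_feat, dim=-1) @ F.normalize(prompt_emb, dim=-1).t()  # (B, K)
    # Softmax over the K prompts gives soft pseudo-probabilities (Eq. 5).
    probs = logits.softmax(dim=-1)
    # Weighted sum of the prompt embeddings yields the adaptive text condition (Eq. 6).
    return probs @ prompt_emb                                                        # (B, C)
```

The distortion-type and quality-level branches are aggregated independently and both resulting embeddings are used to condition the denoising network.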
3.3 Perceptual Prior-based Diffusion Refinement

In this section, we introduce our Perceptual Prior Fusion (PPF) module and Perceptual Prior-based Diffusion Refinement (PDR) module, and discuss how the automatically aggregated perceptual text embeddings are used as conditions to guide feature denoising.

Perceptual Prior Enhancement. Due to the primary emphasis on global semantic features in pre-trained models, there exists a gap in capturing quality-aware information across different granularities. To address this, we propose the integration of perceptual prior information to enhance feature representations. Specifically, we introduce the Perceptual Prior Fusion (PPF) module, which is designed to merge both distortion perception and quality degradation perception into the framework. The proposed PPF module operates sequentially on normalized features, incorporating additional convolutions and SiLU layers [Elfwing et al., 2018] to facilitate the fusion of features across different granularities. In the implementation, we first apply a two-dimensional scaling modulation to the normalized feature norm(F_s): two convolutional transformations modulate the normalized feature with scaling and shifting parameters derived from the additive prior features F̂_dq, resulting in the feature representation F̂_h:

\hat{F}_h = \big(\mathrm{conv}(\hat{F}_{dq}) \times \mathrm{norm}(F_s) + \mathrm{conv}(\hat{F}_{dq})\big) + F_s.    (7)
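A minimal sketch of the scale-and-shift modulation in Eq. (7) is shown below; we operate on token features with linear projections for readability, whereas the paper specifies convolutions and SiLU activations, so the exact layer types here are assumptions.

```python
# A minimal sketch of the Perceptual Prior Fusion (PPF) modulation of Eq. (7).
import torch
import torch.nn as nn

class PerceptualPriorFusion(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.to_scale = nn.Sequential(nn.Linear(dim, dim), nn.SiLU())
        self.to_shift = nn.Sequential(nn.Linear(dim, dim), nn.SiLU())

    def forward(self, f_s: torch.Tensor, f_dq: torch.Tensor) -> torch.Tensor:
        # f_s: (B, N, C) encoder tokens; f_dq: (B, N, C) fused distortion/quality priors.
        scale = self.to_scale(f_dq)
        shift = self.to_shift(f_dq)
        # F_hat_h = (conv(F_dq) * norm(F_s) + conv(F_dq)) + F_s   (Eq. 7)
        return scale * self.norm(f_s) + shift + f_s
```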
Figure 3: The predefined denoising trajectory starts with a teacher pseudo-feature label for forward diffusion. During each reverse denoising phase, image and text information are fused to accurately predict the noise in the features. For student denoising, the noise level matched by the noise alignment mechanism is used as the input for noise prediction.

Predefined Conditional Denoising Trajectories. The proposed PFD-IQA iteratively optimizes the feature F̂_h to attain accurate and quality-aware representations. This process can be conceptualized as an approximation of the inverse feature denoising procedure. However, the features representing the ground truth are often unknown. Therefore, we introduce features F^tea generated by a pre-trained teacher as pseudo-ground truth to pre-construct a denoising trajectory of quality-aware features. As depicted in Fig. 3, for the forward diffusion process, F^tea_t is a linear combination of the initial data F^tea and the noise variable ε_t:

F^{tea}_t = \sqrt{\bar{\alpha}_t}\, F^{tea} + \sqrt{1 - \bar{\alpha}_t}\, \epsilon_t.    (8)

The parameter ᾱ_t is defined as ᾱ_t := \prod_{s=0}^{t} \alpha_s = \prod_{s=0}^{t} (1 - \beta_s), offering a method to directly sample F^tea_t at any time step using a noise variance schedule denoted by β [Ho et al., 2020]. During training, a neural network ε_θ(F^tea_t, ê_ada, t) conditioned on the perceptual text embedding ê_ada is trained to predict the noise ε_t ∈ N(0, I) by minimizing the ℓ2 loss, i.e.,

\mathcal{L}_{ldm} = \lVert \epsilon_t - \epsilon_\theta(F^{tea}_t, \hat{e}_{ada}, t) \rVert^2_2.    (9)
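One training step on the predefined trajectory can be sketched as follows: diffuse the teacher feature with Eq. (8) and train the denoiser to predict the injected noise with Eq. (9). The cumulative-alpha schedule follows the standard DDPM convention; function and argument names are ours.

```python
# A minimal sketch of one trajectory-training step (Eqs. 8-9).
import torch
import torch.nn.functional as F

def trajectory_training_step(denoiser, f_tea, e_ada, alphas_cumprod):
    """f_tea: (B, N, C) teacher pseudo-feature; e_ada: (B, C) perceptual text condition."""
    b = f_tea.size(0)
    t = torch.randint(0, alphas_cumprod.size(0), (b,), device=f_tea.device)
    a_bar = alphas_cumprod[t].view(b, 1, 1)
    eps = torch.randn_like(f_tea)
    # Forward diffusion: F_t = sqrt(a_bar) * F_tea + sqrt(1 - a_bar) * eps   (Eq. 8)
    f_t = a_bar.sqrt() * f_tea + (1.0 - a_bar).sqrt() * eps
    # Conditional noise prediction and the l2 objective of Eq. (9).
    eps_pred = denoiser(f_t, e_ada, t)
    return F.mse_loss(eps_pred, eps)
```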
Adaptive Noise-Level Alignment (ANA). We treat the feature representations extracted by the student under the fine-tuning paradigm as noisy versions of the teacher's quality-aware features. However, the extent of noise that signifies the dissimilarity between the teacher and student features remains elusive and may exhibit variability across distinct training instances. As a result, identifying the optimal initial time step at which to initiate the diffusion process presents a challenging task. To overcome this, we introduce an Adaptive Noise Matching module to match the noise level of student features with a predefined noise level.

As depicted in Fig. 2, we develop a noise-level predictor using a straightforward convolutional module aimed at learning a weight γ to combine the fusion feature F̂_h of the student with Gaussian noise, resulting in F̂_t that aligns with F_t. This weight ensures that the student's outputs are harmonized with the noise level corresponding to the initial time step t. Consequently, the initial noisy feature involved in the denoising process is obtained as:

\hat{F}_t = \gamma \odot \hat{F}_h + (1 - \gamma) \odot \mathcal{N}(0, \mathbf{I}).    (10)
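A sketch of the alignment step of Eq. (10) is given below: a small predictor estimates a per-sample weight γ that mixes the enhanced student feature with Gaussian noise so that it lands on the teacher's predefined trajectory. The predictor layout (two linear layers with a sigmoid) is an assumption.

```python
# A minimal sketch of Adaptive Noise-Level Alignment (Eq. 10).
import torch
import torch.nn as nn

class NoiseLevelAligner(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.predictor = nn.Sequential(
            nn.Linear(dim, dim // 4), nn.ReLU(inplace=True),
            nn.Linear(dim // 4, 1), nn.Sigmoid(),   # gamma in (0, 1)
        )

    def forward(self, f_h: torch.Tensor) -> torch.Tensor:
        # f_h: (B, N, C) enhanced student feature F_hat_h.
        gamma = self.predictor(f_h.mean(dim=1)).view(-1, 1, 1)
        noise = torch.randn_like(f_h)
        # F_hat_t = gamma * F_hat_h + (1 - gamma) * N(0, I)   (Eq. 10)
        return gamma * f_h + (1.0 - gamma) * noise
```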
Lightweight Architecture. Considering the huge dimension of transformer features, performing the denoising process on features during training requires considerable iterations, which may result in a huge computational load. To address this issue, this paper proposes a lightweight diffusion model ε_θ(·) as an alternative to the U-Net architecture, as shown in Fig. 3. It consists of two bottleneck blocks from ResNet [He et al., 2016] and a 1×1 convolution. In addition, a cross-attention layer [Rombach et al., 2022] is added after each bottleneck block to aggregate the text and image features. We empirically find that this lightweight network is capable of effective noise removal with fewer than 5 sampling iterations, which is more than 200× faster sampling compared to the DDPM.
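The sketch below illustrates a denoiser of this flavor, with two bottleneck blocks, cross-attention to the perceptual text embedding after each block, and a learned time embedding. The exact widths, the time-conditioning scheme and the number of heads are assumptions rather than the released configuration.

```python
# A rough sketch of the lightweight conditional denoiser described above.
import torch
import torch.nn as nn

class Bottleneck(nn.Module):
    def __init__(self, dim: int, hidden: int):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv1d(dim, hidden, 1), nn.BatchNorm1d(hidden), nn.ReLU(inplace=True),
            nn.Conv1d(hidden, hidden, 3, padding=1), nn.BatchNorm1d(hidden), nn.ReLU(inplace=True),
            nn.Conv1d(hidden, dim, 1), nn.BatchNorm1d(dim),
        )

    def forward(self, x):                 # x: (B, C, N) token features
        return torch.relu(x + self.block(x))

class LightDenoiser(nn.Module):
    def __init__(self, dim: int, heads: int = 4, max_steps: int = 1000):
        super().__init__()
        self.time_emb = nn.Embedding(max_steps, dim)
        self.blocks = nn.ModuleList([Bottleneck(dim, dim // 4) for _ in range(2)])
        self.cross = nn.ModuleList(
            [nn.MultiheadAttention(dim, heads, batch_first=True) for _ in range(2)])
        self.out = nn.Conv1d(dim, dim, 1)

    def forward(self, f_t, e_ada, t):
        # f_t: (B, N, C) noisy feature; e_ada: (B, C) text condition; t: (B,) time steps.
        x = f_t + self.time_emb(t).unsqueeze(1)
        for block, attn in zip(self.blocks, self.cross):
            x = block(x.transpose(1, 2)).transpose(1, 2)
            # Cross-attention: image tokens query the perceptual text embedding.
            x = x + attn(x, e_ada.unsqueeze(1), e_ada.unsqueeze(1))[0]
        return self.out(x.transpose(1, 2)).transpose(1, 2)   # predicted noise
```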
During the sampling process, with the initial noise F̂_t obtained in Eq. (10), the trained network is employed for iterative denoising to reconstruct the feature F̂_0:

p_\theta\big(\hat{F}_{t-1} \mid \hat{F}_t\big) := \mathcal{N}\big(\hat{F}_{t-1};\, \epsilon_\theta(\hat{F}_t, \hat{e}_{ada}, t),\, \sigma^2_t \mathbf{I}\big).    (11)
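The reverse refinement can be sketched as a short deterministic DDIM-style loop over a handful of steps, starting from the aligned noisy feature and conditioned on ê_ada. The step schedule below is an illustrative assumption, not the exact released sampler.

```python
# A minimal sketch of the reverse feature-refinement loop (Eq. 11), eta = 0.
import torch

@torch.no_grad()
def refine_feature(denoiser, f_t, e_ada, alphas_cumprod, steps=(4, 3, 2, 1, 0)):
    x = f_t
    for i, t in enumerate(steps):
        a_bar = alphas_cumprod[t]
        t_batch = torch.full((x.size(0),), t, device=x.device, dtype=torch.long)
        eps_pred = denoiser(x, e_ada, t_batch)
        # Predict the clean feature F_hat_0 from the current estimate and noise.
        x0 = (x - (1.0 - a_bar).sqrt() * eps_pred) / a_bar.sqrt()
        if i + 1 < len(steps):
            a_prev = alphas_cumprod[steps[i + 1]]
            # Deterministic DDIM-style move to the next, less noisy step.
            x = a_prev.sqrt() * x0 + (1.0 - a_prev).sqrt() * eps_pred
        else:
            x = x0
    return x  # denoised quality-aware feature F_hat_0
```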
Subsequently, we employ the features F^tea derived from the pseudo-labels generated by the pre-trained teacher to supervise the denoising procedure using an MSE loss, which ensures the stability of the feature denoising process:

\mathcal{L}_{fea} = \lVert \hat{F}_0 - F^{tea} \rVert^2_2.    (12)

To sum up, the overall loss at the training stage is described as follows:

\mathcal{L} = \lambda_1 \mathcal{L}_{KL} + \lambda_2 \mathcal{L}_{ldm} + \lambda_3 \mathcal{L}_{fea} + \lVert \hat{y} - y_g \rVert_1.    (13)

Here, ŷ represents the predicted score of image x based on the denoised feature obtained from the transformer decoder, and y_g stands for the ground truth corresponding to the image x. The notation ‖·‖_1 denotes the ℓ1 regression loss. In this paper, we simply set λ1 = 0.5, λ2 = 1, and λ3 = 0.01 in all experiments.
4 Experiments

4.1 Benchmark Datasets and Evaluation Protocols

We evaluate the performance of the proposed PFD-IQA on eight typical BIQA datasets, including four synthetic datasets, LIVE [Sheikh et al., 2006], CSIQ [Larson and Chandler, 2010], TID2013 [Ponomarenko et al., 2015] and KADID [Lin et al., 2019], and four authentic datasets, LIVEC [Ghadiyaram and Bovik, 2015], KonIQ [Hosu et al., 2020], LIVEFB [Ying et al., 2020] and SPAQ [Fang et al., 2020]. Specifically, for the authentic datasets, LIVEC contains 1,162 images from different mobile devices and photographers. SPAQ comprises 11,125 photos from 66 smartphones. KonIQ-10k includes 10,073 images from public sources, while LIVEFB is the largest real-world dataset to date, with 39,810 images. The synthetic datasets involve original images distorted artificially using methods like JPEG compression and Gaussian blur. LIVE and CSIQ have 779 and 866 synthetically distorted images, respectively, with five and six distortion types each. TID2013 and KADID include 3,000 and 10,125 synthetically distorted images, respectively, spanning 24 and 25 distortion types.

In our experiments, we employ two widely used metrics: Spearman's Rank Correlation Coefficient (SRCC) and Pearson's Linear Correlation Coefficient (PLCC). These metrics evaluate prediction monotonicity and accuracy, respectively.
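For reference, both metrics follow their standard definitions and can be computed with SciPy; the helper below mirrors those definitions rather than any project-specific code.

```python
# A small sketch of the two evaluation metrics.
from scipy.stats import spearmanr, pearsonr

def srcc_plcc(pred_scores, mos_scores):
    srcc = spearmanr(pred_scores, mos_scores).correlation  # rank correlation (monotonicity)
    plcc = pearsonr(pred_scores, mos_scores)[0]             # linear correlation (accuracy)
    return srcc, plcc
```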
LIVE CSIQ TID2013 KADID LIVEC KonIQ LIVEFB SPAQ
Method PLCC SRCC PLCC SRCC PLCC SRCC PLCC SRCC PLCC SRCC PLCC SRCC PLCC SRCC PLCC SRCC
DIIVINE [Saad et al., 2012] 0.908 0.892 0.776 0.804 0.567 0.643 0.435 0.413 0.591 0.588 0.558 0.546 0.187 0.092 0.600 0.599
BRISQUE [Mittal et al., 2012] 0.944 0.929 0.748 0.812 0.571 0.626 0.567 0.528 0.629 0.629 0.685 0.681 0.341 0.303 0.817 0.809
ILNIQE [Zhang et al., 2015] 0.906 0.902 0.865 0.822 0.648 0.521 0.558 0.534 0.508 0.508 0.537 0.523 0.332 0.294 0.712 0.713
BIECON [Kim and Lee, 2016] 0.961 0.958 0.823 0.815 0.762 0.717 0.648 0.623 0.613 0.613 0.654 0.651 0.428 0.407 - -
MEON [Ma et al., 2017] 0.955 0.951 0.864 0.852 0.824 0.808 0.691 0.604 0.710 0.697 0.628 0.611 0.394 0.365 - -
WaDIQaM [Bosse et al., 2017] 0.955 0.960 0.844 0.852 0.855 0.835 0.752 0.739 0.671 0.682 0.807 0.804 0.467 0.455 - -
DBCNN [Zhang et al., 2018] 0.971 0.968 0.959 0.946 0.865 0.816 0.856 0.851 0.869 0.851 0.884 0.875 0.551 0.545 0.915 0.911
TIQA [You and Korhonen, 2021] 0.965 0.949 0.838 0.825 0.858 0.846 0.855 0.85 0.861 0.845 0.903 0.892 0.581 0.541 - -
MetaIQA [Zhu et al., 2020] 0.959 0.960 0.908 0.899 0.868 0.856 0.775 0.762 0.802 0.835 0.856 0.887 0.507 0.54 - -
P2P-BM [Ying et al., 2020] 0.958 0.959 0.902 0.899 0.856 0.862 0.849 0.84 0.842 0.844 0.885 0.872 0.598 0.526 - -
HyperIQA [Su et al., 2020] 0.966 0.962 0.942 0.923 0.858 0.840 0.845 0.852 0.882 0.859 0.917 0.906 0.602 0.544 0.915 0.911
TReS [Golestaneh et al., 2022] 0.968 0.969 0.942 0.922 0.883 0.863 0.858 0.859 0.877 0.846 0.928 0.915 0.625 0.554 - -
MUSIQ [Ke et al., 2021] 0.911 0.940 0.893 0.871 0.815 0.773 0.872 0.875 0.746 0.702 0.928 0.916 0.661 0.566 0.921 0.918
DACNN [Pan et al., 2022] 0.980 0.978 0.957 0.943 0.889 0.871 0.905 0.905 0.884 0.866 0.912 0.901 - - 0.921 0.915
DEIQT [Qin et al., 2023] 0.982 0.980 0.963 0.946 0.908 0.892 0.887 0.889 0.894 0.875 0.934 0.921 0.663 0.571 0.923 0.919
PFD-IQA (ours) 0.985 0.985 0.972 0.962 0.937 0.924 0.935 0.931 0.922 0.906 0.945 0.930 0.667 0.572 0.925 0.922

Table 1: Performance comparison measured by averages of SRCC and PLCC, where bold entries indicate the best results, underlines indicate
the second-best.

Training   LIVEFB   LIVEFB   LIVEC   KonIQ   LIVE   CSIQ
Testing    KonIQ    LIVEC    KonIQ   LIVEC   CSIQ   LIVE
DBCNN      0.716    0.724    0.754   0.755   0.758  0.877
P2P-BM     0.755    0.738    0.740   0.770   0.712  -
HyperIQA   0.758    0.735    0.772   0.785   0.744  0.926
TReS       0.713    0.740    0.733   0.786   0.761  -
DEIQT      0.733    0.781    0.744   0.794   0.781  0.932
PFD-IQA    0.775    0.783    0.796   0.818   0.817  0.942

Table 2: SRCC on the cross-dataset validation. The best results are highlighted in bold, second-best is underlined.
4.2 Implementation Details

For the student network, we follow the typical training strategy of randomly cropping the input image into 10 image patches with a resolution of 224 × 224. Each image patch is then reshaped as a sequence of patches with patch size p = 16 and input token dimension D = 384. We create the Transformer encoder based on the ViT-B proposed in DeiT III [Touvron et al., 2022]. The encoder depth is set to 12 and the number of heads to h = 12. For the decoder, the depth is set to one. Our model is trained for 9 epochs. The learning rate is set to 2 × 10−4 with a decay factor of 10 every 3 epochs. The batch size depends on the size of the dataset, and is 16 and 128 for LIVEC and KonIQ, respectively. For each dataset, 80% of the images are used for training and the remaining 20% are used for testing. We repeat this process 10 times to mitigate performance bias and report the average SRCC and PLCC. For the pre-trained teacher network, we adopt ViT-B/16 [Radford et al., 2021] as the visual encoder together with its text encoder. The re-training hyper-parameter settings are consistent with [Zhang et al., 2023].

4.3 Overall Prediction Performance Comparison

For competing models, we either directly adopt the publicly available implementations or re-train them on our datasets with the training codes provided by the respective authors. Tab. 1 reports the comparison results between the proposed PFD-IQA and 14 state-of-the-art BIQA methods, including hand-crafted feature-based BIQA methods, such as ILNIQE [Zhang et al., 2015] and BRISQUE [Mittal et al., 2012], and deep learning-based methods, i.e., MUSIQ [Ke et al., 2021] and MetaIQA [Zhu et al., 2020]. It is observed that PFD-IQA achieves superior performance over all other methods across the eight datasets. Since the images in these eight datasets cover various image content and distortion types, it is very challenging to consistently achieve leading performance on all of them. Accordingly, these observations confirm the effectiveness and superiority of PFD-IQA in characterizing image quality.

4.4 Generalization Capability Validation

We further evaluate the generalization ability of PFD-IQA by a cross-dataset validation approach, where the BIQA model is trained on one dataset and then tested on the others without any fine-tuning or parameter adaptation. Tab. 2 reports the experimental results of SRCC averages on five datasets. As observed, PFD-IQA achieves the best performance on six cross-dataset settings, achieving clear performance gains on the LIVEC dataset and competitive performance on the KonIQ dataset. These results strongly verify the generalization ability of PFD-IQA.

4.5 Qualitative Analysis

Visualization of activation maps. We employ GradCAM [Selvaraju et al., 2017] to visualize the feature attention map, as shown in Fig. 4. Our findings indicate that PFD-IQA effectively focuses on the quality degradation areas, while DEIQT [Qin et al., 2023] (the second best in Tab. 1) overly relies on semantics and even focuses on regions of little importance to the image quality (e.g., in the second row, DEIQT incorrectly focuses on the little girl in the bottom left, while ignoring the overexposure and blurriness in the center of the picture). Furthermore, Fig. 4 presents a comparison of quality predictions between our proposed PFD-IQA and DEIQT. PFD-IQA consistently outperforms DEIQT across all levels of image quality assessment, particularly demonstrating significant improvement for moderately distorted images; refer to the supplementary material for more visualizations.

Figure 4: Visualization of the features of DEIQT and PFD-IQA: (a) input image, (b) DEIQT, (c) PFD-IQA.
Index  PDA  PDR   LIVE             CSIQ             TID2013          KADID            LIVEC            Avg.
                  PLCC    SRCC     PLCC    SRCC     PLCC    SRCC     PLCC    SRCC     PLCC    SRCC
a)                0.966   0.964    0.952   0.935    0.899   0.888    0.878   0.884    0.881   0.863    0.911
b)     ✓          0.984   0.981    0.963   0.954    0.927   0.915    0.925   0.925    0.911   0.895    0.938
c)          ✓     0.983   0.982    0.968   0.959    0.910   0.890    0.918   0.919    0.916   0.897    0.934
d)     ✓    ✓     0.985 (+1.9%)  0.985 (+2.1%)  0.972 (+2.0%)  0.962 (+2.7%)  0.937 (+3.8%)  0.924 (+3.6%)  0.935 (+6.0%)  0.931 (+4.7%)  0.922 (+4.1%)  0.906 (+4.3%)  0.946 (+3.5%)

Table 3: Ablation experiments on the LIVE, CSIQ, TID2013, KADID and LIVEC datasets. Here, PDA and PDR refer to the Perceptual Prior Discovery and Aggregation module and the Diffusion Refinement module; bold entries indicate the best result.

Index  ANA  PPA   LIVEC            KonIQ            Avg. Std.
                  PLCC    SRCC     PLCC    SRCC
a)                0.881   0.863    0.929   0.914    ±0.011
b)     ✓          0.914   0.896    0.938   0.928    ±0.005
c)          ✓     0.918   0.900    0.940   0.928    ±0.008
d)     ✓    ✓     0.922   0.906    0.945   0.930    ±0.004

Table 4: Ablation experiments on the PDR components on the LIVEC and KonIQ datasets. Bold entries indicate the best performance.

Sampling Number T   LIVE             LIVEC            Avg.
                    PLCC    SRCC     PLCC    SRCC
1                   0.982   0.980    0.913   0.890    0.941
3                   0.984   0.982    0.918   0.899    0.945
5                   0.985   0.985    0.922   0.906    0.950
10                  0.983   0.983    0.921   0.901    0.947

Table 5: Ablation experiments on the number of sampling iterations. Bold entries indicate the best performance.

4.6 Ablation Study

This section presents ablation experiments, with results shown in Tab. 3. The experiments focus on the examination of two main modules: the Perceptual Prior Discovery and Aggregation (PDA) module and the Perceptual Prior-based Diffusion Refinement (PDR) module.

Perceptual Prior Discovery and Aggregation. When exclusively incorporating the PDA module (referred to as scenario b)), discernible enhancements are observed in the PLCC (Pearson Linear Correlation Coefficient), ranging from 0.8% to 4.7% across distinct datasets. This outcome underscores the module's efficacy in augmenting the model's awareness of quality through the assimilation of supplementary quality-related knowledge. A similar positive impact is observed in the SRCC, manifesting as an improvement ranging from 0.7% to 4.1% across different datasets.

Feature Diffusion Refinement Module. Upon sole integration of the PDR module (referred to as scenario c)), discernible improvements are observed in the PLCC, with advancements ranging from 0.9% to 4.0% across diverse datasets. Correspondingly, there is a notable enhancement in SRCC, with gains varying from 0.8% to 3.5% across these datasets. This observation suggests that the PDR module possesses the capacity to effectively optimize features, even in the absence of the perceptual text prompt condition. Additionally, combining the PDR and PDA modules (scenario d)) notably enhances PLCC (3.8% to 6.0%) and SRCC (3.6% to 4.7%). This highlights their synergistic effect, substantially improving PFD-IQA's robustness and accuracy.

ANA and PPA modules. As shown in Tab. 4, the PPA module alone enhances performance by up to 3.4% over the baseline, leveraging perceptual text embedding cues, though lacking in stability. The ANA module, on the other hand, significantly lowers the model's standard deviation and offers competitive performance improvements. When combined, ANA aligns features with the predefined denoising trajectories, reducing randomness, while PPA precisely refines perceptual features via image-text interaction. Consequently, the synergistic cooperation of both modules helps achieve SOTA performance.

Number of Sampling Iterations. In our study, we employ DDIM [Song et al., 2020] for acceleration. The experiments examine how different sampling numbers impact performance. Tab. 5 demonstrates that even single-step denoising significantly outperforms the baseline. We find that 5 iterations are adequate for effective performance in our approach. Therefore, this setting is adopted in all experiments to balance efficiency and accuracy.
5 Conclusion

In conclusion, our study introduces PFD-IQA, a pioneering Blind Image Quality Assessment framework leveraging the diffusion model's noise removal capabilities. Addressing key challenges in BIQA, PFD-IQA features two novel modules: the Perceptual Prior Discovery and Aggregation module for improved feature preservation and noise elimination, and the Perceptual Prior-based Feature Refinement Strategy for defining denoising trajectories in the absence of explicit benchmarks. These innovations, combining text prompts with perceptual priors and employing an adaptive noise alignment mechanism, enable PFD-IQA to refine quality-aware features with precision. Our experiments demonstrate that PFD-IQA achieves exceptional performance with minimal sampling steps and a lightweight model, marking a significant advancement in the application of diffusion models to image quality assessment.
References

Tomer Amit, Tal Shaharbany, Eliya Nachmani, and Lior Wolf. Segdiff: Image segmentation with diffusion probabilistic models. arXiv preprint arXiv:2112.00390, 2021.

Mark R Banham and Aggelos K Katsaggelos. Digital image restoration. IEEE Signal Processing Magazine, 14(2):24–41, 1997.

Sebastian Bosse, Dominique Maniry, Klaus-Robert Müller, Thomas Wiegand, and Wojciech Samek. Deep neural networks for no-reference and full-reference image quality assessment. IEEE Transactions on Image Processing, 27(1):206–219, 2017.

Shoufa Chen, Peize Sun, Yibing Song, and Ping Luo. Diffusiondet: Diffusion model for object detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 19830–19843, 2023.

Chao Dong, Chen Change Loy, Kaiming He, and Xiaoou Tang. Image super-resolution using deep convolutional networks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 38(2):295–307, 2015.

Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. In International Conference on Learning Representations, 2021.

Stefan Elfwing, Eiji Uchibe, and Kenji Doya. Sigmoid-weighted linear units for neural network function approximation in reinforcement learning. Neural Networks, 107:3–11, 2018.

Yuming Fang, Hanwei Zhu, Yan Zeng, Kede Ma, and Zhou Wang. Perceptual quality assessment of smartphone photography. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3677–3686, 2020.

Deepti Ghadiyaram and Alan C Bovik. Massive online crowdsourced study of subjective and objective picture quality. IEEE Transactions on Image Processing, 25(1):372–387, 2015.

S Alireza Golestaneh, Saba Dadsetan, and Kris M Kitani. No-reference image quality assessment via transformers, relative ranking, and self-consistency. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 1220–1230, 2022.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778, 2016.

Dan Hendrycks and Thomas Dietterich. Benchmarking neural network robustness to common corruptions and perturbations. arXiv preprint arXiv:1903.12261, 2019.

Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems, 33:6840–6851, 2020.

Vlad Hosu, Hanhe Lin, Tamas Sziranyi, and Dietmar Saupe. Koniq-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing, 29:4041–4056, 2020.

Weilong Hou, Xinbo Gao, Dacheng Tao, and Xuelong Li. Blind image quality assessment via deep learning. IEEE Transactions on Neural Networks and Learning Systems, 26(6):1275–1286, 2014.

Tao Huang, Yuan Zhang, Mingkai Zheng, Shan You, Fei Wang, Chen Qian, and Chang Xu. Knowledge diffusion for distillation. arXiv preprint arXiv:2305.15712, 2023.

Junjie Ke, Qifei Wang, Yilin Wang, Peyman Milanfar, and Feng Yang. Musiq: Multi-scale image quality transformer. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 5148–5157, 2021.

Jongyoo Kim and Sanghoon Lee. Fully deep blind image quality predictor. IEEE Journal of Selected Topics in Signal Processing, 11(1):206–220, 2016.

Eric Cooper Larson and Damon Michael Chandler. Most apparent distortion: full-reference image quality assessment and the role of strategy. Journal of Electronic Imaging, 19(1):011006, 2010.

Xuelong Li, Dacheng Tao, Xinbo Gao, and Wen Lu. A natural image quality evaluation metric. Signal Processing, 89(4):548–555, 2009.

Hanhe Lin, Vlad Hosu, and Dietmar Saupe. Kadid-10k: A large-scale artificially distorted iqa database. In 2019 Eleventh International Conference on Quality of Multimedia Experience (QoMEX), pages 1–3. IEEE, 2019.

Xialei Liu, Joost van de Weijer, and Andrew D. Bagdanov. Rankiqa: Learning from rankings for no-reference image quality assessment. In The IEEE International Conference on Computer Vision (ICCV), October 2017.

Kede Ma, Wentao Liu, Kai Zhang, Zhengfang Duanmu, Zhou Wang, and Wangmeng Zuo. End-to-end blind image quality assessment using deep neural networks. IEEE Transactions on Image Processing, 27(3):1202–1213, 2017.

Anish Mittal, Anush Krishna Moorthy, and Alan Conrad Bovik. No-reference image quality assessment in the spatial domain. IEEE Transactions on Image Processing, 21(12):4695–4708, 2012.

Zhaoqing Pan, Hao Zhang, Jianjun Lei, Yuming Fang, Xiao Shao, Nam Ling, and Sam Kwong. Dacnn: Blind image quality assessment via a distortion-aware convolutional neural network. IEEE Transactions on Circuits and Systems for Video Technology, 32(11):7518–7531, 2022.

Nikolay Ponomarenko, Lina Jin, Oleg Ieremeiev, Vladimir Lukin, Karen Egiazarian, Jaakko Astola, Benoit Vozel, Kacem Chehdi, Marco Carli, Federica Battisti, et al. Image database tid2013: Peculiarities, results and perspectives. Signal Processing: Image Communication, 30:57–77, 2015.

Guanyi Qin, Runze Hu, Yutao Liu, Xiawu Zheng, Haotian Liu, Xiu Li, and Yan Zhang. Data-efficient image quality assessment with attention-panel decoder. In Proceedings of the Thirty-Seventh AAAI Conference on Artificial Intelligence, 2023.

Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, pages 8748–8763. PMLR, 2021.

Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10684–10695, 2022.

Michele A Saad, Alan C Bovik, and Christophe Charrier. Blind image quality assessment: A natural scene statistics approach in the dct domain. IEEE Transactions on Image Processing, 21(8):3339–3352, 2012.

Ramprasaath R Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, and Dhruv Batra. Grad-cam: Visual explanations from deep networks via gradient-based localization. In Proceedings of the IEEE International Conference on Computer Vision, pages 618–626, 2017.

Hamid R Sheikh, Muhammad F Sabir, and Alan C Bovik. A statistical evaluation of recent full reference image quality assessment algorithms. IEEE Transactions on Image Processing, 15(11):3440–3451, 2006.

Chenyang Shi and Yandan Lin. Full reference image quality assessment based on visual salience with color appearance and gradient similarity. IEEE Access, 8:97310–97320, 2020.

Jiaming Song, Chenlin Meng, and Stefano Ermon. Denoising diffusion implicit models. arXiv preprint arXiv:2010.02502, 2020.

Tianshu Song, Leida Li, Deqiang Cheng, Pengfei Chen, and Jinjian Wu. Active learning-based sample selection for label-efficient blind image quality assessment. IEEE Transactions on Circuits and Systems for Video Technology, 2023.

Shaolin Su, Qingsen Yan, Yu Zhu, Cheng Zhang, Xin Ge, Jinqiu Sun, and Yanning Zhang. Blindly assess image quality in the wild guided by a self-adaptive hyper network. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3667–3676, 2020.

Dacheng Tao, Xuelong Li, Wen Lu, and Xinbo Gao. Reduced-reference iqa in contourlet domain. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), 39(6):1623–1627, 2009.

Hugo Touvron, Matthieu Cord, and Hervé Jégou. Deit iii: Revenge of the vit. arXiv preprint arXiv:2204.07118, 2022.

Zhou Wang, Alan C Bovik, Hamid R Sheikh, and Eero P Simoncelli. Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing, 13(4):600–612, 2004.

Jinjian Wu, Jupo Ma, Fuhu Liang, Weisheng Dong, Guangming Shi, and Weisi Lin. End-to-end blind image quality prediction with cascaded deep neural network. IEEE Transactions on Image Processing, 29:7414–7426, 2020.

Xiaohan Yang, Fan Li, and Hantao Liu. Ttl-iqa: Transitive transfer learning based no-reference image quality assessment. IEEE Transactions on Multimedia, 23:4326–4340, 2020.

Zhendong Yang, Zhe Li, Mingqi Shao, Dachuan Shi, Zehuan Yuan, and Chun Yuan. Masked generative distillation. In European Conference on Computer Vision, pages 53–69. Springer, 2022.

Zhenqiang Ying, Haoran Niu, Praful Gupta, Dhruv Mahajan, Deepti Ghadiyaram, and Alan Bovik. From patches to pictures (paq-2-piq): Mapping the perceptual space of picture quality. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3575–3585, 2020.

Junyong You and Jari Korhonen. Transformer for image quality assessment. In 2021 IEEE International Conference on Image Processing (ICIP), pages 1389–1393. IEEE, 2021.

Lin Zhang, Lei Zhang, and Alan C Bovik. A feature-enriched completely blind image quality evaluator. IEEE Transactions on Image Processing, 24(8):2579–2591, 2015.

Weixia Zhang, Kede Ma, Jia Yan, Dexiang Deng, and Zhou Wang. Blind image quality assessment using a deep bilinear convolutional neural network. IEEE Transactions on Circuits and Systems for Video Technology, 30(1):36–47, 2018.

Linfeng Zhang, Xin Chen, Junbo Zhang, Runpei Dong, and Kaisheng Ma. Contrastive deep supervision. In Computer Vision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part XXVI, pages 1–19. Springer, 2022.

Weixia Zhang, Guangtao Zhai, Ying Wei, Xiaokang Yang, and Kede Ma. Blind image quality assessment via vision-language correspondence: A multitask learning perspective. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 14071–14081, 2023.

Kai Zhao, Kun Yuan, Ming Sun, Mading Li, and Xing Wen. Quality-aware pre-trained models for blind image quality assessment. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 22302–22313, 2023.

Yu Zhou, Leida Li, Shiqi Wang, Jinjian Wu, Yuming Fang, and Xinbo Gao. No-reference quality assessment for view synthesis using dog-based edge statistics and texture naturalness. IEEE Transactions on Image Processing, 28(9):4566–4579, 2019.

Hancheng Zhu, Leida Li, Jinjian Wu, Weisheng Dong, and Guangming Shi. Metaiqa: Deep meta-learning for no-reference image quality assessment. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 14143–14152, 2020.
