
www.nature.com/scientificreports

A deep network embedded with rough fuzzy discretization for OCT fundus image segmentation

Qiong Chen1,2*, Lirong Zeng3 & Cong Lin1,3*
Noise and redundant information are the main causes of the performance bottleneck in deep-learning-based medical image segmentation algorithms. To this end, we propose a deep network embedded with rough fuzzy discretization (RFDDN) for OCT fundus image segmentation. First, we establish an information decision table for OCT fundus image segmentation and regard each category of segmentation region as a fuzzy set. Then, we use fuzzy c-means clustering to obtain the membership degrees of pixels to each segmentation region. Based on the membership functions and the equivalence relation generated by the brightness attribute, we design an individual fitness function built on the rough fuzzy set, and use a genetic algorithm to search for the best breakpoints to discretize the features of OCT fundus images. Finally, we take the rough-fuzzy-set-based feature discretization as the pre-module of the deep neural network, and introduce a deep supervised attention mechanism to obtain important multi-scale information. We compare RFDDN with U-Net, ReLayNet, CE-Net, MultiResUNet, and ISCLNet on two groups of 3D retinal OCT data. RFDDN is superior to the other five methods on all evaluation indicators, and ISCLNet obtains the second-best results. On average, the DSC, sensitivity, and specificity of RFDDN are 3.3%, 2.6%, and 7.1% higher than those of ISCLNet, while its HD95 and ASD are 6.6% and 19.7% lower, respectively. The experimental results show that our method can effectively eliminate noise and redundant information in OCT fundus images, and greatly improve the accuracy of OCT fundus image segmentation while taking interpretability and computational efficiency into account.

The macular region is the most sensitive part of the fundus to light1. Once inflammatory reaction and liquid infiltration occur in this area, edema lesions will form2. Macular edema can cause visual impairment or even visual loss, and is secondary to retinal diseases3. Optical coherence tomography (OCT) has the characteristics of high resolution, non-invasiveness, and non-contact operation, and has become an indispensable imaging method in the diagnosis and treatment of macular diseases4,5. OCT can identify three types of liquid lesions, namely intraretinal fluid (IRF), subretinal fluid (SRF), and pigment epithelial detachment (PED)6. Although OCT has become a standard tool for quantitative analysis of macular diseases in the clinic, manual analysis of fluids is subjective, labor-intensive, and error-prone7. Therefore, automatic fluid region segmentation has been proposed to assist ophthalmologists in detecting abnormal diseases of fundus structures8. The automatic fluid region segmentation process of OCT fundus images is shown in Fig. 1. First, the OCT image is obtained by scanning with the OCT device. Then, an automatic segmentation algorithm is used to divide the lesion area in the image and further distinguish the morphology of the lesion area.
Traditional methods used by early OCT segmentation systems include threshold-based segmentation9, graph-based segmentation10, segmentation based on a fuzzy level set with cross-sectional voting11, segmentation based on a locally adaptive loosely coupled level set12, and machine learning algorithms using manual features13. However, these traditional methods are sensitive to image quality and lack generalization ability. Semi-supervised

1College of Electronic and Information Engineering, Guangdong Ocean University, Haida Road, Zhanjiang 524000, Guangdong, China. 2Department of Earth System Science, Tsinghua University, Shuangqing Road, Beijing 100084, Beijing, China. 3School of Information and Communication Engineering, Hainan University, Renmin Avenue, Haikou 570228, Hainan, China. *email: 13907534385@163.com; lincong@hainanu.edu.cn

Scientific Reports | (2023) 13:328 | https://doi.org/10.1038/s41598-023-27479-6 1


Figure 1.  Automatic fluid region segmentation process of OCT fundus images.

methods can solve the problem of retinal OCT image segmentation with low contrast and speckle noise, but need to rely on expert information for multiple iterations4.
As a mathematical method for dealing with imprecise, uncertain, and incomplete data, the rough set can discover hidden knowledge and reveal potential laws through analyzing and reasoning over data. Since the rough set requires no prior knowledge, it has also been applied to medical image segmentation. Banerjee et al. used both rough sets and the contraharmonic mean for bias field estimation to remove the intensity inhomogeneity artifact from MR images14. Jothi et al. used a hybridization of the tolerance rough set and the firefly algorithm to select the imperative features of brain tumors from segmented MRI images15. Although the rough set exhibits good performance on medical image segmentation, it has difficulty handling continuous features. In addition, the rough set is prone to generating a large number of classification rules, which substantially increases the amount of calculation.
Compared with the above-mentioned segmentation methods, segmentation methods based on deep learning can automatically learn and extract image features, thus making great improvements16–22. Deep learning can not only provide powerful models to represent complex relationships, but also make highly accurate predictions from complex data sources through multi-level structures23,24. Ronneberger et al. proposed the U-shape Net (U-Net) framework, showing promising results on neuronal structure segmentation in electron microscopic recordings and cell segmentation in light microscopic images; it has become a popular neural network architecture for biomedical image segmentation tasks16. Roy et al. proposed an end-to-end fully convolutional framework (ReLayNet) for reliable segmentation of retinal layers and fluid masses in eye OCT scans20. Gu et al. proposed a context encoder network (CE-Net) to capture more high-level features and preserve more spatial information for 2D medical image segmentation25. Ibtehaz et al. developed a novel architecture (MultiResUNet) to improve the performance of the U-Net model in segmenting multimodal medical images26. He et al. proposed an intra- and inter-slice contrastive learning network (ISCLNet) to improve point-supervised OCT fluid segmentation for rapid and accurate prediction of fluid regions8.
Deep learning technology has achieved promising results in OCT fundus image segmentation. However, the presence of noise and redundant information in images is the main reason for the performance bottleneck of deep networks27–29. Affected by the quality of imaging sensors, environmental conditions, and interference in the transmission channel, OCT fundus images inevitably introduce noise and redundant information during formation, transmission, reception, and processing. Noise and redundant information not only reduce the quality of the original image, but also cause errors to accumulate across the network layers, which seriously affects the segmentation performance of the deep neural network30. As an important data preprocessing technology, feature discretization is widely used in big data analysis31–33. It converts continuous attribute values into discrete ones, thereby eliminating the negative effects of noise and redundant information34,35. In addition, feature discretization can be useful for missing value imputation, thereby repairing missing regions in the image36,37.
Another problem of deep networks is their lack of robustness and interpretability: they have difficulty dealing with the uncertain information caused by noise, disturbance, and blurred boundaries38–40. Although feature discretization enables dimension reduction of complex data and eliminates the negative effects of noise and redundant information, the uncertain information such as randomness and fuzziness contained in the data makes it difficult to obtain high-quality discrete intervals. The rough fuzzy set41 is considered a more powerful model for big data uncertainty analysis than the fuzzy set42 and the rough set43. By introducing fuzzy membership into the equivalence relation of the rough set to describe the correlation between samples, it has more flexibility in dealing with uncertain information. Feature discretization based on the rough fuzzy set can obtain the best discretization result by effectively quantifying the uncertain information in the data. Therefore, the combination of deep learning and rough-fuzzy-set-based feature discretization provides a feasible solution for improving the segmentation of OCT fundus images.
To this end, we propose a deep network embedded with rough fuzzy discretization (RFDDN) for OCT fundus
image segmentation. Our main contributions are as follows:

(1) we establish the information decision table of OCT fundus image segmentation, and calculate the membership degrees of pixels to each segmentation region using fuzzy c-means clustering to achieve the fuzzification of pixel categories;
(2) we design the individual fitness function based on the rough fuzzy set, and use a genetic algorithm to search for the best breakpoints to discretize the features of OCT fundus images, reducing the uncertainty caused by noise and redundant information;
(3) we take the rough-fuzzy-set-based feature discretization as the pre-module of the deep neural network, and introduce the deep supervised attention mechanism to obtain important multi-scale information, thereby improving the segmentation accuracy of OCT fundus images.

We compare RFDDN with state-of-the-art segmentation algorithms on OCT fundus images. The experimental results show that our method can effectively eliminate the noise and redundant information in OCT fundus

images, and greatly improve the accuracy of OCT fundus image segmentation while taking into account the
interpretability and computational efficiency.
The rest of this paper is arranged as follows: section “Related work” reviews the related work; section “Methods” elaborates the proposed algorithm flow; the experimental results are analyzed and discussed in section “Results and discussion”; section “Conclusion” summarizes this paper.

Related work
We introduce the definition of feature discretization and the binary coding method of the genetic algorithm, and explain the basic concepts of rough sets and fuzzy sets. Then, we describe the deep supervised attention mechanism.

Feature discretization and binary coding method. In feature discretization, continuous attributes are divided into a finite number of subintervals, and these subintervals are then associated with a set of discrete values44–46. The basic flow of OCT fundus image feature discretization is shown in Fig. 2. First, the pixel values of the OCT fundus image are sorted, and duplicate values are removed to obtain a set of candidate breakpoints. Second, breakpoints of continuous attributes are selected from the set of candidate breakpoints, and whether to split an interval or merge adjacent subintervals is decided according to the judgment criteria of the adopted discretization algorithm. If the termination condition is satisfied, the discretization result is output; otherwise, the remaining breakpoints are continuously selected from the set of candidate breakpoints to perform attribute discretization.
The genetic algorithm has inherent implicit parallelism and strong global search ability, and has achieved promising results on the problem of feature discretization47. The genetic algorithm uses binary coding to encode the candidate breakpoints. Each bit in the binary code corresponds to a candidate breakpoint; the values ‘1’ and ‘0’ indicate that the breakpoint is selected or discarded, respectively. Assuming that BP = {bp1, bp2, …, bpn} is the candidate breakpoint set of an OCT fundus image, the chromosome structure in the genetic algorithm is shown in Fig. 3. The length of the chromosome is n bits. The set of selected candidate breakpoints constitutes a discretization scheme.
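As a minimal sketch (function names and data layout are ours, not from the paper), the candidate breakpoint generation of Fig. 2 and the chromosome encoding of Fig. 3 can be expressed as:

```python
import random

def candidate_breakpoints(pixels):
    # Sort the pixel values and drop duplicates (first step of Fig. 2).
    return sorted(set(pixels))

def random_chromosome(n):
    # One bit per candidate breakpoint: '1' = selected, '0' = discarded.
    return [random.randint(0, 1) for _ in range(n)]

def decode(chromosome, breakpoints):
    # The set of selected breakpoints constitutes one discretization scheme.
    return [bp for bit, bp in zip(chromosome, breakpoints) if bit == 1]
```

A genetic algorithm then evolves such chromosomes, scoring each decoded scheme with the fitness function defined later in the paper.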

Figure 2.  Basic flow of OCT fundus image feature discretization.


Figure 3.  Chromosome structure.

Fuzzy set and rough set. A fuzzy set is used to characterize fuzzy phenomena that are difficult to measure precisely because there is no strict boundary division42. A fuzzy set A is defined as follows:

$$A = \{(x, \mu_A(x)) : x \in U\}, \tag{1}$$

where U is the universe, and μA(x) (0 ≤ μA(x) ≤ 1) is the membership of x to A.
The rough set regards knowledge as the ability to classify objects in the universe43. The two-tuple K = (U, R) is a knowledge base, where R is an equivalence relation cluster on the universe U. For X ⊆ U, the lower and upper approximations of X with respect to R ∈ R are

$$\underline{R}X = \{x \in U \mid [x]_R \subseteq X\}, \tag{2}$$

$$\overline{R}X = \{x \in U \mid [x]_R \cap X \neq \emptyset\}, \tag{3}$$

where [x]R = {y ∈ U | (x, y) ∈ R} is the equivalence class of x under R. The quotient set U/R = {[x]R | x ∈ U} is called knowledge.
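A small illustrative sketch of Eqs. (2) and (3) for a finite universe, assuming the equivalence relation is induced by objects sharing an attribute value (all names are ours):

```python
from collections import defaultdict

def approximations(U, X, attr):
    """Lower and upper approximations of X within U under the equivalence
    relation 'same value of attr' (Eqs. 2 and 3)."""
    classes = defaultdict(set)
    for x in U:
        classes[attr(x)].add(x)           # build the equivalence classes [x]_R
    X = set(X)
    lower, upper = set(), set()
    for c in classes.values():
        if c <= X:                        # [x]_R is fully contained in X
            lower |= c
        if c & X:                         # [x]_R intersects X
            upper |= c
    return lower, upper
```

For example, partitioning {0,…,5} by integer halving and approximating {0, 1, 2} yields the lower approximation {0, 1} and the upper approximation {0, 1, 2, 3}.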
The fuzzy set and the rough set represent two different approaches to uncertainty. The fuzzy set addresses the gradualness of knowledge through membership functions, whereas the rough set addresses the granularity of knowledge through equivalence relations. The rough fuzzy set combines the advantages of the rough set and the fuzzy set, and has a more powerful uncertainty analysis capability48. The rough fuzzy set can also be combined with deep learning to achieve a more powerful system in terms of learning and generalization. Affonso et al. presented a methodology for biological image classification through a rough-fuzzy artificial neural network49. This work shows that combining deep learning with rough fuzzy sets has great potential for OCT fundus image segmentation.

Deep supervised attention mechanism. The attention mechanism exploits the autocorrelation matrix to capture long-distance dependence between pixels in OCT images, enabling the neural network to focus on the extraction of key features50. On this basis, the deep supervised attention mechanism realizes the mapping of pixel representations to area objects by constructing the correlation between pixels and area objects. The principle of the deep supervised attention mechanism is shown in Fig. 4. First, the rough segmentation map formed by deep supervision and the raw input are used to construct the relationship between pixels and area objects, thus realizing the mapping of pixel representations to area objects. Then, the original spatial features are restored by reverse mapping, thereby enhancing the context representation of pixels with respect to classes.

Figure 4.  Principle of the deep supervised attention mechanism.


Methods
We first introduce the process of calculating membership by fuzzy c-means clustering (FCM). Then, we construct
the fitness function using the rough fuzzy set as the discretization criterion. Finally, we elaborate RFDDN for
OCT fundus image segmentation.

Membership calculated by FCM. In general, it is error-prone to classify an object in a dataset completely into a single category. FCM can assign a weight between each object and each category to indicate the degree to which the object belongs to the category42. To build a rough fuzzy model applicable to OCT fundus images, we first need to find the membership functions of pixels to segmentation regions.
An OCT fundus image can be represented by an information decision table S = (U, B, C, V, f), where U is the pixel set, B is the brightness attribute, C is the category attribute of the segmentation region, V is the range, and f is the mapping function from objects to each attribute range. Assuming that U contains N pixels, the number of categories is M, xi is the brightness value of the i-th pixel (1 ≤ i ≤ N), and the brightness value of the class center of the j-th category is initialized to cj0 (1 ≤ j ≤ M), the membership of the i-th pixel to the j-th category is initialized as

$$u_{ij}^{0} = 1 \Bigg/ \sum_{k=1}^{M} \left( \frac{x_i - c_j^{0}}{x_i - c_k^{0}} \right)^{2}. \tag{4}$$

Then, the brightness value of the class center of the j-th category is updated in the next iteration as

$$c_j^{1} = \sum_{i=1}^{N} \left(u_{ij}^{0}\right)^{2} x_i \Bigg/ \sum_{i=1}^{N} \left(u_{ij}^{0}\right)^{2}. \tag{5}$$

The membership and the brightness value of the class center are updated iteratively until the following termination condition is met:

$$\max_{i,j} \left| u_{ij}^{t+1} - u_{ij}^{t} \right| < \varepsilon, \tag{6}$$

where t is the number of iterations and ε is the error threshold. Thus, the membership of each pixel in U to each category is obtained, as shown in Algorithm 1.
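The update loop of Eqs. (4)–(6) can be sketched as follows. This is an illustrative implementation, not the authors' code; the squared memberships in Eq. (5) correspond to a fuzzifier of m = 2.

```python
import numpy as np

def fcm_memberships(x, centers, eps=1e-3, max_iter=100):
    """Fuzzy c-means memberships on a 1-D brightness vector x, mirroring
    Eqs. (4)-(6) with fuzzifier m = 2. Returns the N x M membership matrix."""
    x = np.asarray(x, dtype=float)[:, None]         # N x 1 pixel brightness values
    c = np.asarray(centers, dtype=float)[None, :]   # 1 x M class centers
    u = None
    for _ in range(max_iter):
        d = np.abs(x - c) + 1e-12                   # N x M distances to centers
        # Eq. (4): u_ij = 1 / sum_k ((x_i - c_j) / (x_i - c_k))^2
        u_new = 1.0 / ((d[:, :, None] / d[:, None, :]) ** 2).sum(axis=2)
        if u is not None and np.max(np.abs(u_new - u)) < eps:   # Eq. (6)
            u = u_new
            break
        u = u_new
        w = u ** 2                                  # Eq. (5): weights u_ij^2
        c = ((w * x).sum(axis=0) / w.sum(axis=0))[None, :]
    return u
```

Each row of the returned matrix sums to 1, so a pixel's memberships distribute its belonging across the segmentation regions.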

Fitness function based on the rough fuzzy set. After the membership functions of pixels to segmentation regions are calculated by FCM, we can regard the category of each segmentation region as a fuzzy set and combine it with the rough set to build a rough fuzzy model of OCT fundus image discretization. The membership function of the j-th category is

$$A_j(i) = u_{ij}, \tag{7}$$

where Aj is the fuzzy set corresponding to the j-th category, and uij is the membership of the i-th pixel to the j-th category. The lower and upper approximations of the pixel x in the rough fuzzy model established by R[B] and Aj are

$$R[B]_{*} A_j(x) = \inf_{y \in U} \left\{ A_j(y) \mid (x, y) \in R[B] \right\}, \tag{8}$$

$$R[B]^{*} A_j(x) = \sup_{y \in U} \left\{ A_j(y) \mid (x, y) \in R[B] \right\}, \tag{9}$$

where R[B] is the equivalence relation generated by the brightness attribute B. Correspondingly, the average approximate precision of the rough fuzzy sets of all categories is


$$\bar{\eta} = \frac{1}{M} \sum_{j=1}^{M} \frac{\mathrm{card}\left(R[B]_{*} A_j\right)}{\mathrm{card}\left(R[B]^{*} A_j\right)}, \tag{10}$$

where 0 ≤ η̄ ≤ 1, and card(·) is the function for calculating the cardinality of fuzzy sets34. The larger η̄, the higher the approximation precision. The optimal discretization scheme is the best trade-off between the average approximation precision and the number of breakpoints41. Assuming that NDS is the number of breakpoints of the discretization scheme DS and NCB is the number of candidate breakpoints, the fitness function is designed as follows:

$$\mathrm{Fitness} = \alpha \times \frac{N_{CB} - N_{DS}}{N_{CB}} + \beta \times \bar{\eta}, \tag{11}$$
where α and β are weight coefficients (α ≥ 0, β ≥ 0, α + β = 1). We perform genetic operations iteratively to search for the optimal breakpoint set, as shown in Algorithm 2. First, we create the membership functions of all categories, and establish the lower and upper approximations of the rough fuzzy set. Then, we design the fitness function based on the average approximate precision and the number of breakpoints. Finally, the individual with the highest fitness is written to a global variable in each genetic operation. When the precision required by the system is met or the number of iterations set by the user is exceeded, the program stops and outputs the optimal discretization scheme; otherwise, the genetic algorithm continues to execute until the termination conditions are met.
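A compact sketch of the fitness evaluation of Eqs. (8)–(11), assuming pixels are grouped into equivalence classes by the brightness intervals that the selected breakpoints induce (the names and the `np.digitize`-based interval labeling are our assumptions, not the paper's code):

```python
import numpy as np

def fitness(chromosome, x, candidates, memberships, alpha=0.5, beta=0.5):
    """Fitness of one discretization scheme (Eqs. 8-11). chromosome is a 0/1
    vector over candidate breakpoints, x holds the pixel brightness values,
    and memberships is the N x M matrix produced by FCM."""
    chromosome = np.asarray(chromosome)
    candidates = np.asarray(candidates, dtype=float)
    selected = candidates[np.flatnonzero(chromosome)]
    labels = np.digitize(x, selected)        # brightness interval = class of R[B]
    eta = 0.0
    M = memberships.shape[1]
    for j in range(M):
        A = memberships[:, j]
        lower = np.empty_like(A)
        upper = np.empty_like(A)
        for lab in np.unique(labels):
            idx = labels == lab
            lower[idx] = A[idx].min()        # Eq. (8): infimum over the class
            upper[idx] = A[idx].max()        # Eq. (9): supremum over the class
        eta += lower.sum() / upper.sum()     # card(.) = sum of memberships
    eta /= M                                 # Eq. (10): average approximate precision
    n_cb, n_ds = len(chromosome), int(chromosome.sum())
    return alpha * (n_cb - n_ds) / n_cb + beta * eta   # Eq. (11)
```

A scheme whose intervals separate the fuzzy categories cleanly drives the lower and upper approximations together, raising η̄ and hence the fitness.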
The establishment of the rough fuzzy model needs to go through fuzzy c-means clustering and the generation
of the equivalence relation. The time complexities of these two stages are O(N × M × (N + M) × t) and O(N 2 ),
where N is the number of pixels, M is the number of categories, and t is the number of iterations. Furthermore,
it is necessary to calculate the lower and upper approximations of all pixels to obtain the average approximate
precision of the rough fuzzy model. As the lower and upper approximations can be calculated simultaneously
in one traversal of all pixels, the time complexity of this process is O(N 2 ). In general, M is much smaller than N
and t. Therefore, the time complexity of the rough fuzzy model is about O(N 2 × t).

Overall framework of RFDDN. The OCT fundus image is input into the network after the rough-fuzzy-set-based feature discretization, and the network outputs the final segmentation result. The network of RFDDN adopts an encoder–decoder architecture with forward skip connections from each encoder stage to the corresponding decoder stage. In fusing the shallow and deep features of the network, we introduce a dual attention mechanism over the spatial region and the feature channel, enabling the deep network to adaptively select the relatively important information from the feature space for fusion. Furthermore, we use a staged attention refinement module to capture multi-scale contextual information through hybrid kernel convolution. The deep network processes 3D blocks with a size of 64 × 64 × 64. The backbone network uses forward skip links and the residual structure as the basic convolution module of the segmentation model; this structure with skip connections facilitates the propagation of gradient information. The input of each stage is passed through two 3 × 3 × 3 convolution layers, each followed by batch normalization (BN) and an activation unit (ReLU). We add a residual connection between the input and output of each convolution block. The number of feature maps in the encoder increases as the feature size is reduced, with a minimum of 16 and a maximum limited to 128. Then, the up-sampling size of


each feature map is calculated and input into the block. The convolution operation of the kernel is applied to each feature mapping, and we accordingly create 16 features in each feature mapping. The feature maps improve the gradient flow through the deep supervised module. At the same time, the output of the attention module generates feature maps by using the deep supervised module and concatenation. Finally, these feature maps are processed by BN and ReLU after passing through two convolution layers, thereby generating the probability maps of segmentation regions.
The flow of RFDDN is shown in Fig. 5. We take the feature discretization based on the rough fuzzy set as the
pre-module of RFDDN. For each input OCT fundus image, the module first creates an information decision
table of OCT fundus image segmentation. Then, fuzzy c-means clustering is used to calculate the membership
of each pixel to the category of each segmentation region. Finally, the individual fitness function based on the
rough fuzzy set is established according to membership functions and the equivalence relation generated by
the brightness attribute, and the best breakpoints are selected by a genetic algorithm to discretize the features
of the input OCT fundus image. The feature discretization of brightness values of all pixels of the input OCT
fundus image can not only remove the redundant information, but also weaken the negative impact of the noise.
RFDDN can perform well in complex images affected by noise, disturbance, and lack of clear boundaries. We
incorporate the weight information into the cross entropy to optimize the network and alleviate imbalances. The
cross entropy loss ( Lmax ) of the deep supervised branch is defined as follows:
$$L_{max} = -\sum_{i=1}^{k} w_i g_i \log p_i, \tag{12}$$

where k is the number of categories of the segmentation regions, gi is the gold standard of the i-th category, pi
is the prediction probability of the i-th category, and wi is the weight of the i-th category. The dice similarity coefficient (DSC) loss LDSC and the cross entropy loss LCE are

$$L_{DSC} = 1 - \frac{1}{k} \sum_{i=1}^{k} \frac{2 g_i p_i + \varepsilon}{g_i + p_i + \varepsilon}, \tag{13}$$

$$L_{CE} = -\sum_{i=1}^{k} g_i \log p_i, \tag{14}$$

where ε is a small constant to prevent division by zero. Synthesizing the above formulas, the loss function of RFDDN is

$$L_{total} = \kappa L_{DSC} + \gamma L_{CE} + \lambda L_{max}, \tag{15}$$

where κ, γ, and λ are the hyperparameters balancing each loss term. According to the importance of the different losses, we set them to 1, 1, and 0.5, respectively.
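The combined loss of Eqs. (12)–(15) can be sketched per prediction vector as follows. This is an illustrative NumPy version, not the training code; the paper applies these terms over full probability maps.

```python
import numpy as np

def total_loss(p, g, w, kappa=1.0, gamma=1.0, lam=0.5, eps=1e-6):
    """Combined loss of Eq. (15): p and g are k-vectors of predicted
    probabilities and the one-hot gold standard; w holds the per-class
    weights of the deep supervised branch."""
    p = np.clip(np.asarray(p, dtype=float), eps, 1.0)   # guard log(0)
    g = np.asarray(g, dtype=float)
    w = np.asarray(w, dtype=float)
    l_max = -np.sum(w * g * np.log(p))                          # Eq. (12)
    l_dsc = 1.0 - np.mean((2.0 * g * p + eps) / (g + p + eps))  # Eq. (13)
    l_ce = -np.sum(g * np.log(p))                               # Eq. (14)
    return kappa * l_dsc + gamma * l_ce + lam * l_max           # Eq. (15)
```

A confident correct prediction yields a much smaller total loss than a confident wrong one, which is the gradient signal the network trains on.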

Results and discussion
We introduce the data source and the experimental environment configuration. Then, we compare RFDDN with the state-of-the-art segmentation algorithms and perform ablation experiments. Finally, we analyze and discuss the experimental results.

Data source and experimental environment. We use two groups of 3D retinal OCT data with the gold standard for the experiments. The gold standard is annotated by professional physicians. The resolution of each 3D data is 1024 × 512 × 128: each 3D data consists of 128 2D slices, and the resolution of each 2D slice is 1024 × 512. The first group includes 43 3D retinal OCT data with the gold standard. This dataset contains 5504 2D slices, of which 2816 are training data, 1408 are validation data, and the rest are test data. The second
group includes 40 3D retinal OCT data with the gold standard. This dataset contains 5120 2D slices, of which 2560 are training data, 1280 are validation data, and the rest are test data. There are four categories of regions in OCT fundus images, namely SRF, PED, retinal edema area (REA), and background. The area of SRF accounts for about 0.7% of the total area, the area of PED accounts for about 0.03%, the area of REA accounts for about 61%, and the rest is background. The abbreviations that appear in this section are listed in Table 1.

Figure 5.  Flow of RFDDN.
The hardware environment for the experiments was a server with an Intel Xeon CPU, 16 GB memory, and an NVIDIA Tesla V100 PCIe GPU (11 GB video memory). The visualization, programming, simulation, testing, and numerical calculation of the experiments were implemented in Python 3.8.

Evaluation of segmentation performance. We compare RFDDN with five state-of-the-art OCT image segmentation algorithms, namely U-Net16, ReLayNet20, CE-Net25, MultiResUNet26, and ISCLNet8. We use DSC, the 95th-percentile Hausdorff distance (HD95), the average symmetric surface distance (ASD), sensitivity, and specificity as evaluation indicators. The calculation formula of DSC is as follows:

$$DSC = \frac{2 \times |P \cap T|}{|P| + |T|}, \tag{16}$$

where P represents the predicted segmentation result, T represents the real segmentation result, |·| is the cardinality of a set, and 0 ≤ DSC ≤ 1. The larger the DSC, the better the segmentation effect. The calculation formula
of HD95 is as follows:
$$HD95 = 95\% \times \max\{d_{XY}, d_{YX}\}, \tag{17}$$

$$d_{XY} = \max_{x \in X} \min_{y \in Y} \|x - y\|, \tag{18}$$

$$d_{YX} = \max_{y \in Y} \min_{x \in X} \|y - x\|, \tag{19}$$

where X and Y respectively represent the point sets of the real and predicted segmentation results, and ‖·‖ is a distance function between two points. The smaller the HD95, the better the segmentation effect. The calculation formula of ASD is as follows:

$$ASD = \frac{\sum_{x \in X} \min_{y \in Y} \|x - y\| + \sum_{y \in Y} \min_{x \in X} \|y - x\|}{|X| + |Y|}. \tag{20}$$
The smaller the ASD, the better the segmentation effect. We use a mini-batch stochastic gradient descent optimizer with momentum to update the network parameters, and set the momentum to 0.9. The gradient threshold is 0.005. The L2 regularization parameter is 1e-4. The number of epochs is 20, and the number of iterations is 3k. The batch size is 16. The initial learning rate is 1e-3, and we reduce the learning rate to 0.1 times the current value after 1k and 2.5k iterations, respectively. The segmentation results of the six methods on the first group of data are shown in Table 2.
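For reference, DSC (Eq. 16) and ASD (Eq. 20) can be computed on small masks and point sets as follows (an illustrative sketch; surface extraction from 3D masks is omitted):

```python
import numpy as np

def dsc(P, T):
    """Dice similarity coefficient (Eq. 16) on boolean masks."""
    P, T = np.asarray(P, dtype=bool), np.asarray(T, dtype=bool)
    return 2.0 * np.logical_and(P, T).sum() / (P.sum() + T.sum())

def asd(X, Y):
    """Average symmetric surface distance (Eq. 20) between two point sets
    given as n x d coordinate arrays."""
    X, Y = np.asarray(X, dtype=float), np.asarray(Y, dtype=float)
    d = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=2)  # pairwise distances
    return (d.min(axis=1).sum() + d.min(axis=0).sum()) / (len(X) + len(Y))
```

Identical masks give a DSC of 1 and identical surfaces an ASD of 0, matching the stated directions of both indicators.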
RFDDN is superior to the other five methods on all evaluation indicators. The DSC, HD95, ASD, sensitivity, and specificity of RFDDN are 0.97, 0.63, 0.13, 0.99, and 0.92, respectively. ISCLNet obtains the second-best results. The DSC, sensitivity, and specificity of RFDDN are 3.2%, 3.1%, and 5.7% higher than those of ISCLNet, respectively, while its HD95 and ASD are 7.4% and 23.5% lower. The segmentation results of RFDDN before and after feature discretization on the first group of data are shown in Table 3.

Abbreviation | Notes
OCT | Optical coherence tomography
SRF | Subretinal fluid
PED | Pigment epithelial detachment
REA | Retinal edema area
DSC | Dice similarity coefficient
HD95 | 95th-percentile Hausdorff distance
ASD | Average symmetric surface distance
DS | The model with only the deep supervised mechanism
DAB | The model with only the double attention block
ARB | The model with only the attention refinement block

Table 1.  Abbreviation notes.


Method | DSC | HD95 | ASD | Sensitivity | Specificity
U-Net | 0.89 | 0.88 | 0.36 | 0.91 | 0.77
ReLayNet | 0.87 | 0.89 | 0.38 | 0.89 | 0.74
CE-Net | 0.92 | 0.73 | 0.22 | 0.94 | 0.83
MultiResUNet | 0.91 | 0.74 | 0.25 | 0.93 | 0.82
ISCLNet | 0.94 | 0.68 | 0.17 | 0.96 | 0.87
RFDDN | 0.97 | 0.63 | 0.13 | 0.99 | 0.92

Table 2.  Segmentation results of the six methods on the first group of data. The values obtained by our method are in bold.

Method | DSC | HD95 | ASD | Sensitivity | Specificity
UnFD-Neta | 0.95 | 0.65 | 0.15 | 0.97 | 0.89
RFDDN | 0.97 | 0.63 | 0.13 | 0.99 | 0.92

Table 3.  Segmentation results of RFDDN before and after feature discretization on the first group of data. aUnFD-Net is a model that has the same network structure as RFDDN but lacks the feature discretization module. The values obtained by our method are in bold.

The DSC, sensitivity, and specificity of RFDDN are 2.1%, 2.1%, and 3.4% higher than those of UnFD-Net, respectively, while its HD95 and ASD are 3.1% and 13.3% lower. The segmentation results of the six methods on the second group of data are shown in Table 4.
RFDDN is superior to the other five methods on all evaluation indicators. The DSC, HD95, ASD, sensitivity, and specificity of RFDDN are 0.95, 0.65, 0.16, 0.97, and 0.89, respectively. ISCLNet obtains the second-best results. The DSC, sensitivity, and specificity of RFDDN are 3.3%, 2.1%, and 8.5% higher than those of ISCLNet, respectively, while its HD95 and ASD are 5.8% and 15.8% lower. The segmentation results of RFDDN before and after feature discretization on the second group of data are shown in Table 5.
The DSC, sensitivity, and specificity of RFDDN are 2.2%, 1%, and 6% higher than those of UnFD-Net, respectively, while its HD95 and ASD are 3% and 11.1% lower. The results show that the rough-fuzzy-set-based feature discretization can improve the segmentation precision of the deep neural network. The rough-fuzzy-set-based discretization results on the two groups of 3D retinal OCT data are shown in Fig. 6.
The number of breakpoints and the data inconsistency are the two major evaluation indicators of rough-fuzzy-set-based discretization. The smaller the number of breakpoints and the data inconsistency, the better the discretization effect. Each 3D retinal OCT data consists of 128 2D slices. The brightness value of each 2D slice is 8 bits (ranging

Method DSC HD95 ASD Sensitivity Specificity


U-Net 0.86 0.89 0.37 0.88 0.72
ReLayNet 0.85 0.89 0.38 0.88 0.7
CE-Net 0.88 0.76 0.28 0.91 0.75
MultiResUNet 0.87 0.77 0.29 0.9 0.73
ISCLNet 0.92 0.69 0.19 0.95 0.82
RFDDN 0.95 0.65 0.16 0.97 0.89

Table 4.  Segmentation results of the six methods on the second group of data. The values obtained by our
method are in bold.

Method DSC HD95 ASD Sensitivity Specificity


UnFD-Net 0.93 0.67 0.18 0.96 0.84
RFDDN 0.95 0.65 0.16 0.97 0.89

Table 5.  Segmentation results of RFDDN before and after feature discretization on the second group of data.
The values obtained by our method are in bold.

Scientific Reports | (2023) 13:328 | https://doi.org/10.1038/s41598-023-27479-6 9

Vol.:(0123456789)
www.nature.com/scientificreports/

Figure 6.  Rough fuzzy set based discretization results on the two groups of 3D retinal OCT data.

from 0 to 255), and the number of breakpoints is 256. Thus, the initial number of breakpoints for each 3D retinal
OCT data is 32768. After feature discretization based on the rough fuzzy set, the average number of breakpoints
in the first group is 8826, which is reduced by 73.1%, and the average number of breakpoints in the second group
is 7520, which is reduced by 77.1%. The overall data scale decrease by 75.1%. The data inconsistency of both
groups is 0. Therefore, the computational efficiency of the model is improved. The significance of DSC on the
two groups of 3D retinal OCT data is shown in Fig. 7.
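The breakpoint-reduction percentages above follow from simple arithmetic; a short script using the counts reported in the text reproduces them:

```python
# Each 3D volume has 128 slices, each with 256 candidate brightness breakpoints.
slices, levels = 128, 256
initial = slices * levels  # 32768 initial breakpoints per volume

# Average breakpoint counts remaining after rough fuzzy set based discretization.
for group, kept in [("group 1", 8826), ("group 2", 7520)]:
    reduction = 1 - kept / initial
    print(f"{group}: {kept} breakpoints kept, reduced by {reduction:.1%}")
```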
We use one-way ANOVA to analyze the significance of DSC among the six methods on the two groups of 3D retinal OCT data, with a significance level threshold of 0.05. In the box plot, the lines represent the median and the 25th and 75th percentiles, and **** indicates P < 0.0001. The P value is less than 0.05, indicating a statistically significant difference in DSC among the six methods on the two groups of 3D retinal OCT data.
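A one-way ANOVA of this kind can be run with SciPy. The per-volume DSC samples below are illustrative stand-ins, not the study's actual measurements:

```python
from scipy.stats import f_oneway

# Hypothetical per-volume DSC samples for two methods (illustrative only).
unet_dsc  = [0.85, 0.86, 0.87, 0.86, 0.85]
rfddn_dsc = [0.96, 0.97, 0.97, 0.96, 0.98]

# f_oneway returns the F statistic and the P value of the one-way ANOVA.
stat, p = f_oneway(unet_dsc, rfddn_dsc)
print(f"F = {stat:.2f}, p = {p:.2e}")  # p < 0.05 -> significant difference
```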

Ablation experiment. The initial learning rate is an important hyperparameter of RFDDN. The segmentation results of RFDDN under different initial learning rates are shown in Table 6.

Figure 7.  Significance of DSC on the two groups of 3D retinal OCT data.

Initial learning rate  DSC     HD95    ASD
0.1                    0.9632  0.6395  0.1382
0.01                   0.9637  0.6388  0.1369
0.001                  0.9682  0.6327  0.1347
0.0001                 0.9618  0.6406  0.1397

Table 6.  Segmentation results of RFDDN under different initial learning rates. The values obtained by the optimal initial learning rate are in bold.
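The pattern in Table 6 — degradation when the initial learning rate is too large or too small — can be illustrated with gradient descent on a toy one-dimensional quadratic loss. This is a generic sketch, unrelated to RFDDN's actual optimizer:

```python
# Toy loss f(w) = w**2, whose gradient is 2w; the minimum is at w = 0.
def final_weight(lr, steps=50, w=1.0):
    for _ in range(steps):
        w -= lr * 2 * w  # plain gradient descent update
    return w

# Too large a rate oscillates and diverges; too small a rate converges slowly.
for lr in (1.1, 0.4, 0.001):
    print(f"lr={lr}: final w = {final_weight(lr):.4g}")
```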


An initial learning rate that is too large will cause the gradient to oscillate around the minimum, while one that is too small will result in slow convergence. RFDDN achieves the best evaluation indicator values when the initial learning rate is 0.001. The evaluation indicator values of RFDDN under different learning rates differ little, so RFDDN is not sensitive to the initial learning rate. We then conduct the ablation experiment with four different models: (a) the baseline with only the encoder and decoder; (b) the model with only the deep supervised mechanism (DS); (c) the model with only the double attention block (DAB); (d) the model with only the attention refinement block (ARB). These models have the same pre-trained weights as RFDDN. The segmentation results of RFDDN and the four models are shown in Table 7.
The baseline has the worst DSC, HD95, and ASD at 0.78, 1.31, and 0.5, respectively. Although DS can alleviate the negative impact of data imbalance, the performance of the model with only DS is still unsatisfactory: the obtained DSC, HD95, and ASD are 0.81, 1.13, and 0.36, respectively. The introduction of the attention mechanism enables both the model with only DAB and the model with only ARB to obtain better segmentation results than the baseline and the model with only DS. DSC of RFDDN is 7.8% better than that of the model with only ARB, and HD95 and ASD of RFDDN are 17.1% and 53.6% lower, respectively. RFDDN has a strong ability to capture contextual information and therefore obtains the best segmentation results. The visual segmentation results obtained from the ablation experiment are shown in Fig. 8.
The baseline has the worst segmentation effect. Owing to its limited ability to capture information, the model with only DS has poor segmentation ability and is prone to false segmentation. Although the segmentation performance of the model with only DAB and the model with only ARB is improved, they still struggle to segment some regions with blurred boundaries. The baseline, DS, DAB, and ARB models produce artifacts in the areas surrounded by green lines, and fail to clearly display some detailed characteristics of the slices in the areas surrounded by red lines. Compared with these four models, RFDDN generates smaller fuzzy regions and clearly has the best segmentation effect.

Method    DSC   HD95  ASD
Baseline  0.78  1.31  0.50
DS        0.81  1.13  0.36
DAB       0.82  1.06  0.32
ARB       0.90  0.76  0.28
RFDDN     0.97  0.63  0.13

Table 7.  Results of the ablation experiment. The values obtained by our method are in bold.

Figure 8.  Visual segmentation results of different models.

Discussion. U-Net combines low-level detail information with high-level semantic information by concatenating feature maps from different levels to improve segmentation accuracy. However, U-Net is prone to overfitting during training because of its shallower layers and fewer parameters. Furthermore, the consecutive pooling and strided convolutional operations lead to the loss of some spatial information. ReLayNet uses a contracting path of encoders to learn a hierarchy of contextual features, followed by an expansive path of decoders for semantic segmentation. However, the convolutional blocks employed by these encoders and decoders have limited ability to capture important features. CE-Net adopts the pre-trained ResNet block in the feature encoder and integrates the dense atrous convolution block and the residual multi-kernel pooling into the ResNet-modified U-Net structure to capture more high-level features and preserve more spatial information. Although CE-Net achieves better segmentation accuracy than U-Net, it still faces the problem that the adopted convolutional block has limited ability to capture important features. MultiResUNet uses Res paths to reconcile the two incompatible sets of features from the encoder and the decoder, and designs MultiRes blocks to augment U-Net with the ability of multi-resolutional analysis. However, it still suffers from the loss of spatial information caused by the consecutive pooling and strided convolutional operations. ISCLNet learns the intra-slice fluid-background similarity and the fluid-retinal layers dissimilarity within an OCT slice, and builds an inter-slice contrastive learning architecture to learn the similarity among adjacent OCT slices. However, it relies on complete OCT volumes that may be difficult to access in the clinic. In addition, none of the above methods has a special mechanism for dealing with noise and uncertain information. RFDDN introduces a deep supervised attention mechanism into the network, and greatly eliminates the negative impact of redundant information and noise through feature discretization based on the rough fuzzy set. Therefore, RFDDN can achieve higher segmentation accuracy.
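The trade-off discussed above — atrous (dilated) convolution enlarging the receptive field without the pooling that discards spatial information — can be illustrated with a short receptive-field calculation. Stride-1 stacks are assumed; this is a generic sketch, not CE-Net's exact configuration:

```python
# Receptive field of stacked convolutions at stride 1:
# each layer widens the field by (kernel_size - 1) * dilation.
def receptive_field(layers):
    """layers: list of (kernel_size, dilation) pairs, all at stride 1."""
    rf = 1
    for k, d in layers:
        rf += (k - 1) * d
    return rf

plain  = receptive_field([(3, 1)] * 3)              # three plain 3x3 convs
atrous = receptive_field([(3, 1), (3, 2), (3, 4)])  # dilations 1, 2, 4
print(plain, atrous)  # the dilated stack sees a much wider context
```

The dilated stack covers more than twice the context of the plain stack with the same parameter count and no downsampling.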

Conclusion
Deep learning technology has achieved promising results in optical coherence tomography (OCT) fundus image segmentation. However, the noise and redundant information in the images are the main reasons for the performance bottleneck of the deep network. In addition, the deep network lacks robustness and interpretability, and has difficulty dealing with uncertain information. To this end, we have proposed a deep network embedded with rough fuzzy discretization (RFDDN) for OCT fundus image segmentation. Our contributions are as follows: (1) we establish the information decision table of OCT fundus image segmentation, and calculate the membership degrees of pixels to each segmentation region using the fuzzy c-means clustering to achieve the fuzzification of pixel categories; (2) we design the individual fitness function based on the rough fuzzy set, and use a genetic algorithm to search for the best breakpoints to discretize the features of OCT fundus images, reducing the uncertainty caused by noise and redundant information; (3) we take the feature discretization based on the rough fuzzy set as the pre-module of the deep neural network, and introduce the deep supervised attention mechanism to obtain the important multi-scale information, thereby improving the segmentation accuracy of OCT fundus images. We compare RFDDN with U-Net, ReLayNet, CE-Net, MultiResUNet, and ISCLNet on two groups of 3D retinal OCT data. RFDDN is superior to the other five methods on all evaluation indicators, and the results obtained by ISCLNet are second best, inferior only to those obtained by RFDDN. DSC, sensitivity, and specificity of RFDDN are on average 3.3%, 2.6%, and 7.1% higher than those of ISCLNet, respectively, while HD95 and ASD are on average 6.6% and 19.7% lower. Comparing the results before and after feature discretization, DSC, sensitivity, and specificity of RFDDN are on average 2.2%, 1.6%, and 4.7% higher than those of UnFD-Net, respectively, and HD95 and ASD are on average 3.1% and 12.2% lower. Furthermore, we analyze the hyperparameters of the network and conduct the ablation experiment with four different models. The experimental results show that RFDDN can effectively eliminate the noise and redundant information in OCT fundus images, and greatly improve the accuracy of OCT fundus image segmentation while taking into account interpretability and computational efficiency.
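The membership computation in contribution (1) follows the standard fuzzy c-means update. Below is a minimal NumPy sketch with hypothetical one-dimensional brightness centers; the real method clusters full OCT feature data, so this only illustrates the membership formula:

```python
import numpy as np

def fcm_memberships(x, centers, m=2.0, eps=1e-12):
    """Fuzzy c-means membership degrees of 1-D samples x to fixed centers:
    u[i, k] = 1 / sum_j (d_ik / d_ij) ** (2 / (m - 1))."""
    d = np.abs(x[:, None] - centers[None, :]) + eps  # distances, shape (n, c)
    power = 2.0 / (m - 1.0)
    # ratio[i, k, j] = d_ik / d_ij; summing over j gives the denominator.
    u = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** power, axis=2)
    return u

pixels  = np.array([10.0, 100.0, 200.0])  # toy brightness values
centers = np.array([0.0, 255.0])          # two hypothetical region centers
u = fcm_memberships(pixels, centers)
print(u)  # each row sums to 1; darker pixels lean toward the dark center
```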
The future research work includes: (1) applying this method to other medical image datasets to test and improve the adaptability of the model; (2) comparing the proposed method with state-of-the-art feature discretization algorithms to optimize the feature discretization module, thus improving the generalization ability of the network.

Data availability
The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.

Received: 10 October 2022; Accepted: 3 January 2023


Acknowledgements
This work was supported in part by the China Postdoctoral Science Foundation under Grant 2021M701838, in
part by the National Natural Science Foundation of China under Grant 62272109, in part by the Guangdong
University Student Science and Technology Innovation Cultivation Special Fund Support Project under Grant
pdjh2023a0243, in part by the Undergraduate Innovation Team Project of Guangdong Ocean University under
Grant CXTD2021019, and in part by the Zhanjiang Non-funded Science and Technology Research Program
under Grant 2022B01079.

Author contributions
Q.C. contributed to method design, experimental analysis, and manuscript writing. L.Z. and C.L. contributed to
the visualization. Q.C. and C.L. were responsible for data provision and funding acquisition. All authors finalized
the manuscript after its internal evaluation.

Competing interests
The authors declare no competing interests.

Additional information
Correspondence and requests for materials should be addressed to Q.C. or C.L.
Reprints and permissions information is available at www.nature.com/reprints.
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and
institutional affiliations.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International
License, which permits use, sharing, adaptation, distribution and reproduction in any medium or
format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the
Creative Commons licence, and indicate if changes were made. The images or other third party material in this
article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the
material. If material is not included in the article’s Creative Commons licence and your intended use is not
permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from
the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

© The Author(s) 2023

