Machine Learning Approaches in Brillouin Distributed Fiber Optic Sensors
Bundesanstalt für Materialforschung und -prüfung, Unter den Eichen 87, 12205 Berlin, Germany;
katerina.krebber@bam.de
* Correspondence: christos.karapanagiotis@bam.de
Abstract: This paper presents reported machine learning approaches in the field of Brillouin distributed fiber optic sensors (DFOSs). The increasing popularity of Brillouin DFOSs stems from their capability to continuously monitor temperature and strain along kilometer-long optical fibers, rendering them attractive for industrial applications, such as the structural health monitoring of large civil infrastructures and pipelines. In recent years, machine learning has been integrated into Brillouin DFOS signal processing, resulting in fast and enhanced temperature, strain, and humidity measurements without increasing the system's cost. Machine learning has also contributed to enhanced spatial resolution in Brillouin optical time domain analysis (BOTDA) systems and shorter measurement times in Brillouin optical frequency domain analysis (BOFDA) systems. This paper provides an overview of the applied machine learning methodologies in Brillouin DFOSs, as well as future perspectives in this area.
Keywords: distributed fiber optic sensors; Brillouin scattering; BOTDA; BOFDA; machine learning;
artificial neural networks; structural health monitoring; strain and temperature measurements
Citation: Karapanagiotis, C.; Krebber, K. Machine Learning Approaches in Brillouin Distributed Fiber Optic Sensors. Sensors 2023, 23, 6187. https://doi.org/10.3390/s23136187
Academic Editors: Vittorio Passaro, Yuliya Semenova and Nikolay Kazanskiy
Received: 1 June 2023; Revised: 29 June 2023; Accepted: 4 July 2023; Published: 6 July 2023
Copyright: © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

1. Introduction

Over the last few years, machine learning has revealed untapped potential for advanced signal processing and provided new avenues for innovation and progress in the field of distributed fiber optic sensors (DFOSs). DFOSs enable continuous measurements along the entire length of an optical fiber, which can be up to hundreds of kilometers long. This has already made DFOSs attractive for a wide range of applications, including the structural health monitoring of civil and geotechnical structures [1–5], pipeline and borehole monitoring for leak detection [6], seismic activity monitoring [7–10], and even the condition monitoring of high-voltage submarine cables [11] and deep earth dynamics in oceans [12]. Even though the most common measurands are temperature and strain, DFOSs can directly or indirectly measure humidity [13–15], pressure [16], displacement [4,17], radiation [18–20], gas concentration [21,22], etc.

DFOSs are primarily categorized based on the scattering mechanism, which can be Rayleigh, Brillouin or Raman [23]. Rayleigh-based DFOSs rely on the detection of the backscattered light generated by the interaction between the light and the fiber's inherent refractive index fluctuations. This technique provides the strongest signal and is ideal for dynamic sensing applications, such as distributed acoustic sensing (DAS). Rayleigh-based DFOSs do not require signal averaging and can provide real-time monitoring. For the sake of completeness, we mention that many Rayleigh-based DFOSs operating either in the time or frequency domain have been developed and proposed. Similar to Brillouin-based DFOSs, these sensors can be used for temperature and strain monitoring [24–27]. Brillouin-based DFOSs rely on the detection of the Brillouin scattering generated by the interaction between the light and the acoustic waves propagating along the fiber. This technique is highly sensitive and provides accurate measurements of temperature and strain. However,
2. Brillouin Distributed Fiber Optic Sensors (DFOSs)

In this section, we describe the best-known types of Brillouin DFOS systems and the conventional signal processing for temperature or strain extraction. Rayleigh scattering is elastic and arises from the non-propagating density fluctuations of the medium. Because this scattering effect is the strongest, no signal averaging is needed, and thus, Rayleigh DFOSs are widely used for vibration monitoring. Brillouin and Raman scattering effects are inelastic and originate from the interaction of the propagating light with the acoustic and optical phonons, respectively. Furthermore, the frequency downshifted and upshifted components resulting from these interactions are called "Stokes" and "anti-Stokes", respectively. Raman DFOSs are mainly used for temperature sensing, while Brillouin DFOSs provide temperature and strain information. We note that in Brillouin DFOSs, the temperature and strain information is related to the frequency difference between the incident and the scattered Stokes or anti-Stokes light. This frequency difference is called the Brillouin frequency shift (BFS). A schematic representation of the scattering effects is shown in Figure 1.
Figure 1. Schematic representation of the Rayleigh, Brillouin and Raman scattering effects in optical fibers providing a rough estimation of the backscattered intensity or frequency changes with temperature or strain. Copyright 2021 licensed under a Creative Commons Attribution 4.0 International license [84].
Figure 2. Two most common types of Brillouin-distributed fiber optic sensors based on the time (a) and frequency (b) domain. BOTDA: Brillouin optical time domain analysis; BOFDA: Brillouin optical frequency domain analysis; EOM: electro-optic modulator; RF: radio frequency; PG: pulse generator; PD: photodiode.
Even though the data acquisition process differs from system to system, the signal processing for temperature and strain extraction from the so-called Brillouin gain spectrum (BGS) is similar. The most conventional feature is the Brillouin frequency shift (BFS), which is extracted by performing Lorentzian curve fitting (LCF) on the BGS data. We note that, apart from Lorentzian curves, Gaussian or pseudo-Voigt curves have also been employed and in some cases delivered a more accurate BFS [88,89]. Furthermore, BFS extraction based on cross-correlation is also common in the literature [90,91]. The BFS depends linearly on temperature and strain, and thus, the temperature or strain change
can be estimated, provided that the temperature and strain coefficients are known. These coefficients are unique for every fiber, and unless they are provided by the manufacturer,
their estimation requires a preliminary analysis of the BFS under different temperature and
strain conditions.
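As a hedged illustration of the LCF step, the sketch below fits a Lorentzian to a synthetic, noise-free BGS by grid-searching only the center frequency; a full LCF would also fit the peak gain and linewidth, typically with a nonlinear least-squares routine. All numerical values (scanning range, linewidth, gain) are illustrative placeholders, not taken from any cited system.

```python
def lorentzian(f, g0, bfs, fwhm):
    """Lorentzian gain profile: g(f) = g0 / (1 + (2*(f - bfs)/fwhm)**2)."""
    return g0 / (1.0 + (2.0 * (f - bfs) / fwhm) ** 2)

def estimate_bfs(freqs, gains, g0, fwhm, step=0.0001):
    """Grid-search the center frequency minimizing the squared fit error."""
    n_steps = int(round((freqs[-1] - freqs[0]) / step))
    candidates = [freqs[0] + i * step for i in range(n_steps + 1)]
    def sse(center):
        return sum((g - lorentzian(f, g0, center, fwhm)) ** 2
                   for f, g in zip(freqs, gains))
    return min(candidates, key=sse)

# Synthetic BGS: 100 scanning points (GHz) around a true BFS of 10.850 GHz
true_bfs, g0, fwhm = 10.850, 1.0, 0.030
freqs = [10.750 + i * 0.002 for i in range(100)]
gains = [lorentzian(f, g0, true_bfs, fwhm) for f in freqs]

bfs_estimate = estimate_bfs(freqs, gains, g0, fwhm)  # close to 10.850
```

In practice the BGS is noisy and the fit is repeated for every fiber position, which is exactly why the low-SNR, many-position case motivates the machine learning alternatives discussed in Section 3.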
Simultaneous measurements of temperature and strain are not trivial due to the cross-
sensitivity effects. This means that changes in one parameter can be measured as long as the
other one is constant. This problem has been addressed by using two optical fibers, placed
in parallel and close to each other with the one being mechanically isolated [74]. However,
the two-fiber configuration is impractical for many applications. Temperature and strain
discrimination has been demonstrated using hybrid systems employing more than one
scattering effect or specialty fibers [75–78]. Some specialty fibers, such as large effective area
fibers (LEAFs) [79–81], photonic crystal fibers [82], and dispersion compensating fibers [83]
offer a multipeak BGS with at least two Brillouin peaks, with different temperature and
strain sensitivities. In that case, one extracts simultaneously the temperature (T) and strain
(ε) by solving a system of equations, as follows:
BFS_peak1 = C_T^peak1 · T + C_ε^peak1 · ε    (1)

BFS_peak2 = C_T^peak2 · T + C_ε^peak2 · ε    (2)

where C_T and C_ε are the temperature and strain coefficients, respectively.
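Equations (1) and (2) form a 2×2 linear system in T and ε; a minimal sketch of the discrimination step, solved here by Cramer's rule, is shown below. The coefficient and BFS values are purely illustrative placeholders, not measured values for any specific fiber.

```python
def discriminate(dbfs1, dbfs2, ct1, ce1, ct2, ce2):
    """Solve BFS_i = CT_i * T + Ce_i * eps for (T, eps) via Cramer's rule."""
    det = ct1 * ce2 - ce1 * ct2
    if abs(det) < 1e-12:
        raise ValueError("peaks have proportional coefficients; system is singular")
    T = (dbfs1 * ce2 - ce1 * dbfs2) / det
    eps = (ct1 * dbfs2 - dbfs1 * ct2) / det
    return T, eps

# Illustrative BFS changes (MHz) and coefficients (MHz/°C and MHz/µε) of two peaks
T, eps = discriminate(dbfs1=60.0, dbfs2=55.0,
                      ct1=1.0, ce1=0.05, ct2=1.2, ce2=0.04)
# For these numbers: T = 17.5 °C, eps = 850 µε
```

The singularity check mirrors the physical requirement stated above: discrimination only works if the two peaks have sufficiently different temperature and strain sensitivities.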
3.1. Machine Learning for Feature Extraction from the Brillouin Gain Spectrum
Many types of machine learning algorithms have been proposed to extract the BFS. The LCF can be cumbersome, especially in cases with low SNR, which, in turn, results in slow and inaccurate temperature or strain estimations. Machine learning was applied to overcome these limitations and provide a more efficient way to extract the BFS, leading to more accurate and faster temperature or strain measurements. To this end, many algorithms have been explored, including artificial neural networks (ANNs), convolutional neural networks (CNNs), support vector machines (SVMs), k-nearest neighbors (KNNs), etc.
Figure 3 shows a schematic of the ANN methodology for BFS extraction reported
by Liang et al. [92]. Instead of performing LCF on the data points of the BGS, those data
points were given as inputs to an ANN. The proposed ANN consisted of two hidden layers.
The hidden layers of the ANNs consist of nodes that are nothing more than activation
functions applied to the weighted sums of the outputs of all the nodes of the previous
layer. The ANN training aims at optimizing the weights so that the error of the output is
minimized. The optimization algorithm is based on backpropagation [93]. Liang et al. [92]
trained an ANN and evaluated its performance using synthetic and experimental data,
respectively. To increase the model’s robustness, the training dataset included different
frequency ranges, linewidths and noise levels. The authors note that both the inputs and
the outputs were normalized before training. The normalization of the input and the output
facilitates the model generalization based on the BGS with different gains and different
scanning frequencies, respectively.
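The node computation described above (an activation applied to the weighted sum of the previous layer's outputs) can be sketched as a plain forward pass. The layer sizes and the random, untrained weights below are illustrative only; they do not reproduce the architecture or trained weights of [92].

```python
# Forward pass of a small two-hidden-layer ANN mapping a normalized BGS to a
# normalized BFS. Weights are random placeholders (an untrained network).
import random

def dense_layer(inputs, weights, biases, act):
    """One fully connected layer: act(W @ inputs + b), computed row by row."""
    return [act(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

def relu(x):
    return x if x > 0.0 else 0.0

random.seed(0)
def rand_matrix(rows, cols):
    return [[random.uniform(-0.1, 0.1) for _ in range(cols)] for _ in range(rows)]

n_in, n_h1, n_h2 = 100, 16, 8           # 100 BGS samples in, one BFS value out
w1, b1 = rand_matrix(n_h1, n_in), [0.0] * n_h1
w2, b2 = rand_matrix(n_h2, n_h1), [0.0] * n_h2
w3, b3 = rand_matrix(1, n_h2), [0.0]

bgs = [0.5] * n_in                       # stand-in for a normalized gain spectrum
h1 = dense_layer(bgs, w1, b1, relu)
h2 = dense_layer(h1, w2, b2, relu)
bfs_norm = dense_layer(h2, w3, b3, lambda x: x)[0]  # normalized BFS prediction
```

Training then consists of adjusting w1, w2, w3 (and the biases) by backpropagation so that the predicted BFS matches the known BFS of the training spectra.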
The optimization of the hyperparameters (number of hidden layers, number of nodes,
type of activation function, etc.) is of great importance in all machine learning models.
Liang et al. [92] used a validation dataset to optimize the ANN during training and applied
early stopping to avoid overfitting. For the sake of completeness, we mention that overfit-
ting refers to the model’s failure to generalize based on new data [94], while early stopping
stops the training procedure when the model’s performance based on the validation dataset
starts degrading [95]. We note that the complexity of the ANN architectures is strongly
related to the prediction times. Therefore, the relatively simple architecture proposed
by Liang et al. [92] proved to be very fast. Specifically, the final ANN model required
approximately only 1.2 s to process 100,000 BGSs. Even though ANNs can deliver fast predictions, their training is usually time-consuming. In this case, the reported training
time was approximately three hours. Furthermore, the authors tested the final ANN model
based on real experimentally obtained data using a BOTDA system. The BFS errors were
found to be very close to the LCF errors.
Figure 3. Brillouin frequency shift (BFS) extraction using an artificial neural network (ANN). w: weight; Σw: weighted sum; g: activation function.
The described training and model's evaluation procedure is shown in Figure 4. This training pipeline is very common in machine learning and has been used in the majority of the papers that are discussed here. The train and validation data usually consist of synthetic data, while the test data result from lab or field experiments. Before training and testing, all data are normalized. During the training process, the algorithm undergoes multiple iterations (epochs) based on the training dataset. After each epoch, the model's performance is evaluated by assessing its ability to generalize based on the validation dataset. This training procedure is repeated many times with different hyperparameter settings. This hyperparameter tuning process is a common practice in machine learning, as it helps to find the most effective settings for the algorithm. The final model is selected based on the performance using the validation dataset. Finally, to assess the overall effectiveness of the trained model, it is evaluated using a separate and independent dataset called the test data. This step provides an unbiased measure of the model's performance based on unseen data, confirming its generalization capabilities.

We note that apart from the described training pipeline, methods based on cross-validation are also used, especially when the datasets are limited. Specifically, cross-validation is based on data resampling and repeatedly splits the dataset into train and validation sets. This technique has been widely applied in machine learning, providing an unbiased estimation of the model's performance [96].

In a more recent paper, Liang et al. [97] improved the ANN model to deal with a distorted BGS, caused by nonlocal effects. BGSs with nonlocal effects were simulated to acquire a new training dataset. The new ANN model resulted in significantly reduced BFS errors, although the network's architecture changed only slightly (minus 10 and 5 nodes in the first and second layer, respectively). In comparison to the previous ANN and the conventional LCF method, at least a five-fold reduction in the estimated BFS errors is reported. These results highlight the importance of the dataset in machine learning applications.
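The early-stopping rule used in this pipeline (stop once validation performance starts degrading) can be sketched independently of any particular model: keep training while the validation loss improves, and stop once it has failed to improve for a set number of epochs ("patience"). The loss sequence below is mocked purely for illustration.

```python
def train_with_early_stopping(val_losses, patience=3):
    """Return (best_epoch, stop_epoch) for a sequence of validation losses."""
    best_loss, best_epoch, bad_epochs = float("inf"), -1, 0
    for epoch, loss in enumerate(val_losses):
        if loss < best_loss:
            best_loss, best_epoch, bad_epochs = loss, epoch, 0
        else:
            bad_epochs += 1
            if bad_epochs >= patience:     # validation loss keeps degrading
                return best_epoch, epoch   # -> stop; keep the best checkpoint
    return best_epoch, len(val_losses) - 1

# Validation loss improves, then starts degrading as overfitting sets in
losses = [1.0, 0.6, 0.4, 0.35, 0.37, 0.39, 0.41, 0.45]
best, stopped = train_with_early_stopping(losses, patience=3)
# best epoch 3 (loss 0.35); training stops at epoch 6
```

In a real pipeline the model weights saved at `best` would be restored, which is what prevents the overfitting described above from reaching the final model.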
Figure 4. Common training procedure in machine learning.
Recently, Chen et al. [98] proposed one-dimensional CNNs for BFS extraction and compared their approach with the conventional LCF and the simple ANN. Specifically, the authors used a special type of CNN, called a wavelet convolutional neural network. The architecture of the proposed network is shown in Figure 5. It consists of two paths of convolutional layers, which end up in a fully connected neural network after a residual connection is applied. The term "wavelet" arises from the type of activation function that is used in the fully connected network. The authors assert that the wavelet activation function was employed to cover more local characteristics in the frequency domain. The input of the CNN is a single normalized BGS consisting of 100 frequency scanning points, while the output is a single value indicating the BFS. The batch normalization and max pooling layers are used to address the covariance shift problem and to down-sample the data, respectively.
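As a side note on the down-sampling role of max pooling mentioned above, a one-dimensional max pool (window and stride of 2, chosen here only for illustration) halves the length of a feature sequence while keeping the locally strongest responses:

```python
def max_pool_1d(xs, size=2, stride=2):
    """Slide a window over the sequence and keep the maximum of each window."""
    return [max(xs[i:i + size]) for i in range(0, len(xs) - size + 1, stride)]

feat = [0.1, 0.4, 0.3, 0.9, 0.2, 0.8]
pooled = max_pool_1d(feat)   # [0.4, 0.9, 0.8]: length halved, peaks preserved
```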
Figure 5. Architecture of the "wavelet" convolutional neural network (CNN) consisting of two paths of convolutional layers (top) and a stack of fully connected wavelet layers (bottom). One-dimensional convolutional layer (Conv 1D); Batch normalization (BN). Reprinted with permission from [98] © Optica Publishing Group.
Similar to the previous methodology, the authors made use of synthetic data for training. The data consisted of different BFSs, linewidths and SNRs. The model's evaluation based on experimental data, obtained by a BOTDR system, showed an improvement in terms of temperature error in comparison to the conventional LCF and a simple ANN consisting of two hidden layers. Specifically, the results indicated that the temperature root mean square error (RMSE) of the CNN is approximately 1 °C lower than that of the conventional LCF method. However, the improvement of the CNN in comparison to the ANN seems to highly depend on the temperature. For example, the error difference at 61.62 °C is around 1 °C, while at 65.82 °C, it becomes negligible. The results are shown in Figure 6. We note that the authors trained the ANN and CNN using the same hardware and software.
Figure 6. The root mean square error (RMSE) of the extracted temperature using Lorentzian curve fitting (LCF), artificial neural networks (ANN) and convolutional neural networks (CNN), adapted with permission from [98] © Optica Publishing Group.
Chang et al. [99] reported that due to the correlation of the BGS in the time domain, a two-dimensional (2D) CNN that extracts the BFS directly from distributed BGSs could be advantageous. Specifically, they demonstrated a CNN architecture, as shown in Figure 7, which consists of a 2D convolutional layer, a batch normalization layer and a single max pooling layer. After the max pooling layer, which reduces the dimensions of the processed data, a residual subnetwork consisting of a series of convolutional and batch normalization layers is placed. The authors claimed that the use of that subnetwork facilitates the feature perception in the time and frequency domain as well. The last part of the CNN consists of consecutive 2D convolutional layers with a decreasing number of filters. In contrast to the CNN architecture in Figure 5, this CNN does not include fully connected layers. The size of the input layer, 151 × N, refers to the number of data points of the BGS and the number of distributed BGSs, respectively.
Figure 7. Convolutional neural network (CNN) for distributed Brillouin frequency shift (BFS) extraction. Reprinted with permission from [99] © Chinese Laser Press.
Similar to the previous methods, both the inputs and the outputs were normalized. The training set arose from synthetic data, including BGSs with different BFSs, linewidths and SNR values. The reported training time was approximately two hours using an Nvidia GTX 1080 GPU. It is notable that in comparison to a CPU, a GPU results in significantly faster training times [100].
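The normalization step can be sketched as min-max scaling of the BGS samples, with the inverse mapping used to turn a normalized BFS prediction back into an absolute frequency; the gain values and scanning range below are illustrative placeholders.

```python
def minmax(xs):
    """Scale a list of gain samples to the range [0, 1]."""
    lo, hi = min(xs), max(xs)
    return [(x - lo) / (hi - lo) for x in xs]

def denormalize(y, f_min, f_max):
    """Map a normalized BFS prediction back onto the scanned frequency range."""
    return f_min + y * (f_max - f_min)

bgs = [0.20, 0.45, 1.30, 2.00, 1.10, 0.40]
bgs_norm = minmax(bgs)                        # gains scaled to [0, 1]
bfs_ghz = denormalize(0.5, 10.750, 10.950)    # mid-range output -> 10.85 GHz
```

Scaling the inputs makes the model insensitive to the absolute Brillouin gain, while scaling the outputs lets the same network be reused across different scanning ranges, as noted for the ANN of [92].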
The performance evaluation based on experimental data, collected with a BOTDA
system, showed that in comparison to the conventional LCF method, the CNN has slightly
improved the error of the BFS estimation. However, the authors are confident that the
performance could be further improved by optimizing the CNN architecture and the
training dataset. Furthermore, the authors reported that the CNN required only 0.13 s for
the processing of 1000 BGSs, while the corresponding computation time for the conventional
LCF approach was 0.81 s. A similar speed enhancement was also reported by Qi et al. [101].
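The conventional LCF baseline referred to throughout this section fits a Lorentzian profile to each measured BGS and reports the fitted center frequency as the BFS. A minimal sketch using SciPy follows; the frequency grid, linewidth and noise level are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(f, gain, bfs, fwhm):
    """Lorentzian profile used to model the Brillouin gain spectrum."""
    return gain / (1.0 + ((f - bfs) / (fwhm / 2.0)) ** 2)

def lcf_bfs(f_ghz, bgs):
    """Estimate the BFS by Lorentzian curve fitting (LCF): fit the profile
    and return the fitted center frequency."""
    f0 = f_ghz[np.argmax(bgs)]          # initial guess: spectrum maximum
    p0 = [bgs.max(), f0, 0.05]
    popt, _ = curve_fit(lorentzian, f_ghz, bgs, p0=p0)
    return popt[1]

# noisy example spectrum with a known BFS of 10.82 GHz
f = np.linspace(10.65, 10.95, 151)
rng = np.random.default_rng(1)
bgs = lorentzian(f, 1.0, 10.82, 0.05) + rng.normal(0.0, 0.02, f.size)
print(lcf_bfs(f, bgs))  # close to 10.82
```

In practice this iterative fit is repeated for every sensing position along the fiber, which is why LCF becomes the computational bottleneck for long fibers and why the reported per-spectrum speed-ups of the neural networks matter.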
Sensors 2023, 23, 6187
Ge et al. [68] showed that similar 2D CNNs can also result in enhanced spatial resolution in BOTDA and particularly when long pulses are used. Long pulses in BOTDA result in longer measurement lengths but, on the other hand, decrease the spatial resolution. Conventionally, this trade-off problem can be alleviated by implementing a differential pulse-width pair (DPP), but at the cost of a two-fold increase in measurement time. Ge et al. [68] showed that a CNN-assisted BOTDA is capable of reaching the resolution of the DPP-BOTDA without increasing the measurement time. An example of the BFS estimation accuracy is shown in Figure 8. Caceres et al. [102] used similar CNNs to enhance the spatial resolution in BOCDR/BOCDA sensors.
Figure 8. Brillouin frequency shift (BFS) estimation using a conventional BOTDA, a DPP-BOTDA and a CNN-BOTDA. Reprinted with permission from [68] © IEEE.
Lalam et al. [103] aimed at increasing the reliability of the neural networks. They proposed probabilistic neural networks that provide not only a point estimate of the BFS but also the prediction's uncertainty, which is a measure to assess the model's confidence. Therefore, when the model's prediction is not precise enough, this is indicated by the provided uncertainty. Furthermore, the neural network outputs the full width at half maximum (FWHM) of the Lorentzian curve as well. The structure is shown in Figure 9. For the sake of completeness, we note that BFS uncertainties were also extracted using LCF and classic [104] or Bayesian statistics [105].
Figure 9. Probabilistic convolutional neural network for Brillouin frequency shift and linewidth extraction. Reprinted with permission from [103] © SPIE.
Apart from neural networks, simpler machine learning methods, including SVM, AdaBoost and KNN, have been applied for BFS extraction. SVMs are supervised learning models that have been widely used in classification and regression analysis [93]. In contrast to ANNs, which require a large amount of data, SVM proved very efficient even if the available dataset is limited [106]. SVMs separate classes by constructing hyperplanes (decision surfaces) in high-dimensional spaces. SVM is named after the so-called support vectors, which are the data points that determine the orientation and position of the hyperplanes. Furthermore, SVM is based on kernels, which can be specified by e.g., linear, polynomial and radial basis functions [106]. Yao et al. [107] compared the influence of different kernel
functions on the BFS estimation and found that the Gaussian radial basis function delivers
the lowest errors. However, the width of the Gaussian kernel needs to be optimized so that
overfitting is addressed. Yao et al. [107] also commented on the training speed of the SVM, which, in general, is shorter than that of the ANNs. Specifically, the authors mentioned that
the training of the SVM lasted only several minutes, which is a significant advantage over
the ANN.
Zheng et al. [108] applied AdaBoost to extract the BFS. The AdaBoost algorithm trains
many weak classifiers, which are weighted depending on the classification rate that they
provide [109,110]. In the end, a strong classifier consisting of many weak classifiers arises.
The weak classifiers that the authors chose were simple decision trees. The authors claimed
that in cases of low SNR, where the LCF fails, the AdaBoost predicts the BFS with relatively
low errors (approximately 1 MHz). However, no information was provided about the
training and the prediction times. Furthermore, the trained AdaBoost is a classifier, which
means that no interpolation is possible. We believe that this problem could be addressed
by applying linear decision trees for regression [111,112].
In contrast to the previous algorithms, KNNs do not learn any model, and thus, no
training is needed [113]. This is a great advantage over other algorithms that require time-
consuming training (such as ANN and CNN). However, a dataset, including a plethora of
BGSs and BFSs is required because the KNN predictions are based on feature similarity.
Furthermore, the KNNs are characterized by two hyperparameters, namely the distance
function and the number of neighbors (k-value) to be considered. Zheng et al. [114,115]
made use of the Euclidean distance and optimized the k-value after a systematic analysis
of its impact on the BFS extraction. The results based on experimental data showed that the
KNNs provide lower BFS errors than those from the conventional LCF approach but only
if the SNR is low. This indicates that KNNs are more tolerant against noise than the LCF.
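The KNN scheme described above is simple enough to sketch from scratch: the BFS of a query spectrum is taken as the mean BFS of its k nearest reference spectra under the Euclidean distance. The toy triangular "spectra" and the helper name below are illustrative assumptions, not the authors' data or code.

```python
import numpy as np

def knn_predict_bfs(train_bgs, train_bfs, query_bgs, k=5):
    """Predict the BFS of a query BGS as the mean BFS of its k nearest
    training spectra under the Euclidean distance (no training phase)."""
    dists = np.linalg.norm(train_bgs - query_bgs, axis=1)  # Euclidean distances
    nearest = np.argsort(dists)[:k]                        # indices of k neighbors
    return train_bfs[nearest].mean()

# toy dataset: shifted triangular "spectra" with known centers
f = np.linspace(0.0, 1.0, 51)
centers = np.linspace(0.2, 0.8, 121)
train = np.array([np.maximum(0.0, 1.0 - np.abs(f - c) / 0.1) for c in centers])
query = np.maximum(0.0, 1.0 - np.abs(f - 0.5) / 0.1)
print(knn_predict_bfs(train, centers, query, k=3))  # ~0.5
```

Because no model is trained, all the computational cost is paid at prediction time: every query is compared against the full reference set, which is why a large collection of BGS–BFS pairs must be kept available, as noted above.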
Even though the proposed machine learning algorithms for BFS extraction have
proved very efficient, the requirement for fixed input dimensions is a significant limitation.
It is known that machine learning algorithms, in general, make predictions only based on
data with the same dimensions as the data that were provided to the algorithm during
training. This is of course impractical because the number of scanning frequencies, as
well as the frequency range, can vary depending on the application. To address this issue,
Liang et al. [92] applied linear interpolation based on the BGS so that the BGS always
consists of the same number of frequencies before it is processed by the machine learning
model. Furthermore, Xiao et al. [116] and Yao et al. [107] addressed this issue by regulating
the input dimensions with principal component analysis (PCA). Apart from this, PCA also
had a positive impact on the training time. We note that PCA is commonly used in data
analysis to reduce the dimensions of the data without losing significant information [117].
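The interpolation remedy applied by Liang et al. [92] can be sketched as a simple resampling of each measured BGS onto a fixed-size frequency grid before it enters the model. The 151-point target size and the coarse 4 MHz scan below are illustrative assumptions.

```python
import numpy as np

def resample_bgs(f_meas, bgs_meas, n_points=151):
    """Linearly interpolate a BGS measured on an arbitrary frequency grid
    onto a fixed-size grid so it matches the model's input dimension."""
    f_fixed = np.linspace(f_meas[0], f_meas[-1], n_points)
    return f_fixed, np.interp(f_fixed, f_meas, bgs_meas)

# a spectrum scanned with a coarse 4 MHz step is mapped to 151 points
f_coarse = np.arange(10.65, 10.951, 0.004)                    # GHz
bgs_coarse = 1.0 / (1.0 + ((f_coarse - 10.80) / 0.025) ** 2)  # peak at 10.80 GHz
f_fixed, bgs_fixed = resample_bgs(f_coarse, bgs_coarse)
print(bgs_fixed.shape)  # (151,)
```

The resampled spectrum can then be fed to a model with a fixed input layer regardless of the scanning step actually used during the measurement.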
Among the most common weaknesses in machine learning are the long training times that are related to the complexity of the algorithms. Usually, the more complex the algorithm, the longer the training. ANN and CNN are considered very complex, and usually,
the training lasts several hours. Considering also the optimization of the hyperparameters,
the total training time increases dramatically. This could be addressed to some extent using
simpler architectures and state-of-the-art optimization techniques [118–120].
Interpretability is of great importance for every machine learning algorithm. Although
some simple algorithms, such as linear and polynomial regression, are considered interpretable by themselves, ANNs and CNNs are usually treated as black boxes. This arises
from their complexity, which renders the interpretation of their decisions very difficult.
However, in the last few years, interpretable machine learning has gained much attention
and has already made significant progress. As an example, we mention that sensitivity
analysis, Taylor decomposition, deconvolution, guided backpropagation and layer-wise
relevance propagation are among the state-of-the-art techniques that have been proposed
to shed light on the neural networks’ decisions [121]. Other algorithms, such as KNN,
SVM and AdaBoost (decision trees), are easier to interpret. We note that in comparison to
all the aforementioned machine learning algorithms, KNN offers the fastest and easiest
interpretation [122]. We believe that further research on the interpretation of the proposed
machine learning algorithms for BFS extraction will create more trust, contribute to a more
efficient hyperparameter optimization and open the way for wider use in the future.
Figure 10. (a) Single Brillouin gain spectrum denoising using autoencoder-based neural networks. Reprinted with permission from [124] © IEEE; (b) 2D Brillouin gain spectrum denoising using convolutional neural networks. Adapted with permission from [125] © IEICE.
Wu et al. [126] and Zheng et al. [127] proposed CNNs that consider the spatial and spatio-temporal similarities, respectively. Specifically, the CNNs demonstrated by Wu et al. [126] accept 2D BGSs (Figure 10b) with the dimensions defined by the number of frequency scanning points and the number of the spatially resolved sensing points. They reported that the BM3D had a negative effect on the system's set spatial resolution, which was not observed when CNN denoisers were used. Zheng et al. [127] designed a CNN with three dimensions including the time. The authors concluded that the 3D CNN provides higher SNR than the 2D CNN, with the reported improvement being 3.6 dB. However, we note that the SNR improvement is expected to be related e.g., to the number of signal averages during the experiments. The results of these two papers indicated that the CNN denoisers are
more than two orders of magnitude faster than the conventional BM3D denoiser. This
enabled the real-time denoising of the experimentally obtained BGS allowing for even
dynamic strain sensing [127]. However, we need to mention that even though the denoising itself is fast, the training of the CNNs is time-consuming, and it can last up to
45 h as reported in [126]. It is of high importance to note that these training times were
acquired using a state-of-the-art GPU. The use of a CPU is expected to increase the training
time dramatically.
Very recently, Yang et al. [125] proposed a 2D CNN, namely an attention-guided denoising CNN, an architecture that has been widely used in the field of image recognition to shorten the computation time of deep CNN architectures [128,129]. The authors claimed that the new
CNN architecture could result in more accurate BFS estimations than the one used in [126].
However, more investigations including experimental data are required.
Even though neural network-based denoisers resulted in BGSs with high SNR and
short computation times, more investigations are required for a wider use in the future. As
mentioned previously, no optimization is performed once the denoising model is trained,
which renders the CNN denoisers faster than the BM3D conventional image denoising
method, as reported in [126]. To the best of our knowledge, a similar comparison between
CNN denoisers and other conventional denoising algorithms, such as non-local means
(NLM) and wavelet denoising (WD) using GPUs, has not been reported yet. Nevertheless,
a comparison between the three denoising algorithms, BM3D, NLM and WD, using a
CPU showed that WD is two orders of magnitude faster than the BM3D and NLM [130].
Therefore, even if the CNN denoisers are faster than the BM3D, further studies should
investigate whether the CNN denoisers are faster than the WD as well. We note that
the use of the same hardware (i.e., GPU) is of high importance when computation times
are compared.
A limitation of the neural network denoisers that needs to be addressed in the future
arises from the fact that the size of the input images should always match the network’s
input size. This means that all the images should consist of the same number of sampling
points and the same number of frequencies. For this reason, methods to address this issue,
such as zero-padding and interpolation, should be tested [131].
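Zero-padding, one of the two remedies suggested above, can be sketched in a few lines; the fixed target input size is an illustrative assumption.

```python
import numpy as np

def pad_to_input(img, target_shape):
    """Zero-pad a 2D BGS image (frequencies x positions) up to the
    denoiser's fixed input size; raises if the image is already larger."""
    pads = [(0, t - s) for s, t in zip(img.shape, target_shape)]
    if any(p[1] < 0 for p in pads):
        raise ValueError("image larger than the network input size")
    return np.pad(img, pads)

small = np.ones((100, 80))          # a smaller measured BGS image
padded = pad_to_input(small, (151, 128))
print(padded.shape)  # (151, 128)
```

After denoising, the padded region can simply be cropped away; interpolation, the alternative remedy, would instead rescale the image to the target size as in the BFS-extraction case.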
3.3. Machine Learning for Temperature and Strain Predictions Directly from the Brillouin
Gain Spectrum
Machine learning has also been used to extract temperature directly from BGSs.
Azad et al. [132,133] and Wang L. et al. [134] proposed a signal post-processing method
based on ANNs to predict temperature without extracting the BFS. First, an ANN was
trained based on the normalized BGS corresponding to different temperatures. The training
dataset consisted of ideal synthetic data with varying linewidths. We note that in contrast
to other training datasets, Azad et al. [133] did not add noise to the ideal synthetic data.
The authors trained separate ANNs for BGSs recorded using different frequency scanning steps. This results from the fact that the set frequency scanning step affects the number of data points of the BGS, and thus, ANNs with different nodes in the input layer are required. Figure 11 compares the performance of the ANN to that of the LCF and cross-correlation method (XCM) when different frequency scanning steps are used. The performance is calculated in terms of the temperature RMSE when the fiber is exposed to controlled-temperature conditions. In general, the ANNs perform better than the conventional methods, which, according to the authors, is attributed to the fact that the ANNs are trained and optimized for each frequency step separately. However, we observe that the ANNs perform significantly better than the conventional methods when the set frequency step is greater than 2. These results agree with those reported by Wang J. et al. [135] and Cao et al. [136] and indicate that ANNs can handle sparse data very well.
Figure 11. Performance comparison of the ANN, LCF and XCM in terms of the temperature RMSE when the fiber is heated to 29.90 °C (a), 39.14 °C (b) and 48.63 °C (c). Reprinted with permission from [133] © Optica Publishing Group.
Madaschi et al. [137] proposed a similar ANN for direct temperature extraction that could handle BGSs acquired with different frequency scanning steps. Specifically, they applied spline interpolation based on the BGS, so that the data points of the BGS are equal to the number of nodes in the input layer of the ANN. This solution increases the flexibility of the ANN, but according to the authors, the extracted temperature accuracy of this approach is slightly lower than the temperature accuracy of the separately trained ANNs. We note that a BGS interpolation has also been proposed and tested by Liang et al. [92] for BFS extraction, as mentioned in the previous chapter.
Azad et al. [133] and Madaschi et al. [137] highlighted the improvement in terms of computation time that the ANNs offer in comparison to the conventional methods. Both
reports agree that the temperature extraction through ANNs can be even two orders of
magnitude faster than the LCF approach.
Li et al. [138] studied the impact of the training dataset on the temperature accuracy
of the ANNs. Specifically, they created three different training datasets using synthetic
BGS consisting of (a) Lorentzian functions, (b) Pseudo-Voigt functions and (c) Pseudo-Voigt
functions with artificial noise. The authors tested the three different trained models on data
collected by a BOTDR system and concluded that the model trained with noisy Pseudo-
Voigt functions delivered the most accurate temperature predictions. However, because
the shape of the BGS that is obtained by systems that are based on pump pulses, such as
BOTDR and BOTDA, depends on the pump pulse power and width [88,139], a general
conclusion cannot be drawn.
The implementation of ANNs for temperature extraction has been also studied by
other research groups [123,140–144]. For example, Wang M. et al. [141] brought together
the state-of-the-art ANN-based signal processing with the internet of things (IoT) [145] to
facilitate automatization and enhance data management and analytics.
Zhang et al. [146] extracted temperature, applying kernel extreme learning machines
(K-ELM). ELM is a special case of ANNs consisting of a single hidden layer, where the
first weight matrix is randomly initialized [147,148]. This means that only the last weight
matrix is optimized, and thus, the training is faster. K-ELM is a modified version of the
simple ELM that introduces intrinsic kernel mapping [147]. In comparison to the simple
ELM, the K-ELM algorithm does not require either the number of nodes in the hidden
layer to be specified or the feature mapping to be known. According to Zhang et al. [146],
K-ELM proved to be very robust: in comparison to the conventional LCF approach, it slightly reduced the extracted temperature error by 0.3 °C and shortened the temperature extraction time by a factor of 120. The authors also applied the simple ELM and found that it performs significantly worse than the conventional LCF.
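The ELM idea described above, a random and fixed hidden layer with only the output weights solved in closed form, can be sketched as follows. The toy regression task and all hyperparameter values are illustrative assumptions, not the K-ELM configuration of [146].

```python
import numpy as np

rng = np.random.default_rng(0)

class ELM:
    """Extreme learning machine: the input weights are random and fixed;
    only the output weights are fitted, via regularized least squares."""
    def __init__(self, n_hidden=200, reg=1e-3):
        self.n_hidden, self.reg = n_hidden, reg

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)      # random feature map

    def fit(self, X, y):
        self.W = rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = rng.normal(size=self.n_hidden)
        H = self._hidden(X)
        # closed-form ridge solution for the output weights only
        A = H.T @ H + self.reg * np.eye(self.n_hidden)
        self.beta = np.linalg.solve(A, H.T @ y)
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta

# toy example: learn a simple linear target from one input feature
X = rng.uniform(-1.0, 1.0, size=(500, 1))
y = 3.0 * X[:, 0] + 1.0
model = ELM(n_hidden=50).fit(X, y)
```

Because only the linear output layer is fitted, "training" reduces to a single least-squares solve, which is what makes ELMs much faster to train than backpropagated ANNs, as noted above.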
Apart from neural networks, SVMs have also been applied to extract temperature
from BGSs [149]. SVMs are simpler than ANNs, and fewer hyperparameters need to be
optimized. Furthermore, the SVM average training procedure is significantly faster than
that of the ANNs. Wu et al. [149] used SVMs to extract temperature and concluded that
SVMs outperform the conventional LCF when the SNR of the data is low. At high SNR
values, the temperature accuracy of the SVM is comparable with that of the LCF method.
The authors stated that these results are very promising for long-range sensing because, at
distant positions, the SNR is significantly lower. Furthermore, the performance difference
between the SVMs and the LCF increases with the frequency scanning step. This agrees
completely with the results shown in [133] and indicates that not only the ANNs but
also the SVMs can handle sparse data very well. Wu et al. [149] also mentioned that the
training time, as well as the prediction time, is very short. As an example, the training
time of an SVM was approximately 1 s, while the prediction time of 101,500 BGSs was
less than 16 s. We note that even though the prediction times of the SVMs and the ANNs
are similar, the SVMs can be trained much faster than the ANNs. In another paper, the
same authors used PCA to further reduce the data processing time without sacrificing
temperature accuracy [150]. The results reported by the authors indicate that the PCA
reduced the prediction time by up to 20%.
Nordin et al. [151–153] proposed the use of GLM to extract temperature. GLM is a generalized form of linear regression that does not assume that the response variables (targets)
are normally distributed. Similar to the previously mentioned machine learning algorithms,
GLM is capable of predicting the temperature directly from the BGS without estimating
the BFS. The authors concluded that GLM extracts temperature faster and more accurately
than the conventional LCF. Specifically, the temperature extraction time was approximately
two orders of magnitude faster than the LCF, while the temperature error improvement
varied from approximately 0.4 ◦ C to 5 ◦ C, depending on the frequency-tuning step and
the temperature conditions. The authors in [151] concluded that GLM in combination
with conventional BFS extraction methods, such as LCF, results in a significant increase
in temperature accuracy even when the SNR is low. The most important characteristic of
the GLM is the easy interpretation, which arises from the algorithm’s simplicity and its
straightforward implementation.
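Since a GLM with an identity link reduces to ordinary least squares, the direct BGS-to-temperature regression can be illustrated with a noise-free toy problem. The 1 MHz/°C sensitivity, the frequency grid and the temperature range below are illustrative assumptions, not values from the cited work.

```python
import numpy as np

rng = np.random.default_rng(0)

# ideal synthetic training set: noise-free Lorentzian BGSs whose peak
# position shifts linearly with temperature (illustrative assumption)
f = np.linspace(10.65, 10.95, 151)                  # scanned frequencies (GHz)
temps = rng.uniform(20.0, 80.0, 400)                # temperatures (deg C)
bfs = 10.75 + 1e-3 * (temps - 20.0)                 # ~1 MHz/deg C sensitivity
X = 1.0 / (1.0 + ((f[None, :] - bfs[:, None]) / 0.025) ** 2)

# identity-link GLM, i.e., ordinary least squares: temperature is
# regressed directly on the BGS samples, with no BFS estimation step
A = np.hstack([X, np.ones((len(X), 1))])            # intercept column
coef, *_ = np.linalg.lstsq(A, temps, rcond=None)
rmse = float(np.sqrt(np.mean((A @ coef - temps) ** 2)))
```

The fit is a single linear solve, which reflects the simplicity and easy interpretation highlighted for the GLM: each coefficient directly weights one scanned frequency of the spectrum.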
In another publication, Nordin et al. [154] trained different machine learning algorithms for direct temperature extraction and found that random forest performs slightly
better than the GLM in terms of temperature precision. We note that random forest is an
ensemble of decision trees that usually outperforms single decision trees but at the cost of
complexity [155]. The authors also applied ANNs, but surprisingly, they found that they
perform worse than the conventional LCF. This is in contrast to all the aforementioned
studies [132–134,141] that showed that ANNs outperform the conventional LCF. However,
we note that in comparison to other machine learning algorithms, such as random forest,
SVM and GLM, ANNs require, in general, much larger datasets and the hyperparameter
tuning is more complex and time-consuming. Therefore, the relatively low ANN performance reported by Nordin et al. [154] may be attributed to an insufficient dataset or to a
not well-optimized neural network structure.
Apart from direct temperature extraction, similar machine learning approaches have
been proposed for direct strain extraction [156–160]. As an example, we mention that
Song et al. in 2020 proposed deep ANNs to detect microcracks in structural elements [156].
Even though the algorithm performed very well, in 2021 they made use of PCA and SVM
for the same purpose, asserting that the deep ANNs were difficult to implement and
interpret [158].
In comparison to the approaches presented in Section 3.1, the temperature or strain prediction directly from the BGS represents a more compact solution and allows for predictions based not only on the BFS but also on other features that can be extracted from
the BGS, such as linewidth and gain. Because, in many cases, these features depend on
the experimental settings, e.g., pulse width and power, most of the authors trained the
machine learning models using synthetic data, so that no relationship between linewidth
or gain and the measurand can be learned. However, we note that the use of additional
features can potentially result in improved temperature errors, and this can be investigated
in the future.
Due to temperature and strain cross-sensitivity, the direct temperature (or strain)
extraction from the BGS can completely fail if strain (or temperature) changes occur. This is
a clear disadvantage compared to the previous approaches described in Section 3.1, and
thus, methods to extract temperature and strain simultaneously using machine learning
have also been proposed.
Researchers used machine learning to simultaneously predict two parameters, addressing the well-known cross-sensitivity problem. This is of great importance for accurate
temperature or strain monitoring but also for industrial applications, where simultaneous
temperature and strain monitoring is needed.
Wang B. et al. [72] proposed ANNs for temperature and strain discrimination using
a LEAF fiber. LEAF fibers are characterized by a BGS with two peaks, as illustrated in
Figure 12. These two peaks have different temperature and strain sensitivities, which means
that the two parameters could be decoupled even with the conventional equation-solving
method as described in Section 2. However, if the SNR is low, the conventional approach
comes at the cost of large errors, which does not allow for any practical application.
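The conventional equation-solving method mentioned above inverts a 2 × 2 sensitivity matrix relating the two peak shifts to the temperature and strain changes. The coefficient values below are illustrative assumptions, not the measured LEAF sensitivities from [72].

```python
import numpy as np

# illustrative sensitivity coefficients for the two LEAF peaks (assumed):
# rows: peak 1 and peak 2; columns: temperature (MHz/degC), strain (MHz/ue)
C = np.array([[1.00, 0.048],
              [1.20, 0.055]])

def solve_temp_strain(dnu_mhz):
    """Conventional equation-solving discrimination: invert the 2x2
    sensitivity matrix to recover (dT, d_strain) from the two peak shifts."""
    return np.linalg.solve(C, dnu_mhz)

# forward-simulate the shifts for dT = 10 degC and d_strain = 100 ue,
# then invert them again
true = np.array([10.0, 100.0])
dnu = C @ true
dT, deps = solve_temp_strain(dnu)
print(round(dT, 6), round(deps, 6))  # recovers 10.0 and 100.0
```

Note that the two rows of the sensitivity matrix are nearly proportional, so its determinant is small and any noise on the measured peak shifts is strongly amplified during the inversion, consistent with the large errors reported for this approach at low SNR.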
Wang B. et al. [72] trained the ANN with a synthetic double-peak BGS. The ANN was
tested not only on synthetic data but also on BOTDA experimental data resulting from an
optical fiber of 24 km. They concluded that ANNs provide temperature and strain RMSE of
4.2 ◦ C and 134.2 µε, respectively. These temperature and strain errors were approximately
seven and five times lower than those obtained from the conventional equation-solving
method, respectively.
Figure 12. Simultaneous temperature and strain extraction using a two-peak Brillouin gain spectrum from a large effective area fiber (LEAF) and artificial neural networks. I: input layer; H1: first hidden layer; H2: second hidden layer; O: output layer. Reprinted with permission from [72] © Optica Publishing Group.
Yang et al. [73] followed a similar methodology but used one-dimensional CNNs instead of ANNs. Specifically, they used a synthetic two-peak BGS and experimental data to train and test the CNN, respectively. They employed an approximately 20 km optical fiber and concluded that CNNs provide a temperature and strain RMSE of 2 °C and 32.3 µε, respectively.
Ruiz-Lombera et al. [71] reported on simultaneous temperature and strain sensing in a standard optical fiber using PCA and ANN, but using a classification instead of a regression algorithm. Specifically, the ANN was trained to predict 40 temperature and strain classes in total. The temperature and strain ranges were from 22 °C to 62 °C and from 0 µε to 1536 µε, respectively. With the hyperparameters of the ANN being optimized, the classification rate reached almost 90%. Even though the classification accuracy is high, we have to note that the set temperature and strain steps were approximately 10 °C and 200 µε, respectively.
The majority of the authors estimated the performance of their machine learning models in terms of the BFS, temperature or strain error. However, we need to note that the reported performances do not depend only on the applied machine learning algorithm, but on a plethora of factors, such as the experimental parameters (length of the fiber, spatial resolution, measurement settings [161]), the error estimation methodology and metric, the stability of the climate chambers, the accuracy of the reference sensors, the precision of the fiber optic stretchers, etc.
Apart from accuracy, many authors estimated the performance of their methods by
considering the prediction time. However, this criterion alone cannot be used to compare
the various reported machine learning methodologies. This limitation arises from the fact
that prediction time is influenced not only by the machine learning algorithm itself but
also by the hardware and software utilized. Factors such as the type and number of CPU threads, the computational power of the GPU, and the machine learning framework employed (e.g., Keras, PyTorch, TensorFlow) strongly affect the prediction time [92,100,162].
Consequently, it is not reliable to compare previously employed methodologies solely
based on errors or the prediction time. Hence, it is crucial to carefully consider the context
and specific details of each study when evaluating the reported performance of machine
learning algorithms.
To enhance the understanding of the appropriate application and suitability of each
algorithm, a comprehensive table is provided below (Table 1), outlining the strengths and
weaknesses of the employed machine learning methodologies.
Table 1. Comparison of the strengths and weaknesses of the applied machine learning algorithms.
The last column provides the references that apply the corresponding machine learning algorithms.
Figure 13. Temperature RMSE of the conventional LCF method (blue dots) vs. the measurement time. The dashed red line corresponds to the CNN performance based on data obtained using 4 min measurements. Adapted from [69]. Copyright 2021, licensed under a Creative Commons Attribution 4.0 International license.
Besides the measurement time, the problem of cross-sensitivity is also of great importance towards a wider use of BOFDA in industrial applications in the future, and thus, Karapanagiotis et al. [70] proposed simple machine learning to discriminate temperature and strain in standard telecom optical fibers. The use of these fibers opens the way for fiber optic monitoring using the already existing laid-out fiber optic networks. The authors demonstrated a BOFDA system of high SNR to obtain the multipeak spectrum of the legacy standard SMF28® (Corning®) optical fiber. The multipeak spectrum of the standard fiber is not easily obtainable, and thus, a high SNR is required [168]. That spectrum is characterized by three secondary peaks, whose amplitudes are more than two orders of magnitude lower than that of the fundamental. The BFSs were extracted using the conventional LCF method, as described in Section 2. Ridge regression [155], which is nothing more than a simple polynomial regression including a penalty term to avoid overfitting, was used. The algorithm managed to capture nonlinearities in the data and delivered temperature and strain errors of 2.6 °C and 58 µε, respectively. We note that both the training and test datasets consisted of experimental data, and the errors were calculated using cross-validation. Gaussian process regression (GPR) [169], which is based on Bayesian statistics, was also used to extract temperature and strain and delivered 22% lower temperature and strain errors than ridge regression. We note that the optical fiber's total length was approximately 400 m, and the temperature and strain errors resulting from the equation-solving method were 5 °C and 114 µε, respectively. The proposed methodology is shown in Figure 14.
Figure 14. Temperature and strain discrimination using the BFS extracted via conventional Lorentzian curve-fitting. Adapted with permission from [70] © Optica Publishing Group.
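As an illustration of this feature-based approach, the sketch below compares ridge regression on polynomial features with Gaussian process regression, estimating errors via cross-validation as in [70]. It is a minimal, self-contained example on synthetic data: the peak sensitivities, noise levels, and model hyperparameters are all assumptions, not values from the cited work.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler

rng = np.random.default_rng(0)

# Synthetic stand-in for the experiment: each sample holds the BFSs of the
# fundamental and three secondary peaks (MHz, hypothetical sensitivities);
# the targets are temperature (degC) and strain (microstrain).
n = 300
temp = rng.uniform(20.0, 60.0, n)
strain = rng.uniform(0.0, 1000.0, n)
sens = np.array([[1.0, 0.050], [1.1, 0.048], [0.9, 0.055], [1.2, 0.045]])
X = temp[:, None] * sens[:, 0] + strain[:, None] * sens[:, 1]
X = X + 0.02 * X**2 / X.mean()            # mild nonlinearity to be captured
X = X + rng.normal(0.0, 0.5, X.shape)     # measurement noise
y = np.column_stack([temp, strain])

# Ridge regression on polynomial features: a penalized polynomial fit.
ridge = make_pipeline(PolynomialFeatures(degree=2), StandardScaler(), Ridge(alpha=1.0))
# Gaussian process regression with an RBF kernel plus a noise term.
gpr = GaussianProcessRegressor(kernel=RBF(length_scale=10.0) + WhiteKernel(),
                               normalize_y=True)

results = {}
for name, model in [("ridge", ridge), ("GPR", gpr)]:
    pred = cross_val_predict(model, X, y, cv=5)       # errors via cross-validation
    results[name] = np.sqrt(((pred - y) ** 2).mean(axis=0))
    print(f"{name}: temperature RMSE = {results[name][0]:.2f} degC, "
          f"strain RMSE = {results[name][1]:.1f} ue")
```

On real data, the inputs would be the peak parameters extracted via LCF; the point here is only the workflow: spectral features in, two regression targets out, accuracy estimated by cross-validation.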
Apart from temperature and strain discrimination, temperature and relative humidity effects were also decoupled by using a humidity-sensitive polyimide (PI)-coated optical fiber [170,171]. We note that humidity causes the PI coating to swell, which, in turn, induces strain in the optical fiber and thus changes the BFS. Due to the high SNR of the system, the authors again managed to obtain a multipeak spectrum and followed a methodology similar to that of [70]. The difference lies in the fact that the temperature and humidity effects could not be separated by using only the BFSs, and thus, the linewidths were also employed. Algorithms such as ridge regression, decision trees, and ANNs were used. ANNs seemed to outperform the other algorithms, delivering temperature and relative humidity errors of 0.9 °C and 6.5 %RH, respectively.
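A small ANN of the kind described can be sketched with scikit-learn's MLPRegressor. The forward model below is entirely hypothetical (made-up sensitivities of BFS and linewidth to temperature and relative humidity); it only illustrates why adding the linewidths makes the inversion well-posed.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)

# Hypothetical responses of a PI-coated fiber: humidity swells the coating and
# strains the fiber, so BFS and linewidth respond to temperature and RH with
# different (illustrative) sensitivities, all in MHz.
n = 500
temp = rng.uniform(20.0, 60.0, n)    # degC
rh = rng.uniform(10.0, 90.0, n)      # %RH
noise = lambda: rng.normal(0.0, 0.3, n)
X = np.column_stack([
    1.00 * temp + 0.15 * rh + noise(),   # peak-1 BFS
    0.02 * temp + 0.08 * rh + noise(),   # peak-1 linewidth
    1.10 * temp + 0.10 * rh + noise(),   # peak-2 BFS
    0.03 * temp + 0.05 * rh + noise(),   # peak-2 linewidth
])
y = np.column_stack([temp, rh])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
ann = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000,
                                 random_state=0))
ann.fit(X_tr, y_tr)   # one network predicts both targets at once
rmse = np.sqrt(((ann.predict(X_te) - y_te) ** 2).mean(axis=0))
print(f"temperature RMSE = {rmse[0]:.2f} degC, RH RMSE = {rmse[1]:.2f} %RH")
```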
Unlike the previous machine learning approaches for temperature and strain discrimination in BOTDA sensors [71–73], which used the entire BGS as input, these last papers employed, as inputs, spectral parameters extracted via LCF. The advantage of extracting features is that it makes the models easier to interpret. For example, in [171] the authors used backward feature elimination [155] to study the feature importance and found that only the features extracted from the first two peaks contributed to the algorithm's decision. This finding indicates that half of the spectrum does not need to be acquired, which positively affects the measurement time. However, we need to mention that feature extraction via LCF may be challenging in cases of low SNR.
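Backward feature elimination of this kind is available off the shelf. The sketch below uses scikit-learn's SequentialFeatureSelector on synthetic data in which, by construction, only the first two of six hypothetical spectral features carry information, mirroring the finding that only the first peaks mattered.

```python
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import Ridge

rng = np.random.default_rng(2)

# Six hypothetical spectral features (e.g., BFS and linewidth of three peaks);
# the target depends only on features 0 and 1.
n = 400
X = rng.normal(0.0, 1.0, (n, 6))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(0.0, 0.1, n)

# Backward elimination: start from all features and greedily drop the one whose
# removal degrades cross-validated performance the least.
selector = SequentialFeatureSelector(Ridge(alpha=1.0), n_features_to_select=2,
                                     direction="backward", cv=5)
selector.fit(X, y)
selected = np.flatnonzero(selector.get_support())
print("selected features:", selected)   # the two informative features survive
```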
We note that the methods described above can potentially be combined to demonstrate simultaneous multiparameter sensing, including temperature, strain, and humidity. Specifically, this could be achieved by applying machine learning to a two-fiber configuration consisting of an acrylate-coated fiber and a PI-coated fiber placed in parallel and close to each other. With the acrylate-coated fiber measuring strain and temperature and the PI-coated fiber measuring humidity, temperature, and strain, a multiparameter Brillouin DFOS could be feasible.
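For reference, the conventional equation-solving alternative for such a two-fiber configuration reduces to inverting a small sensitivity matrix. All coefficients below are illustrative placeholders, not values from the cited works.

```python
import numpy as np

# Hypothetical sensitivity matrix: rows are the observables (acrylate-fiber BFS,
# PI-fiber BFS, PI-fiber linewidth), columns the unknowns (delta-T in degC,
# delta-strain in ue, delta-RH in %RH). All numbers are illustrative only.
S = np.array([
    [1.00, 0.050, 0.00],   # acrylate BFS: no humidity response
    [1.10, 0.048, 0.15],   # PI BFS: coating swelling adds an RH term
    [0.02, 0.001, 0.08],   # PI linewidth
])

true = np.array([5.0, 100.0, 20.0])   # delta-T, delta-strain, delta-RH
m = S @ true                          # simulated measurements (MHz)
est = np.linalg.solve(S, m)           # equation-solving method: invert the 3x3 system
print(est)
```

In practice the conditioning of this matrix, and hence the noise amplification, is what makes the machine learning route attractive when the responses are nonlinear or nearly degenerate.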
The aforementioned algorithms can also be employed in other Brillouin DFOS systems (e.g., BOTDA and BOCDA), provided that those sensors are able to record a high-SNR multipeak BGS similar to the one shown in Figure 14. This is because the authors in [70,170,171] made use of spectral properties that can be extracted via conventional LCF in all Brillouin DFOS systems.
Time domain systems are more commonly employed in both research and industry
compared to frequency domain systems. As a result, the majority of machine learning
approaches have been primarily implemented in the context of time domain systems.
Nevertheless, it is worth noting that in many instances, machine learning methodologies
employed in time domain systems can be readily adapted and applied to frequency domain
systems as well.
Machine learning-based denoisers, for example, have been shown to potentially outperform well-known denoising algorithms, such as BM3D and NLM.
While most of the machine learning approaches can be applied to Brillouin DFOS
systems, regardless of whether they operate in the time or frequency domain, there are some
approaches that have been tailored to specific systems. For instance, machine
learning has enabled a simple BOTDA system to achieve the same spatial resolution as
a more complex DPP-BOTDA setup. Additionally, in BOFDA sensors, machine learning
contributed to a significant reduction of the measurement time, which is expected to render
BOFDA more attractive for applications in the field.
In the future, machine learning can also be combined with other newly developed
signal processing techniques. Recently, compressed sensing, for example, has gained
increasing attention for reconstructing signals that have been sampled below the Nyquist
frequency [172]. Compressed sensing has already been applied in Brillouin DFOSs to
reduce the recorded data and consequently, to shorten the measurement time [173–175]. We
believe that compressed sensing in combination with machine learning will contribute to the
further development of Brillouin DFOSs. We note that the combination of machine learning
and compressed sensing is already known in the literature as compressed learning [176].
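As a toy illustration of the compressed sensing principle (not of any specific Brillouin implementation), a sparse signal can be reconstructed from far fewer random projections than conventional sampling would require, here via L1-regularized least squares.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(3)

# A length-200 signal that is 5-sparse is recovered from only 60 random
# projections via L1-regularized least squares.
n, m, k = 200, 60, 5
x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = rng.uniform(1.0, 3.0, k)

A = rng.normal(0.0, 1.0 / np.sqrt(m), (m, n))   # random measurement matrix
y = A @ x                                        # m << n compressed measurements

lasso = Lasso(alpha=1e-3, fit_intercept=False, max_iter=50_000)
lasso.fit(A, y)
rel_err = np.linalg.norm(lasso.coef_ - x) / np.linalg.norm(x)
print(f"relative reconstruction error: {rel_err:.3f}")
```

Compressed learning would replace the explicit reconstruction step with a model trained directly on the compressed measurements y.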
In this paper, we highlighted the achievements that machine learning has brought in
Brillouin DFOSs, and we also clarified the weaknesses, so that the limits will be pushed
even further in the future. One of the most important weaknesses of the proposed methodologies is their limited interpretability. However, we believe that with the help of new
techniques that recently shed light on complex machine learning algorithms, we will
soon start witnessing an increasing number of interpretable machine learning-assisted
Brillouin DFOS systems. The interpretation of the models will render the hyperparameter
optimization process more efficient and will facilitate the release of industrial machine
learning systems. We hope that this review will contribute towards further investigations
in the future.
Author Contributions: Conceptualization, C.K.; formal analysis, C.K.; writing—original draft prepa-
ration, C.K.; writing—review and editing, C.K. and K.K.; visualization, C.K.; supervision, K.K.; project
administration, K.K. All authors have read and agreed to the published version of the manuscript.
Funding: PhD program of the Bundesanstalt für Materialforschung und -prüfung (BAM).
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement: Not applicable.
Conflicts of Interest: The authors declare no conflict of interest.
Abbreviations
The following abbreviations are used in this manuscript:
ANN artificial neural network
BOTDA Brillouin optical time domain analysis
BOTDR Brillouin optical time domain reflectometry
BOFDA Brillouin optical frequency domain analysis
BOFDR Brillouin optical frequency domain reflectometry
BOCDA Brillouin optical correlation domain analysis
BOCDR Brillouin optical correlation domain reflectometry
BFS Brillouin frequency shift
BGS Brillouin gain spectrum
BM3D block-matching and 3D filtering
CNN convolutional neural network
DFOS distributed fiber optic sensor
GPR Gaussian process regression
LCF Lorentzian curve fitting
NLM non-local means
PI polyimide
SNR signal-to-noise ratio
References
1. Ecke, W.; Nöther, N.; Peters, K.J.; Wosniok, A.; Krebber, K.; Meyendorf, N.G.; Thiele, E. A distributed fiber optic sensor system for
dike monitoring using Brillouin optical frequency domain analysis. In Proceedings of the Smart Sensor Phenomena, Technology,
Networks, and Systems, San Diego, CA, USA, 10–12 March 2008.
2. Schenato, L. A Review of Distributed Fibre Optic Sensors for Geo-Hydrological Applications. Appl. Sci. 2017, 7, 896. [CrossRef]
3. Bado, M.F.; Casas, J.R. A Review of Recent Distributed Optical Fiber Sensors Applications for Civil Engineering Structural Health
Monitoring. Sensors 2021, 21, 1818. [CrossRef] [PubMed]
4. Monsberger, C.M.; Bauer, P.; Buchmayer, F.; Lienhart, W. Large-scale distributed fiber optic sensing network for short and
long-term integrity monitoring of tunnel linings. J. Civ. Struct. Health 2022, 12, 1317–1327. [CrossRef]
5. Wu, T.; Liu, G.; Fu, S.; Xing, F. Recent Progress of Fiber-Optic Sensors for the Structural Health Monitoring of Civil Infrastructure.
Sensors 2020, 20, 4517. [CrossRef] [PubMed]
6. Stajanca, P.; Chruscicki, S.; Homann, T.; Seifert, S.; Schmidt, D.; Habib, A. Detection of Leak-Induced Pipeline Vibrations Using
Fiber-Optic Distributed Acoustic Sensing. Sensors 2018, 18, 2841. [CrossRef] [PubMed]
7. Matsumoto, H.; Araki, E.; Kimura, T.; Fujie, G.; Shiraishi, K.; Tonegawa, T.; Obana, K.; Arai, R.; Kaiho, Y.; Nakamura, Y.; et al.
Detection of hydroacoustic signals on a fiber-optic submarine cable. Sci. Rep. 2021, 11, 2797. [CrossRef]
8. Fang, G.; Li, Y.E.; Zhao, Y.; Martin, E.R. Urban Near-Surface Seismic Monitoring Using Distributed Acoustic Sensing. Geophys.
Res. Lett. 2020, 47, e2019GL086115. [CrossRef]
9. Sladen, A.; Rivet, D.; Ampuero, J.P.; De Barros, L.; Hello, Y.; Calbris, G.; Lamare, P. Distributed sensing of earthquakes and
ocean-solid Earth interactions on seafloor telecom cables. Nat. Commun. 2019, 10, 5777. [CrossRef]
10. Spica, Z.J.; Perton, M.; Martin, E.R.; Beroza, G.C.; Biondi, B. Urban Seismic Site Characterization by Fiber-Optic Seismology.
J. Geophys. Res. Solid Earth 2020, 125, e2019JB018656. [CrossRef]
11. Masoudi, A.; Pilgrim, J.A.; Newson, T.P.; Brambilla, G. Subsea Cable Condition Monitoring with Distributed Optical Fiber
Vibration Sensor. J. Light. Technol. 2019, 37, 1352–1358. [CrossRef]
12. Min, R.; Liu, Z.; Pereira, L.; Yang, C.; Sui, Q.; Marques, C. Optical fiber sensing for marine environment and marine structural
health monitoring: A review. Opt. Laser Technol. 2021, 140, 107082. [CrossRef]
13. Thomas, P.J.; Hellevang, J.O. A fully distributed fibre optic sensor for relative humidity measurements. Sens. Actuators B Chem.
2017, 247, 284–289. [CrossRef]
14. Stajanca, P.; Hicke, K.; Krebber, K. Distributed Fiberoptic Sensor for Simultaneous Humidity and Temperature Monitoring Based
on Polyimide-Coated Optical Fibers. Sensors 2019, 19, 5279. [CrossRef] [PubMed]
15. He, C.; Korposh, S.; Correia, R.; Liu, L.; Hayes-Gill, B.R.; Morgan, S.P. Optical fibre sensor for simultaneous temperature and
relative humidity measurement: Towards absolute humidity evaluation. Sens. Actuators B Chem. 2021, 344, 130154. [CrossRef]
16. Schenato, L.; Galtarossa, A.; Pasuto, A.; Palmieri, L. Distributed optical fiber pressure sensors. Opt. Fiber Technol. 2020, 58, 102239.
[CrossRef]
17. Jaroszewicz, L.R.; Kusche, N.; Schukar, V.; Hofmann, D.; Basedau, F.; Habel, W.; Woschitz, H.; Lienhart, W. Field examples for
optical fibre sensor condition diagnostics based on distributed fibre optic strain sensing. In Proceedings of the 5th European
Workshop on Optical Fibre Sensors, Cracow, Poland, 19–22 May 2013.
18. Stajanca, P.; Mihai, L.; Sporea, D.; Neguţ, D.; Sturm, H.; Schukar, M.; Krebber, K. Effects of gamma radiation on perfluorinated
polymer optical fibers. Opt. Mater. 2016, 58, 226–233. [CrossRef]
19. Stajanca, P.; Krebber, K. Radiation-Induced Attenuation of Perfluorinated Polymer Optical Fibers for Radiation Monitoring.
Sensors 2017, 17, 1959. [CrossRef]
20. Lewis, E.; Wosniok, A.; Sporea, D.; Neguţ, D.; Krebber, K. Gamma radiation influence on silica optical fibers measured by optical
backscatter reflectometry and Brillouin sensing technique. In Proceedings of the 6th European Workshop on Optical Fibre Sensors,
Limerick, Ireland, 31 May–3 June 2016.
21. Rizzolo, S.; Boukenter, A.; Ouerdane, Y.; Michalon, J.-Y.; Marin, E.; Macé, J.-R.; Girard, S. Distributed and discrete hydrogen
monitoring through optical fiber sensors based on optical frequency domain reflectometry. J. Phys. Photonics 2020, 2, 14009.
[CrossRef]
22. Lin, Y.; Liu, F.; He, X.; Jin, W.; Zhang, M.; Yang, F.; Ho, H.L.; Tan, Y.; Gu, L. Distributed gas sensing with optical fibre photothermal
interferometry. Opt. Express 2017, 25, 31568–31585. [CrossRef]
23. Hartog, A.H. An Introduction to Distributed Optical Fibre Sensors; CRC Press: Boca Raton, FL, USA, 2017.
24. Taranov, M.A.; Gorshkov, B.G.; Alekseev, A.E. Achievement of an 85 km Distance Range of Strain (Temperature) Measurements
Using Low-Coherence Rayleigh Reflectometry. Instrum. Exp. Tech. 2020, 63, 527–531. [CrossRef]
25. Lu, Z.; Feng, T.; Li, F.; Yao, X.S. Optical Frequency-Domain Reflectometry Based Distributed Temperature Sensing Using Rayleigh
Backscattering Enhanced Fiber. Sensors 2023, 23, 5748. [CrossRef]
26. Pedraza, A.; del Río, D.; Bautista-Juzgado, V.; Fernández-López, A.; Sanz-Andrés, Á. Study of the Feasibility of Decoupling
Temperature and Strain from a φ-PA-OFDR over an SMF Using Neural Networks. Sensors 2023, 23, 5515. [CrossRef]
27. Palmieri, L.; Schenato, L.; Santagiustina, M.; Galtarossa, A. Rayleigh-Based Distributed Optical Fiber Sensing. Sensors 2022,
22, 6811. [CrossRef]
28. Bernini, R.; Minardo, A.; Zeni, L. Dynamic strain measurement in optical fibers by stimulated Brillouin scattering. Opt. Lett. 2009,
34, 2613–2615. [CrossRef]
29. Voskoboinik, A.; Yilmaz, O.F.; Willner, A.W.; Tur, M. Sweep-free distributed Brillouin time-domain analyzer (SF-BOTDA).
Opt. Express 2011, 19, B842–B847. [CrossRef]
30. Zhou, D.; Dong, Y.; Wang, B.; Pang, C.; Ba, D.; Zhang, H.; Lu, Z.; Li, H.; Bao, X. Single-shot BOTDA based on an optical chirp
chain probe wave for distributed ultrafast measurement. Light Sci. Appl. 2018, 7, 32. [CrossRef]
31. Minardo, A.; Porcaro, G.; Giannetta, D.; Bernini, R.; Zeni, L. Real-time monitoring of railway traffic using slope-assisted Brillouin
distributed sensors. Appl. Opt. 2013, 52, 3770–3776. [CrossRef]
32. Motil, A.; Bergman, A.; Tur, M. [INVITED] State of the art of Brillouin fiber-optic distributed sensing. Opt. Laser Technol. 2016, 78,
81–103. [CrossRef]
33. Sun, X.; Yang, Z.; Hong, X.; Jin, S.; Luo, J.; Soto, M.A.; Wu, J. Ultra-long Brillouin optical time-domain analyzer based on distortion
compensating pulse and hybrid lumped–distributed amplification. APL Photonics 2022, 7, 126107. [CrossRef]
34. Zhang, L.; Wang, Z.; Li, J.; Zeng, J.; Li, Y.; Jia, X.; Rao, Y. Ultra-long dual-sideband BOTDA with balanced detection. Opt. Laser
Technol. 2015, 68, 206–210. [CrossRef]
35. Soto, M.A.; Bolognini, G.; Di Pasquale, F. Optimization of long-range BOTDA sensors with high resolution using first-order
bi-directional Raman amplification. Opt. Express 2011, 19, 4444–4457. [CrossRef]
36. Denisov, A.; Soto, M.A.; Thévenaz, L. Going beyond 1000000 resolved points in a Brillouin distributed fiber sensor: Theoretical
analysis and experimental demonstration. Light Sci. Appl. 2016, 5, e16074. [CrossRef] [PubMed]
37. Bernini, R.; Minardo, A.; Zeni, L. Distributed Sensing at Centimeter-Scale Spatial Resolution by BOFDA: Measurements and
Signal Processing. IEEE Photonics J. 2012, 4, 48–56. [CrossRef]
38. Sperber, T.; Eyal, A.; Tur, M.; Thévenaz, L. High spatial resolution distributed sensing in optical fibers by Brillouin gain-profile
tracing. Opt. Express 2010, 18, 8671–8679. [CrossRef] [PubMed]
39. Garus, D.; Gogolla, T.; Krebber, K.; Schliep, F. Distributed sensing technique based on Brillouin optical-fiber frequency-domain
analysis. Opt. Lett. 1996, 21, 1402–1404. [CrossRef]
40. Jayawickrema, U.M.N.; Herath, H.M.C.M.; Hettiarachchi, N.K.; Sooriyaarachchi, H.P.; Epaarachchi, J.A. Fibre-optic sensor and
deep learning-based structural health monitoring systems for civil structures: A review. Measurement 2022, 199, 111543. [CrossRef]
41. Kandamali, D.F.; Cao, X.; Tian, M.; Jin, Z.; Dong, H.; Yu, K. Machine learning methods for identification and classification of
events in φ-OTDR systems: A review. Appl. Opt. 2022, 61, 2975–2997. [CrossRef]
42. Shiloh, L.; Eyal, A.; Giryes, R. Efficient Processing of Distributed Acoustic Sensing Data Using a Deep Learning Approach.
J. Light. Technol. 2019, 37, 4755–4762. [CrossRef]
43. Ohodnicki, P.R.; Zhang, P.; Lalam, N.; Karki, D.; Venketeswaran, A.; Babaee, H.; Wright, R. Fusion of Distributed Fiber Optic
Sensing, Acoustic NDE, and Artificial Intelligence for Infrastructure Monitoring. In Proceedings of the 27th International
Conference on Optical Fiber Sensors, Alexandria, VA, USA, 29 August–2 September 2022.
44. Shiloh, L.; Eyal, A.; Giryes, R. Deep Learning Approach for Processing Fiber-Optic DAS Seismic Data. In Proceedings of the 26th
International Conference on Optical Fiber Sensors, Lausanne, Switzerland, 24–28 September 2018.
45. Shi, Y.; Wang, Y.; Wang, L.; Zhao, L.; Fan, Z. Multi-event classification for Φ-OTDR distributed optical fiber sensing system using
deep learning and support vector machine. Optik 2020, 221, 165373. [CrossRef]
46. Shi, Y.; Wang, Y.; Zhao, L.; Fan, Z. An Event Recognition Method for Φ-OTDR Sensing System Based on Deep Learning. Sensors
2019, 19, 3421. [CrossRef]
47. Peng, Z.; Jian, J.; Wen, H.; Gribok, A.; Wang, M.; Liu, H.; Huang, S.; Mao, Z.-H.; Chen, K.P. Distributed fiber sensor and
machine learning data analytics for pipeline protection against extrinsic intrusions and intrinsic corrosions. Opt. Express 2020, 28,
27277–27292. [CrossRef] [PubMed]
48. Li, S.; Peng, R.; Liu, Z. A surveillance system for urban buried pipeline subject to third-party threats based on fiber optic sensing
and convolutional neural network. Struct. Health Monit. 2020, 20, 1704–1715. [CrossRef]
49. Bai, Y.; Xing, J.; Xie, F.; Liu, S.; Li, J. Detection and identification of external intrusion signals from 33 km optical fiber sensing
system based on deep learning. Opt. Fiber Technol. 2019, 53, 102060. [CrossRef]
50. Chen, J.; Wu, H.; Liu, X.; Xiao, Y.; Wang, M.; Yang, M.; Rao, Y. A Real-Time Distributed Deep Learning Approach for Intelligent
Event Recognition in Long Distance Pipeline Monitoring with DOFS. In Proceedings of the 2018 International Conference
on Cyber-Enabled Distributed Computing and Knowledge Discovery (CyberC), Zhengzhou, China, 18–20 October 2018; pp.
290–2906.
51. Wu, Z.; Wang, Q.; Gribok, A.V.; Chen, K.P. Pipeline Degradation Evaluation Based on Distributed Fiber Sensors and Convolutional
Neural Networks (CNNs). In Proceedings of the 27th International Conference on Optical Fiber Sensors, Alexandria, VA, USA, 29
August–2 September 2022.
52. Wang, Q.; Jian, J.; Wang, M.; Wu, J.; Mao, Z.-H.; Gribok, A.V.; Chen, K.P. Pipeline Defects Detection and Classification Based on
Distributed Fiber Sensors and Neural Networks. In Optical Fiber Sensors Conference 2020 Special Edition; Optica Publishing Group:
Washington, DC, USA, 2020.
53. Wu, H.; Chen, J.; Liu, X.; Xiao, Y.; Wang, M.; Zheng, Y.; Rao, Y. One-Dimensional CNN-Based Intelligent Recognition of Vibrations
in Pipeline Monitoring with DAS. J. Light. Technol. 2019, 37, 4359–4366. [CrossRef]
54. Li, Z.; Zhang, J.; Wang, M.; Zhong, Y.; Peng, F. Fiber distributed acoustic sensing using convolutional long short-term memory
network: A field test on high-speed railway intrusion detection. Opt. Express 2020, 28, 2925–2938. [CrossRef]
55. Wang, Z.; Zheng, H.; Li, L.; Liang, J.; Wang, X.; Lu, B.; Ye, Q.; Qu, R.; Cai, H. Practical multi-class event classification approach for
distributed vibration sensing using deep dual path network. Opt. Express 2019, 27, 23682–23692. [CrossRef]
56. Kowarik, S.; Hussels, M.-T.; Chruscicki, S.; Münzenberger, S.; Lämmerhirt, A.; Pohl, P.; Schubert, M. Fiber Optic Train Monitoring
with Distributed Acoustic Sensing: Conventional and Neural Network Data Analysis. Sensors 2020, 20, 450. [CrossRef]
57. Hamadi, A.; Montarsolo, E.; Kabalan, A.; Garbini, G.P.; Hammi, T. Machine Learning Based Analysis of Optical Fiber Sensing
Intensity Data for Train Tracking Application. In Optical Fiber Sensors Conference 2020 Special Edition; Optica Publishing Group:
Washington, DC, USA, 2020.
58. Hernandez, P.D.; Ramirez, J.A.; Soto, M.A. Deep-Learning-Based Earthquake Detection for Fiber-Optic Distributed Acoustic
Sensing. J. Light. Technol. 2022, 40, 2639–2650. [CrossRef]
59. van den Ende, M.; Lior, I.; Ampuero, J.-P.; Sladen, A.; Ferrari, A.; Richard, C. A Self-Supervised Deep Learning Approach for
Blind Denoising and Waveform Coherence Enhancement in Distributed Acoustic Sensing Data. IEEE Trans. Neural Netw. Learn.
Syst. 2021, 1–14. [CrossRef]
60. Wang, M.; Deng, L.; Zhong, Y.; Zhang, J.; Peng, F. Rapid Response DAS Denoising Method Based on Deep Learning. J. Light.
Technol. 2021, 39, 2583–2593. [CrossRef]
61. Zhong, T.; Cheng, M.; Lu, S.; Dong, X.; Li, Y. RCEN: A Deep-Learning-Based Background Noise Suppression Method for DAS-VSP
Records. IEEE Geosci. Remote Sens. Lett. 2022, 19, 3004905. [CrossRef]
62. Yang, L.; Fomel, S.; Wang, S.; Chen, X.; Chen, W.; Saad, O.M.; Chen, Y. Denoising of distributed acoustic sensing data using
supervised deep learning. Geophysics 2022, 88, WA91–WA104. [CrossRef]
63. Liehr, S.; Borchardt, C.; Münzenberger, S. Long-distance fiber optic vibration sensing using convolutional neural networks as
real-time denoisers. Opt. Express 2020, 28, 39311–39325. [CrossRef]
64. Wang, Y.; Liu, Q.; Li, B.; Chen, D.; Li, H.; He, Z. Boosting the data processing speed by artificial neural network in distributed
fiber-optic sensor. In Optical Fiber Sensors Conference 2020 Special Edition; Optica Publishing Group: Washington, DC, USA, 2020.
65. Liehr, S.; Jäger, L.A.; Karapanagiotis, C.; Münzenberger, S.; Kowarik, S. Real-time dynamic strain sensing in optical fibers using
artificial neural networks. Opt. Express 2019, 27, 7405–7425. [CrossRef] [PubMed]
66. Venketeswaran, A.; Lalam, N.; Wuenschell, J.; Ohodnicki, P.R.; Badar, M.; Chen, K.P.; Lu, P.; Duan, Y.; Chorpening, B.; Buric, M.
Recent Advances in Machine Learning for Fiber Optic Sensor Applications. Adv. Intell. Syst. 2021, 4, 2100067. [CrossRef]
67. Krivosheev, A.I.; Barkov, F.L.; Konstantinov, Y.A.; Belokrylov, M.E. State-of-the-Art Methods for Determining the Frequency Shift
of Brillouin Scattering in Fiber-Optic Metrology and Sensing (Review). Instrum. Exp. Tech. 2022, 65, 687–710. [CrossRef]
68. Ge, Z.; Shen, L.; Zhao, C.; Wu, H.; Zhao, Z.; Tang, M. Enabling variable high spatial resolution retrieval from a long pulse BOTDA
sensor. IEEE Internet Things J. 2022, 10, 1813–1821. [CrossRef]
69. Karapanagiotis, C.; Wosniok, A.; Hicke, K.; Krebber, K. Time-Efficient Convolutional Neural Network-Assisted Brillouin Optical
Frequency Domain Analysis. Sensors 2021, 21, 2724. [CrossRef]
70. Karapanagiotis, C.; Hicke, K.; Krebber, K. Machine learning assisted BOFDA for simultaneous temperature and strain sensing in
a standard optical fiber. Opt. Express 2023, 31, 5027–5041. [CrossRef]
71. Ruiz-Lombera, R.; Fuentes, A.; Rodriguez-Cobo, L.; Lopez-Higuera, J.M.; Mirapeix, J. Simultaneous Temperature and Strain
Discrimination in a Conventional BOTDA via Artificial Neural Networks. J. Light. Technol. 2018, 36, 2114–2121. [CrossRef]
72. Wang, B.W.; Wang, L.; Guo, N.; Zhao, Z.Y.; Yu, C.Y.; Lu, C. Deep neural networks assisted BOTDA for simultaneous temperature
and strain measurement with enhanced accuracy. Opt. Express 2019, 27, 2530–2543. [CrossRef] [PubMed]
73. Yang, G.; Zeng, K.; Wang, L.; Tang, M.; Liu, D. Integrated denoising and extraction of both temperature and strain based on a
single CNN framework for a BOTDA sensing system. Opt. Express 2022, 30, 34453–34467. [CrossRef] [PubMed]
74. Bao, X.; Webb, D.J.; Jackson, D.A. Combined Distributed Temperature and Strain Sensor-Based on Brillouin Loss in an Optical-
Fiber. Opt. Lett. 1994, 19, 141–143. [CrossRef]
75. Alahbabi, M.N.; Cho, Y.T.; Newson, T.P. Simultaneous temperature and strain measurement with combined spontaneous Raman
and Brillouin scattering. Opt. Lett. 2005, 30, 1276–1278. [CrossRef]
76. Coscetta, A.; Catalano, E.; Cerri, E.; Cennamo, N.; Zeni, L.; Minardo, A. Hybrid Brillouin/Rayleigh sensor for multiparameter
measurements in optical fibers. Opt. Express 2021, 29, 24025–24031. [CrossRef]
77. Kee, H.H.; Lees, G.P.; Newson, T.P. All-fiber system for simultaneous interrogation of distributed strain and temperature sensing
by spontaneous Brillouin scattering. Opt. Lett. 2000, 25, 695–697. [CrossRef]
78. Kishida, K.; Yamauchi, Y.; Guzik, A. Study of Optical Fibers Strain-Temperature Sensitivities Using Hybrid Brillouin-Rayleigh
System. Photonic Sens. 2014, 4, 1–11. [CrossRef]
79. Liu, X.; Bao, X. Brillouin Spectrum in LEAF and Simultaneous Temperature and Strain Measurement. J. Light. Technol. 2012, 30,
1053–1059. [CrossRef]
80. Peng, J.Q.; Lu, Y.G.; Zhang, Z.L.; Wu, Z.N.; Zhang, Y.Y. Distributed Temperature and Strain Measurement Based on Brillouin
Gain Spectrum and Brillouin Beat Spectrum. IEEE Photonic Technol. Lett. 2021, 33, 1217–1220. [CrossRef]
81. Zhang, X.; Liu, S.; Zhang, J.; Qiao, L.; Wang, T.; Gao, S.; Zhang, M. Simultaneous Strain and Temperature Measurement Based on
Chaotic Brillouin Optical Correlation-Domain Analysis in Large-Effective-Area Fibers. Photonic Sens. 2021, 11, 377–386. [CrossRef]
82. Zou, L.F.; Bao, X.Y.; Afshar, V.S.; Chen, L. Dependence of the Brillouin frequency shift on strain and temperature in a photonic
crystal fiber. Opt. Lett. 2004, 29, 1485–1487. [CrossRef] [PubMed]
83. Li, Z.L.; Yan, L.S.; Zhang, X.P.; Pan, W. Temperature and Strain Discrimination in BOTDA Fiber Sensor by Utilizing Dispersion
Compensating Fiber. IEEE Sens. J. 2018, 18, 7100–7105. [CrossRef]
84. Ekechukwu, G.K.; Sharma, J. Well-scale demonstration of distributed pressure sensing using fiber-optic DAS and DTS. Sci. Rep.
2021, 11, 12505. [CrossRef] [PubMed]
85. Hotate, K.; Hasegawa, T. Measurement of Brillouin Gain Spectrum Distribution along an Optical Fiber Using a Correlation-Based
Technique: Proposal, Experiment and Simulation (Special Issue on Optical Fiber Sensors). IEICE Trans. Electron. 2000, 83, 405–412.
86. Hotate, K. Recent achievements in BOCDA/BOCDR. In Proceedings of the IEEE SENSORS 2014 Proceedings, Valencia, Spain,
2–5 November 2014; pp. 142–145.
87. Mizuno, Y.; Zou, W.; He, Z.; Hotate, K. Proposal of Brillouin optical correlation-domain reflectometry (BOCDR). Opt. Express
2008, 16, 12148–12153. [CrossRef]
88. Bao, X.; Brown, A.; DeMerchant, M.; Smith, J. Characterization of the Brillouin-loss spectrum of single-mode fibers by use of very
short (<10-ns) pulses. Opt. Lett. 1999, 24, 510–512.
89. Liu, Z.; Ferrier, G.; Bao, X.; Zeng, X.; Yu, Q.; Kim, A. Brillouin Scattering Based Distributed Fiber Optic Temperature Sensing for
Fire Detection. Fire Saf. Sci. 2003, 7, 221–232. [CrossRef]
90. Farahani, M.A.; Castillo-Guerra, E.; Colpitts, B.G. Accurate estimation of Brillouin frequency shift in Brillouin optical time domain
analysis sensors using cross correlation. Opt. Lett. 2011, 36, 4275–4277. [CrossRef]
91. Farahani, M.A.; Castillo-Guerra, E.; Colpitts, B.G. A Detailed Evaluation of the Correlation-Based Method Used for Estimation of
the Brillouin Frequency Shift in BOTDA Sensors. IEEE Sens. J. 2013, 13, 4589–4598. [CrossRef]
92. Liang, Y.; Jiang, J.; Chen, Y.; Zhu, R.; Lu, C.; Wang, Z. Optimized Feedforward Neural Network Training for Efficient Brillouin
Frequency Shift Retrieval in Fiber. IEEE Access 2019, 7, 68034–68042. [CrossRef]
93. Bishop, C.M.; Nasrabadi, N.M. Pattern Recognition and Machine Learning; Springer: Berlin/Heidelberg, Germany, 2006.
94. Ying, X. An Overview of Overfitting and its Solutions. J. Phys. Conf. Ser. 2019, 1168, 22022. [CrossRef]
95. Prechelt, L. Early Stopping—But when? In Neural Networks: Tricks of the Trade; Springer: Berlin/Heidelberg, Germany, 1998;
pp. 55–69.
96. Zhang, Y.; Yang, Y. Cross-validation for selecting a model selection procedure. J. Econom. 2015, 187, 95–112. [CrossRef]
97. Lu, C.; Liang, Y.; Jia, X.; Fu, Y.; Liang, J.; Wang, Z. Artificial Neural Network for Accurate Retrieval of Fiber Brillouin Frequency
Shift with Non-Local Effects. IEEE Sens. J. 2020, 20, 8559–8569. [CrossRef]
98. Chen, B.; Su, L.; Zhang, Z.; Liu, X.; Dai, T.; Song, M.; Yu, H.; Wang, Y.; Yang, J. Wavelet convolutional neural network for robust
and fast temperature measurements in Brillouin optical time domain reflectometry. Opt. Express 2022, 30, 13942–13958. [CrossRef]
[PubMed]
99. Chang, Y.; Wu, H.; Zhao, C.; Shen, L.; Fu, S.; Tang, M. Distributed Brillouin frequency shift extraction via a convolutional neural
network. Photonics Res. 2020, 8, 690–697. [CrossRef]
100. Buber, E.; Diri, B. Performance Analysis and CPU vs GPU Comparison for Deep Learning. In Proceedings of the 2018 6th
International Conference on Control Engineering & Information Technology (CEIT), Istanbul, Turkey, 25–27 October 2018;
pp. 1–6.
Sensors 2023, 23, 6187 24 of 26
101. Qi, D.; Li, J.; Guan, X.; Chan, C.-K. Dynamic polarization-insensitive BOTDA in direct-detection OFDM with CNN-based BFS
extraction. Opt. Express 2022, 30, 7725–7736. [CrossRef]
102. Caceres, J.N.; Noda, K.; Zhu, G.; Lee, H.; Nakamura, K.; Mizuno, Y. Spatial Resolution Enhancement of Brillouin Optical
Correlation-Domain Reflectometry Using Convolutional Neural Network: Proof of Concept. IEEE Access 2021, 9, 124701–124710.
[CrossRef]
103. Lalam, N.; Venketeswaran, A.; Lu, P.; Buric, M.P.; Schröder, H.; Chen, R.T. Probabilistic deep neural network based signal
processing for Brillouin gain and phase spectrums of vector BOTDA system. In Proceedings of the Optical Interconnects XXI,
Online, 6–11 March 2021.
104. Soto, M.A.; Thévenaz, L. Modeling and evaluating the performance of Brillouin distributed optical fiber sensors. Opt. Express
2013, 21, 31347–31366. [CrossRef]
105. Meng, X.; Zhang, D.; Li, H.; Huang, Y. Efficient two-stage strain/temperature measurement method for BOTDA system based on
Bayesian uncertainty quantification. Measurement 2022, 203, 111966. [CrossRef]
106. Cortes, C.; Vapnik, V. Support-vector networks. Mach. Learn. 1995, 20, 273–297. [CrossRef]
107. Yao, Y.; Mizuno, Y. Dynamic strain measurement in Brillouin optical correlation-domain sensing facilitated by dimensionality
reduction and support vector machine. Opt. Express 2022, 30, 15616–15633. [CrossRef]
108. Zheng, H.; Xiao, F.; Sun, S.; Qin, Y. Brillouin Frequency Shift Extraction Based on AdaBoost Algorithm. Sensors 2022, 22, 3354.
[CrossRef]
109. Freund, Y.; Schapire, R.E. A Decision-Theoretic Generalization of On-Line Learning and an Application to Boosting. J. Comput.
Syst. Sci. 1997, 55, 119–139. [CrossRef]
110. Hastie, T.; Rosset, S.; Zhu, J.; Zou, H. Multi-class AdaBoost. Stat. Interface 2009, 2, 349–360. [CrossRef]
111. Quinlan, J.R. Learning with Continuous Classes. In Proceedings of the Australian Joint Conference on Artificial Intelligence,
Hobart, Australia, 16–18 November 1992; pp. 343–348.
112. Dobra, A.; Gehrke, J. SECRET: A scalable linear regression tree algorithm. In Proceedings of the 8th ACM SIGKDD International
Conference on Knowledge Discovery and Data Mining, Edmonton, AB, Canada, 23–26 July 2002.
113. Fix, E.; Hodges, J.L. Discriminatory Analysis. Nonparametric Discrimination: Consistency Properties. Int. Stat. Rev./Rev. Int. Stat.
1989, 57, 238–247. [CrossRef]
114. Zheng, H.; Peng, G.-D.; He, Z. Extraction of Brillouin frequency shift in Brillouin distributed fiber sensors by neighbors-based
machine learning. In Proceedings of the Advanced Sensor Systems and Applications X, Online, 11–16 October 2020.
115. Zheng, H.; Sun, S.; Qin, Y.; Xiao, F.; Dai, C. Extraction of Brillouin frequency shift from Brillouin gain spectrum in Brillouin
distributed fiber sensors using K nearest neighbor algorithm. Opt. Fiber Technol. 2022, 71, 102903. [CrossRef]
116. Xiao, F.; Lv, M.; Li, X. Fast Measurement of Brillouin Frequency Shift in Optical Fiber Based on a Novel Feedforward Neural
Network. Photonics 2021, 8, 474. [CrossRef]
117. Jolliffe, I.T.; Cadima, J. Principal component analysis: A review and recent developments. Philos. Trans. R. Soc. A Math. Phys. Eng.
Sci. 2016, 374, 20150202. [CrossRef]
118. Abdolrasol, M.G.M.; Hussain, S.M.S.; Ustun, T.S.; Sarker, M.R.; Hannan, M.A.; Mohamed, R.; Ali, J.A.; Mekhilef, S.; Milad, A.
Artificial Neural Networks Based Optimization Techniques: A Review. Electronics 2021, 10, 2689. [CrossRef]
119. Aszemi, N.M.; Dominic, P. Hyperparameter optimization in convolutional neural network using genetic algorithms. Int. J. Adv.
Comput. Sci. Appl. 2019, 10, 269–278. [CrossRef]
120. Yu, T.; Zhu, H. Hyper-parameter optimization: A review of algorithms and applications. arXiv 2020, arXiv:2003.05689.
121. Montavon, G.; Samek, W.; Müller, K.-R. Methods for interpreting and understanding deep neural networks. Digit. Signal Process.
2018, 73, 1–15. [CrossRef]
122. Bansal, M.; Goyal, A.; Choudhary, A. A comparative analysis of K-Nearest Neighbor, Genetic, Support Vector Machine, Decision
Tree, and Long Short Term Memory algorithms in machine learning. Decis. Anal. J. 2022, 3, 100071. [CrossRef]
123. Wang, B.; Guo, N.; Wang, L.; Yu, C.; Lu, C. Denoising and Robust Temperature Extraction for BOTDA Systems based on Denoising
Autoencoder and DNN. In Proceedings of the 26th International Conference on Optical Fiber Sensors, Lausanne, Switzerland,
24–28 September 2018.
124. Wang, B.; Guo, N.; Wang, L.; Yu, C.; Lu, C. Robust and Fast Temperature Extraction for Brillouin Optical Time-Domain Analyzer
by Using Denoising Autoencoder-Based Deep Neural Networks. IEEE Sens. J. 2020, 20, 3614–3620. [CrossRef]
125. Yang, Y.-N.; Dong, Y.; Yu, K. SNR Improvement based on Attention-DNet for Brillouin Distributed Optical Fiber Sensors. In
Proceedings of the 2022 27th OptoElectronics and Communications Conference (OECC) and 2022 International Conference on
Photonics in Switching and Computing (PSC), Toyama, Japan, 3–6 July 2022; pp. 1–3.
126. Wu, H.; Wan, Y.; Tang, M.; Chen, Y.; Zhao, C.; Liao, R.; Chang, Y.; Fu, S.; Shum, P.P.; Liu, D. Real-Time Denoising of Brillouin
Optical Time Domain Analyzer with High Data Fidelity Using Convolutional Neural Networks. J. Light. Technol. 2019, 37,
2648–2653. [CrossRef]
127. Zheng, H.; Yan, Y.; Wang, Y.; Shen, X.; Lu, C. Deep Learning Enhanced Long-Range Fast BOTDA for Vibration Measurement.
J. Light. Technol. 2022, 40, 262–268. [CrossRef]
128. Tian, C.; Xu, Y.; Li, Z.; Zuo, W.; Fei, L.; Liu, H. Attention-guided CNN for image denoising. Neural Netw. 2020, 124, 117–129.
[CrossRef]
129. Zheng, M.; Xu, J.; Shen, Y.; Tian, C.; Li, J.; Fei, L.; Zong, M.; Liu, X. Attention-based CNNs for Image Classification: A Survey.
J. Phys. Conf. Ser. 2022, 2171, 12068. [CrossRef]
130. Wu, H.; Wang, L.; Zhao, Z.; Guo, N.; Shu, C.; Lu, C. Brillouin optical time domain analyzer sensors assisted by advanced image
denoising techniques. Opt. Express 2018, 26, 5126–5139. [CrossRef] [PubMed]
131. Hashemi, M. Enlarging smaller images before inputting into convolutional neural network: Zero-padding vs. interpolation. J. Big
Data 2019, 6, 98. [CrossRef]
132. Azad, A.K.; Wang, L.; Guo, N.; Lu, C.; Tam, H.Y. Temperature sensing in BOTDA system by using artificial neural network.
Electron. Lett. 2015, 51, 1578–1580. [CrossRef]
133. Azad, A.K.; Wang, L.; Guo, N.; Tam, H.Y.; Lu, C. Signal processing using artificial neural network for BOTDA sensor system.
Opt. Express 2016, 24, 6769–6782. [CrossRef] [PubMed]
134. Wang, L.; Wang, B.; Jin, C.; Guo, N.; Yu, C.; Lu, C. Brillouin optical time domain analyzer enhanced by artificial/deep neural
networks. In Proceedings of the 2017 16th International Conference on Optical Communications and Networks (ICOCN), Wuzhen,
China, 7–10 August 2017; pp. 1–3.
135. Wang, J.; Li, Y.; Liao, J. Temperature extraction for Brillouin optical fiber sensing system based on extreme learning machine.
Opt. Commun. 2019, 453, 124418. [CrossRef]
136. Cao, Z.Y.; Guo, N.; Li, M.H.; Yu, K.L.; Gao, K.Q. Back propagation neural network based signal acquisition for Brillouin
distributed optical fiber sensors. Opt. Express 2019, 27, 4549–4561. [CrossRef]
137. Madaschi, A.; Morosi, J.; Brunero, M.; Boffi, P. Enhanced Neural Network Implementation for Temperature Profile Extraction in
Distributed Brillouin Scattering-Based Sensors. IEEE Sens. J. 2022, 22, 6871–6878. [CrossRef]
138. Li, Y.; Wang, J. Optimized neural network for temperature extraction from Brillouin scattering spectra. Opt. Fiber Technol. 2020,
58, 102314. [CrossRef]
139. Motil, A.; Hadar, R.; Sovran, I.; Tur, M. Gain dependence of the linewidth of Brillouin amplification in optical fibers. Opt. Express
2014, 22, 27535–27541. [CrossRef]
140. Wang, B.; Guo, N.; Khan, F.N.; Azad, A.K.; Wang, L.; Yu, C.; Lu, C. Extraction of temperature distribution using deep neural
networks for BOTDA sensing system. In Proceedings of the 2017 Conference on Lasers and Electro-Optics Pacific Rim (CLEO-PR),
Singapore, 31 July–4 August 2017; pp. 1–4.
141. Wang, M.H.; Sui, Y.; Zhou, W.N.; An, X.; Dong, W. AIoT enabled resampling filter for temperature extraction of the Brillouin gain
spectrum. Opt. Express 2022, 30, 36110–36121. [CrossRef]
142. Wang, M.H.; Sui, Y.; Zhou, W.N.; Dong, W.; Zhang, X.D. Sweep frequency method with variance weight probability for
temperature extraction of the Brillouin gain spectrum based on an artificial neural network. Opt. Express 2021, 29, 28994–29006.
[CrossRef]
143. Zhang, Y.; Li, Y.; Cheng, L.; Yu, L.; Zhu, H.; Luo, B.; Zou, X. Fast temperature extraction via Echo State Network for BOTDA
sensors. In Proceedings of the Asia Communications and Photonics Conference/International Conference on Information
Photonics and Optical Communications 2020 (ACP/IPOC), Beijing, China, 24–27 October 2020.
144. Zhou, H.; Zhu, H.; Zhang, Y.; Huang, M.; Li, G.; Yang, Y. Fast and accurate temperature extraction via general regression neural
network for BOTDA sensors. In Proceedings of the 12th International Conference on Information Optics and Photonics, Xi’an,
China, 23–26 July 2021.
145. Kumar, S.; Tiwari, P.; Zymbler, M. Internet of Things is a revolutionary approach for future technology enhancement: A review.
J. Big Data 2019, 6, 111. [CrossRef]
146. Zhang, Y.; Yu, L.; Hu, Z.; Cheng, L.; Sui, H.; Zhu, H.; Li, G.; Luo, B.; Zou, X.; Yan, L. Ultrafast and Accurate Temperature Extraction
via Kernel Extreme Learning Machine for BOTDA Sensors. J. Light. Technol. 2021, 39, 1537–1543. [CrossRef]
147. Huang, G.; Huang, G.-B.; Song, S.; You, K. Trends in extreme learning machines: A review. Neural Netw. 2015, 61, 32–48.
[CrossRef] [PubMed]
148. Huang, G.-B.; Zhou, H.; Ding, X.; Zhang, R. Extreme Learning Machine for Regression and Multiclass Classification. IEEE
Trans. Syst. Man Cybern. Part B 2012, 42, 513–529. [CrossRef] [PubMed]
149. Wu, H.; Wang, L.; Guo, N.; Shu, C.; Lu, C. Brillouin Optical Time-Domain Analyzer Assisted by Support Vector Machine for
Ultrafast Temperature Extraction. J. Light. Technol. 2017, 35, 4159–4167. [CrossRef]
150. Wu, H.; Wang, L.; Zhao, Z.; Shu, C.; Lu, C. Support Vector Machine based Differential Pulse-width Pair Brillouin Optical Time
Domain Analyzer. IEEE Photonics J. 2018, 10, 6802911. [CrossRef]
151. Nordin, N.D.; Abdullah, F.; Zan, M.S.D.; A Bakar, A.A.; Krivosheev, A.I.; Barkov, F.L.; Konstantinov, Y.A. Improving Prediction
Accuracy and Extraction Precision of Frequency Shift from Low-SNR Brillouin Gain Spectra in Distributed Structural Health
Monitoring. Sensors 2022, 22, 2677. [CrossRef]
152. Nordin, N.D.; Abdullah, F.; Zan, M.S.D.; Ismail, A.; Jamaludin, M.Z.; Bakar, A.A.A. Fast temperature extraction approach for
BOTDA using Generalized Linear Model. In Proceedings of the 2020 IEEE 8th International Conference on Photonics (ICP), Kota
Bharu, Malaysia, 12 May–30 June 2020; pp. 13–14.
153. Nordin, N.D.; Zan, M.S.D.; Abdullah, F. Generalized linear model for enhancing the temperature measurement performance in
Brillouin optical time domain analysis fiber sensor. Opt. Fiber Technol. 2020, 58, 102298. [CrossRef]
154. Nordin, N.D.; Zan, M.S.D.; Abdullah, F. Comparative Analysis on the Deployment of Machine Learning Algorithms in the
Distributed Brillouin Optical Time Domain Analysis (BOTDA) Fiber Sensor. Photonics 2020, 7, 79. [CrossRef]
155. Murphy, K.P. Machine Learning: A Probabilistic Perspective; MIT Press: Cambridge, MA, USA, 2012.
156. Song, Q.; Zhang, C.; Tang, G.; Ansari, F. Deep learning method for detection of structural microcracks by Brillouin scattering
based distributed optical fiber sensors. Smart Mater. Struct. 2020, 29, 75008. [CrossRef]
157. Wei, C.; Deng, Q.; Yin, Y.; Yan, M.; Lu, M.; Deng, K. A Machine Learning Study on Internal Force Characteristics of the Anti-Slide
Pile Based on the DOFS-BOTDA Monitoring Technology. Sensors 2022, 22, 2085. [CrossRef]
158. Song, Q.; Yan, G.; Tang, G.; Ansari, F. Robust principal component analysis and support vector machine for detection of
microcracks with distributed optical fiber sensors. Mech. Syst. Signal Process. 2021, 146, 107019. [CrossRef]
159. Zhang, L.; Shi, B.; Zhu, H.; Yu, X.; Wei, G. A machine learning method for inclinometer lateral deflection calculation based on
distributed strain sensing technology. Bull. Eng. Geol. Environ. 2020, 79, 3383–3401. [CrossRef]
160. Ruiz-Lombera, R.; Serrano, J.M.; Lopez-Higuera, J.M. Automatic strain detection in a Brillouin Optical Time Domain sensor using
Principal Component Analysis and Artificial Neural Networks. In Proceedings of the IEEE SENSORS 2014 Proceedings, Valencia,
Spain, 2–5 November 2014; pp. 1539–1542.
161. Lv, T.; Ye, X.; Zheng, Y.; Ge, Z.; Xu, Z.; Sun, X. Error Estimation of BFS Extraction with Optimized Neural Network & Frequency
Scanning Range. J. Light. Technol. 2021, 39, 5149–5155.
162. Elshawi, R.; Wahab, A.; Barnawi, A.; Sakr, S. DLBench: A comprehensive experimental evaluation of deep learning frameworks.
Clust. Comput. 2021, 24, 2017–2038. [CrossRef]
163. Yao, Y.; Set, S.Y.; Yamashita, S. Proposal of signal processing based on machine learning in Brillouin optical correlation domain
analysis/reflectometry. In Proceedings of the 2017 22nd Microoptics Conference (MOC), Tokyo, Japan, 19–22 November 2017;
pp. 228–229.
164. Yao, Y.; Mizuno, Y. Neural network-assisted signal processing in Brillouin optical correlation-domain sensing for potential
high-speed implementation. Opt. Express 2021, 29, 35474–35489. [CrossRef]
165. Chen, X.; Yu, H.; Huang, W. A high accurate fitting algorithm for Brillouin scattering spectrum of distributed sensing systems
based on LSSVM networks. In Proceedings of the 2021 International Conference on Electronic Information Engineering and
Computer Science (EIECS), Changchun, China, 23–26 September 2021; pp. 107–110.
166. Wan, D.; Shan, L.; Xi, L.; Xiao, Z.; Zhang, Y.-A.; Yuan, X.; Zhang, X.; Zhang, H. An improved Lorentz fitting algorithm for BOTDR
using SVM model to capture the main peak of power cumulative average data. Opt. Fiber Technol. 2022, 74, 103082. [CrossRef]
167. Karapanagiotis, C. Evaluation of the generalization performance of a CNN-assisted BOFDA system. In Proceedings of the Sensors
and Measuring Systems; 21st ITG/GMA-Symposium, Nuremberg, Germany, 10–11 May 2022; pp. 1–4.
168. Gyger, F.; Yang, Z.; Soto, M.A.; Yang, F.; Tow, K.H.; Thévenaz, L. High Signal-to-Noise Ratio Stimulated Brillouin Scattering Gain
Spectrum Measurement. In Proceedings of the 26th International Conference on Optical Fiber Sensors, Lausanne, Switzerland,
24–28 September 2018.
169. Rasmussen, C.E.; Williams, C.K.I. Gaussian Processes for Machine Learning; The MIT Press: Cambridge, MA, USA, 2005.
170. Karapanagiotis, C.; Hicke, K.; Krebber, K. Temperature and humidity discrimination in Brillouin distributed fiber optic sensing
using machine learning algorithms. In Proceedings of the Optical Sensing and Detection VII, Strasbourg, France, 3–7 April 2022;
Online, 9–15 May 2022.
171. Karapanagiotis, C.; Hicke, K.; Wosniok, A.; Krebber, K. Distributed humidity fiber-optic sensor based on BOFDA using a simple
machine learning approach. Opt. Express 2022, 30, 12484–12494. [CrossRef]
172. Donoho, D.L. Compressed sensing. IEEE Trans. Inf. Theory 2006, 52, 1289–1306. [CrossRef]
173. Zhou, D.-P.; Peng, W.; Chen, L.; Bao, X. Brillouin optical time-domain analysis via compressed sensing. Opt. Lett. 2018, 43,
5496–5499. [CrossRef] [PubMed]
174. Dong, Y.; Yang, Y.-N.; Azad, A.K.; Yang, Z.; Yu, K.; Zhao, S. Compressed Sensing Based on K-SVD for Brillouin Optical Fiber
Distributed Sensors. IEEE Sens. J. 2022, 22, 16414–16421. [CrossRef]
175. Zheng, H.; Yan, Y.; Zhao, Z.; Zhu, T.; Zhang, J.; Guo, N.; Lu, C. Accelerated Fast BOTDA Assisted by Compressed Sensing and
Image Denoising. IEEE Sens. J. 2021, 21, 25723–25729. [CrossRef]
176. Calderbank, R. Compressed Learning: Universal Sparse Dimensionality Reduction and Learning in the Measurement Domain.
Preprint 2009. Available online: https://www.semanticscholar.org/paper/Compressed-Learning-%3A-Universal-Sparse-Reduction-in-Calderbank/627c14fe9097d459b8fd47e8a901694198be9d5d#citing-papers (accessed on 31 May 2023).