Detection of Respiratory Infections Using RGB-Infrared Sensors On Portable Device
Manuscript received June 18, 2020; accepted June 21, 2020. Date of publication June 24, 2020; date of current version October 16, 2020. This work was supported in part by the National Natural Science Foundation of China under Grant 61901172, Grant 61831015, and Grant U1908210, in part by the Shanghai Sailing Program under Grant 19YF1414100, in part by the Science and Technology Commission of Shanghai Municipality (STCSM) under Grant 18DZ2270700 and Grant 19511120100, in part by the Foundation of Key Laboratory of Artificial Intelligence, Ministry of Education under Grant AI2019002, and in part by the Fundamental Research Funds for the Central Universities. The associate editor coordinating the review of this article and approving it for publication was Dr. Ioannis Raptis. (Zheng Jiang and Menghan Hu contributed equally to this work.) (Corresponding authors: Guangtao Zhai; Yong Lu.)

Zheng Jiang, Zhongpai Gao, Lei Fan, and Guangtao Zhai are with the Institute of Image Communication and Information Processing, Shanghai Jiao Tong University, Shanghai 200240, China, and also with the Key Laboratory of Artificial Intelligence, Ministry of Education, Shanghai 200240, China (e-mail: zhaiguangtao@sjtu.edu.cn).

Menghan Hu is with the Shanghai Key Laboratory of Multidimensional Information Processing, East China Normal University, Shanghai 200062, China, and also with the Key Laboratory of Artificial Intelligence, Ministry of Education, Shanghai 200240, China.

Ranran Dai is with the Department of Pulmonary and Critical Care Medicine, Ruijin Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai 200240, China.

Yaling Pan and Yong Lu are with the Department of Radiology, Ruijin Hospital Luwan Branch, School of Medicine, Shanghai Jiao Tong University, Shanghai 200240, China (e-mail: 18917762053@163.com).

Wei Tang is with the Department of Respiratory Disease, Ruijin Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai 200240, China.

Digital Object Identifier 10.1109/JSEN.2020.3004568

I. INTRODUCTION

TO TACKLE the outbreak of the COVID-19 pandemic, early control is essential. Among all the control measures, efficient and safe identification of potential patients is the most important part. Existing research shows that the human physiological state can be perceived through breathing [1], which means respiratory signals are vital signs that can reflect human health conditions to a certain extent [2]. Much clinical literature suggests that abnormal respiratory symptoms may be important factors in the diagnosis of some specific diseases [3]. Recent studies have found that COVID-19 patients have obvious respiratory symptoms such as shortness of breath, fever, tiredness, and dry cough [4], [5]. Among those symptoms, atypical or irregular breathing is considered one of the early signs according to recent research [6]. For many people, early mild respiratory symptoms are difficult to recognize. Therefore, through the measurement of respiration conditions, potential COVID-19 patients can be screened to some extent. This may play an auxiliary diagnostic role, thus helping to find potential patients as early as possible.

Traditional respiration measurement requires the attachment of sensors to the patient's body [7]; respiration is monitored through the movement of the chest or abdomen.
© IEEE 2020. This article is free to access and download, along with rights for full text and data mining, re-use and analysis
JIANG et al.: DETECTION OF RESPIRATORY INFECTIONS USING RGB-INFRARED SENSORS 13675
Contact measurement equipment is bulky, expensive, and time-consuming. More importantly, the contact involved in measurement may increase the risk of spreading infectious diseases such as COVID-19. Therefore, non-contact measurement is more suitable for the current situation. In recent years, many non-contact respiration measurement methods have been developed based on image sensors, Doppler radar [8], depth cameras [9] and thermal cameras [10]. Considering factors such as safety, stability and price, thermal imaging is the measurement technology most suitable for wide deployment. So far, thermal imaging has been used as a monitoring technology in a wide range of medical fields, such as the estimation of heart rate [11] and breathing rate [12]–[14]. Another issue is that many existing respiration measurement devices are large and immovable. Given the worldwide pandemic, portable and intelligent screening equipment is required to meet the needs of large-scale screening and other application scenarios in real time. For thermal imaging-based respiration measurement, the nostril and mouth regions are the only regions of interest, since only these two parts have periodic heat exchange between the body and the outside environment. However, until now, researchers have barely considered measuring thermal respiration data for people wearing masks. During an epidemic of infectious disease, masks may effectively suppress the spread of the virus according to recent studies [15], [16]. Developing a respiration measurement method for people wearing masks has therefore become quite practical. In this study, we develop a portable and intelligent health screening device that uses thermal imaging to extract respiration data from masked people, which is then used for health screening classification via a deep learning architecture.

In classification tasks, deep learning has achieved state-of-the-art performance in most research areas. Compared with traditional classifiers, classifiers based on deep learning can automatically identify the relevant features and their correlations rather than requiring features to be extracted manually. Recently, many researchers have developed detection methods for COVID-19 cases through medical imaging techniques such as chest X-ray imaging and chest CT imaging [17]–[19]. These studies have proved that deep learning can achieve high accuracy in the detection of COVID-19. By their nature, however, these methods can only be used for the examination of highly suspected patients in hospitals, and may not meet the requirements of larger-scale screening in public places. Therefore, this paper proposes a scheme based on breath detection via a thermal camera.

For breathing tasks, deep learning-based algorithms can also better extract the corresponding features, such as breathing rate and inhale-to-exhale ratio, and make more accurate predictions [20]–[23]. Recently, many researchers have made use of deep learning to analyze the respiratory process. Cho et al. used a convolutional neural network (CNN) to analyze human breathing parameters to determine the degree of nervousness through thermal imaging [24]. Romero et al. applied a language model to detect acoustic events in sleep-disordered breathing through related sounds [25]. Wang et al. utilized deep learning and a depth camera to classify abnormal respiratory patterns in real time and achieved excellent results [9].

In this paper, we propose a remote, portable and intelligent health screening system based on respiratory data for the pre-screening and auxiliary diagnosis of respiratory diseases such as COVID-19. To be more practical in situations where people often choose to wear masks, a breathing data capture method for people wearing masks is introduced. After extracting breathing data from the videos obtained by the thermal camera, a deep neural network performs the classification between healthy and abnormal respiration conditions. To verify the robustness of our algorithm and the effectiveness of the proposed equipment, we analyze the influence of mask type, measurement distance and measurement angle on breathing data collection.

The main contributions of this paper are threefold. First, we combine face recognition technology with dual-mode imaging to build a respiratory data extraction method for people wearing masks, which is quite essential for the current situation. Based on our dual-camera algorithm, respiration data is successfully obtained from masked facial thermal videos. Second, we propose a classification method that judges abnormal respiratory states with a deep learning framework. Finally, based on the two contributions mentioned above, we have implemented a non-contact and efficient health screening system for respiratory infections using data collected from the hospital, which may contribute to finding possible cases of COVID-19 and keeping a second spread of SARS-CoV-2 under control.

II. METHOD

A brief introduction to the proposed respiration condition screening method follows. We first use the portable and intelligent screening device to capture thermal videos and the corresponding RGB videos. During data collection, we also produce a simple real-time screening result. After obtaining the thermal videos, the first step is to extract respiration data from the faces in them. During the extraction process, we use a face detection method to locate people's masked areas. A region of interest (ROI) selection algorithm is then proposed to find the region of the mask that best represents the breathing signal. Finally, we use a bidirectional GRU neural network with an attention mechanism (BiGRU-AT) for the classification task on the input respiration data. A key point of our method is collecting respiration data from facial thermal videos, which has been proved effective by many previous studies [26]–[28].

A. Overview of the Portable and Intelligent Health Screening System for Respiratory Infections

The data collection by the system is shown in Fig. 1. The whole screening system includes a FLIR One thermal camera, an Android smartphone and a corresponding application we have written, which is used for data acquisition and simple instant analysis. Our screening equipment, whose main advantage is portability, can easily be applied to measure abnormal breathing in many on-site detection scenarios.
13676 IEEE SENSORS JOURNAL, VOL. 20, NO. 22, NOVEMBER 15, 2020
Fig. 2. The pipeline of the respiration data extraction: a) record the RGB video and thermal video through a FLIR one thermal camera; b) use face
detection method to detect face and mask region in the RGB frames and then map the region to the thermal frames; c) capture the ROIs in the
thermal frames of mask region by tracking method; d) extract the respiration data from the ROIs.
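The pipeline of Fig. 2 can be condensed into a short sketch. This is an illustrative reconstruction, not the authors' code: the RGB face/mask detector is abstracted away as a bounding box, and `extract_respiration` implements the block-variance ROI idea formalized later in Section II (Eqs. (2)–(3)) with NumPy.

```python
import numpy as np

def extract_respiration(thermal_frames, box, block=8):
    """Illustrative sketch of steps (c)-(d) in Fig. 2: inside the detected
    mask region, average each block x block patch per frame, pick the patch
    whose intensity varies most over time (cf. Eqs. (2)-(3)), and return
    its time series as the respiration signal."""
    x0, y0, x1, y1 = box                        # mask bounding box from the RGB detector
    crops = np.stack([f[y0:y1, x0:x1] for f in thermal_frames]).astype(float)
    t, h, w = crops.shape
    h, w = h - h % block, w - w % block         # trim so the region tiles evenly
    blocks = crops[:, :h, :w].reshape(t, h // block, block, w // block, block)
    series = blocks.mean(axis=(2, 4))           # (t, n_rows, n_cols): per-block mean intensity
    variance = series.var(axis=0)               # temporal variance of every block, Eq. (2)
    i, j = np.unravel_index(variance.argmax(), variance.shape)  # argmax block, Eq. (3)
    return series[:, i, j]                      # respiration signal s(t)

# Synthetic check: 60 thermal frames with one periodically "breathing" patch.
rng = np.random.default_rng(0)
frames = rng.normal(30.0, 0.05, size=(60, 64, 64))
breath = 2.0 * np.sin(2 * np.pi * np.arange(60) / 20)   # ~3 breathing cycles
frames[:, 24:32, 40:48] += breath[:, None, None]        # inject signal into one block
signal = extract_respiration(frames, box=(16, 16, 56, 56))
```

On real data the bounding box would come from the face/mask detection on the RGB frames and be mapped into the thermal frames, per step (b) of Fig. 2.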
Fig. 3. The structure of the BiGRU-AT network. The network consists of four layers: the input layer, the bidirectional layer, the attention layer and the output layer. The output is a 2-dimensional tensor indicating a normal or abnormal respiration condition.
calculated as shown in Eq. 2, where μ stands for the mean value of s̄(t):

$$\sigma_s^2(n) = \frac{1}{T}\sum_{0 < t < T}\left(\bar{s}(t) - \mu\right)^2 \quad (2)$$

Since respiration is a periodic signal spreading out from the nostril area, the block with the largest variance can be taken as the position where the heat changes most in both frequency and value within the mask, and it therefore best represents the breathing signal in the masked region. We adjust the block size according to the size of the masked region. For a masked region with N blocks, the final ROI is selected by:

$$ROI = \mathop{\arg\max}_{1 \le n < N} \sigma_s^2(n) \quad (3)$$

For each thermal video, we traverse all possible blocks in the mask region of each frame and find the ROI of each frame by the method above. The respiration data is then defined as s̄(t) (0 < t < T), the pixel intensity of the ROIs across all frames.

D. BiGRU-AT Neural Network

We apply a BiGRU-AT neural network to the classification task of judging whether the respiration condition is healthy or not, as shown in Fig. 3. The input of the network is the respiration data obtained by our extraction method. Since the respiratory data is a time series, the task can be regarded as a time-series classification problem. We therefore choose a Gated Recurrent Unit (GRU) network with bidirection and an attention layer for the sequence prediction task.

Among deep learning structures, the recurrent neural network (RNN) is a type of neural network specially designed to process time-series data [31]. For a time step t, the RNN model can be represented by:

$$h^{(t)} = \phi\left(U x^{(t)} + W h^{(t-1)} + b\right) \quad (4)$$
$$o^{(t)} = V h^{(t)} + c \quad (5)$$
$$y^{(t)} = \sigma\left(o^{(t)}\right) \quad (6)$$

where x(t), h(t) and o(t) stand for the input state, hidden state and output at time step t, respectively. U, W and V are parameters obtained by the training procedure, b and c are biases, and σ and φ are activation functions. The final prediction is y(t).

The long short-term memory (LSTM) network was developed on the basis of RNN [32]. Compared with the RNN, which can only memorize and analyze short-term information, it can process relatively long-term information and is suitable for problems with short-term delays or long time intervals. Based on LSTM, many related structures have been proposed in recent years [33]. GRU is a simplified LSTM which merges the three gates of LSTM (forget, input and output) into two gates (update and reset) [34]. For tasks with little data, GRU may be more suitable than LSTM since it has fewer parameters. In our task, since the input of the neural network is only the respiration data in time sequence, the GRU network may perform better than an LSTM network. The structure of GRU can be expressed by the following equations:

$$r_t = \sigma\left(W_r \cdot [h_{t-1}, x_t] + b_r\right) \quad (7)$$
$$z_t = \sigma\left(W_z \cdot [h_{t-1}, x_t] + b_z\right) \quad (8)$$
$$\tilde{h}_t = \tanh\left(W_{\tilde{h}} \cdot [r_t * h_{t-1}, x_t] + b_h\right) \quad (9)$$
$$h_t = (1 - z_t) * h_{t-1} + z_t * \tilde{h}_t \quad (10)$$

The attention mechanism has achieved great success in many research areas. The structure of the attention layer is:

$$u_t = \tanh\left(W_u h_t + b_w\right) \quad (12)$$
$$\alpha_t = \frac{\exp\left(u_t^\top u_w\right)}{\sum_t \exp\left(u_t^\top u_w\right)} \quad (13)$$
$$s = \sum_t \alpha_t h_t \quad (14)$$

where h_t represents the BiGRU layer output at time step t, which is bidirectional, and W_u, b_w and u_w are parameters learned during training. Eq. 13 applies a softmax over the scores to obtain the weight α_t of each step t, so the output of the attention layer, s, is a weighted combination of all the steps from the BiGRU. By applying another softmax function to the output s, we get the final prediction of the classification task. The structure of the whole network is shown in Fig. 3.
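To make Eqs. (7)–(10) and (12)–(14) concrete, the forward pass of the bidirectional GRU with attention pooling can be written out in plain NumPy. This is a minimal sketch with randomly initialized weights, not the trained network; dimensions and initialization scales are illustrative choices.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GRUCell:
    """One GRU direction, implementing Eqs. (7)-(10)."""
    def __init__(self, input_dim, hidden_dim, rng):
        d = hidden_dim + input_dim
        self.Wr = 0.1 * rng.standard_normal((hidden_dim, d))   # reset gate
        self.Wz = 0.1 * rng.standard_normal((hidden_dim, d))   # update gate
        self.Wh = 0.1 * rng.standard_normal((hidden_dim, d))   # candidate state
        self.br, self.bz, self.bh = (np.zeros(hidden_dim) for _ in range(3))
        self.H = hidden_dim

    def run(self, xs):
        h, out = np.zeros(self.H), []
        for x in xs:
            hx = np.concatenate([h, x])
            r = sigmoid(self.Wr @ hx + self.br)                               # Eq. (7)
            z = sigmoid(self.Wz @ hx + self.bz)                               # Eq. (8)
            h_tilde = np.tanh(self.Wh @ np.concatenate([r * h, x]) + self.bh) # Eq. (9)
            h = (1 - z) * h + z * h_tilde                                     # Eq. (10)
            out.append(h)
        return np.stack(out)                                                  # (T, H)

def attention(hs, Wu, bw, uw):
    """Attention pooling over the BiGRU outputs, Eqs. (12)-(14)."""
    u = np.tanh(hs @ Wu.T + bw)                  # Eq. (12)
    scores = u @ uw
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()                         # Eq. (13): softmax over time steps
    return alpha, alpha @ hs                     # Eq. (14): s = sum_t alpha_t * h_t

rng = np.random.default_rng(1)
T, H = 120, 16
breath = np.sin(2 * np.pi * np.arange(T) / 30)[:, None]   # toy respiration series, input_dim = 1
fwd, bwd = GRUCell(1, H, rng), GRUCell(1, H, rng)
# Run backward direction on the reversed sequence, then re-align in time.
hs = np.concatenate([fwd.run(breath), bwd.run(breath[::-1])[::-1]], axis=1)   # (T, 2H)
alpha, s = attention(hs, 0.1 * rng.standard_normal((2 * H, 2 * H)),
                     np.zeros(2 * H), 0.1 * rng.standard_normal(2 * H))
logits = 0.1 * rng.standard_normal((2, 2 * H)) @ s        # output layer -> {normal, abnormal}
probs = np.exp(logits) / np.exp(logits).sum()             # final softmax
```

In practice the weights would be trained end to end with a cross-entropy loss; a framework implementation (e.g. `torch.nn.GRU` with `bidirectional=True` plus a custom attention layer) follows the same equations.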
TABLE I
EXPERIMENTAL RESULTS ON THE TEST SET
Fig. 5. Confusion matrices of the four models. Each row is the number
of real labels and each column is the number of predicted labels. The left
one is the result of BiGRU-AT, the right one is the result of LSTM.
Fig. 4(c) and Fig. 4(d) represent the normal respiratory pattern, called eupnea, from healthy participants. By comparison, we can find that the respiration of healthy people is strongly periodic and evenly distributed, while abnormal respiratory data tend to be more irregular. Generally speaking, most abnormal breathing data from respiratory infections have a faster frequency and irregular amplitude.

B. Experimental Result

The experimental results are shown in Table I. We consider four evaluation metrics, viz. Accuracy, Sensitivity, Specificity and F1. To measure the performance of our model, we compare its results with three other models: GRU-AT, BiLSTM-AT and LSTM. The sensitivity reaches 90.23%, which is far higher than the specificity of 76.31%. This may have a positive effect on the screening of potential patients, since the false negative rate is relatively low. Our method performs better than the other networks in all evaluation metrics, with the only exception being the sensitivity value of GRU-AT. The comparison demonstrates that the attention mechanism performs well in keeping important node features in the time series of breathing data, since the networks with an attention layer all achieve better results than LSTM. Another observation is that GRU-based networks achieve better results than LSTM-based networks. This may be because our data set is relatively small and cannot meet the data demands of LSTM-based networks; GRU-based networks require less data than LSTM and perform better in our respiration condition classification task.

To examine the classification of the respiratory state in detail, we plotted the confusion matrices of the four models, as demonstrated in Fig. 5. As can be seen from the results, the performance improvement of BiGRU-AT compared to LSTM lies mainly in the accuracy on the negative class. This is because many scatter-like abnormalities in the time series of abnormal breathing are better recognized by the attention mechanism. Besides, the misclassification rate of all four networks is relatively high to some extent, which may be because many positive samples do not have typical respiratory infection characteristics.

C. Analysis

During the data collection process, all testers were required to be about 50 cm away from the camera and to face the camera directly to ensure data consistency. However, in real-time conditions, the distance and angle of the testers towards the device cannot be so precise. Therefore, in this analysis section, we give three comparisons from different aspects to prove the robustness of our algorithm and device.

1) Influence of Mask Types on Respiratory Data: To measure the robustness of our breathing data acquisition algorithm and the effectiveness of the proposed portable device, we analyze the breathing data of the same person wearing different masks. We design three mask-wearing scenarios that cover most situations: wearing one surgical mask (blue line); wearing one KN95 (N95) mask (red line); and wearing two surgical masks (green line). The results are shown in Fig. 6. It can be seen from the experimental results that no matter what kind of mask is worn, or even two masks, the respiratory data can be well recognized. This proves the stability of our algorithm and device. However, since different masks have different thermal insulation capabilities, the average breathing temperature may vary as the mask changes. To minimize this error, respiratory data are normalized before being input into the neural network.
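The four metrics reported in Table I follow directly from confusion-matrix counts like those in Fig. 5. A small sketch with purely hypothetical counts (not the paper's actual numbers), where the positive class is abnormal respiration:

```python
def metrics(tp, fn, fp, tn):
    """Accuracy, sensitivity (recall), specificity and F1 from
    confusion-matrix counts; positive = abnormal respiration."""
    accuracy    = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)          # a low false-negative rate drives this up
    specificity = tn / (tn + fp)
    precision   = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return accuracy, sensitivity, specificity, f1

# Hypothetical counts for illustration only.
acc, sen, spe, f1 = metrics(tp=40, fn=10, fp=15, tn=35)
# acc = 0.75, sen = 0.80, spe = 0.70, f1 = 16/21 ≈ 0.762
```

Prioritizing sensitivity over specificity, as discussed above, matches the screening setting: a missed patient (false negative) is costlier than a false alarm.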
Fig. 6. The raw respiratory data obtained through the breathing data extraction algorithm with different types of masks.

Fig. 8. The raw respiratory data obtained while the rotation angle varies from 45 degrees to 0 degrees at a stable speed. The blue line stands for rotation in the pitch axis and the red line for rotation in the yaw axis.
abnormal breathing in many scenarios such as communities, campuses and hospitals, contributing to distinguishing possible cases and thus slowing down the spread of the virus.

In future research, while preserving portability, we plan to use a more stable algorithm to minimize the effects of different masks on the measurement of breathing conditions. Besides, temperature may be taken into consideration to achieve a higher detection accuracy for respiratory infections.

REFERENCES

[1] M. A. Cretikos, R. Bellomo, K. Hillman, J. Chen, S. Finfer, and A. Flabouris, "Respiratory rate: The neglected vital sign," Med. J. Aust., vol. 188, no. 11, pp. 657–659, Jun. 2008.
[2] A. D. Droitcour et al., "Non-contact respiratory rate measurement validation for hospitalized patients," in Proc. Annu. Int. Conf. IEEE Eng. Med. Biol. Soc., Sep. 2009, pp. 4812–4815.
[3] R. Boulding, R. Stacey, R. Niven, and S. J. Fowler, "Dysfunctional breathing: A review of the literature and proposal for classification," Eur. Respiratory Rev., vol. 25, no. 141, pp. 287–294, Sep. 2016.
[4] Z. Xu et al., "Pathological findings of COVID-19 associated with acute respiratory distress syndrome," Lancet Respiratory Med., vol. 8, no. 4, pp. 420–422, Apr. 2020.
[5] C. Sohrabi et al., "World Health Organization declares global emergency: A review of the 2019 novel coronavirus (COVID-19)," Int. J. Surg., vol. 76, pp. 71–76, Apr. 2020.
[6] E. J. Chow et al., "Symptom screening at illness onset of health care personnel with SARS-CoV-2 infection in King County, Washington," JAMA, vol. 323, no. 20, p. 2087, May 2020.
[7] F. Q. Al-Khalidi, R. Saatchi, D. Burke, H. Elphick, and S. Tan, "Respiration rate monitoring methods: A review," Pediatric Pulmonol., vol. 46, no. 6, pp. 523–529, Jun. 2011.
[8] J. Kranjec, S. Beguš, J. Drnovšek, and G. Geršak, "Novel methods for noncontact heart rate measurement: A feasibility study," IEEE Trans. Instrum. Meas., vol. 63, no. 4, pp. 838–847, Apr. 2014.
[9] Y. Wang, M. Hu, Q. Li, X.-P. Zhang, G. Zhai, and N. Yao, "Abnormal respiratory patterns classifier may contribute to large-scale screening of people infected with COVID-19 in an accurate and unobtrusive manner," 2020, arXiv:2002.05534. [Online]. Available: http://arxiv.org/abs/2002.05534
[10] M.-H. Hu, G.-T. Zhai, D. Li, Y.-Z. Fan, X.-H. Chen, and X.-K. Yang, "Synergetic use of thermal and visible imaging techniques for contactless and unobtrusive breathing measurement," J. Biomed. Opt., vol. 22, no. 3, Mar. 2017, Art. no. 036006.
[11] M. Hu et al., "Combination of near-infrared and thermal imaging techniques for the remote and simultaneous measurements of breathing and heart rates under sleep situation," PLoS ONE, vol. 13, no. 1, Jan. 2018, Art. no. e0190466.
[12] C. B. Pereira, X. Yu, M. Czaplik, R. Rossaint, V. Blazek, and S. Leonhardt, "Remote monitoring of breathing dynamics using infrared thermography," Biomed. Opt. Express, vol. 6, no. 11, pp. 4378–4394, 2015.
[13] G. F. Lewis, R. G. Gatto, and S. W. Porges, "A novel method for extracting respiration rate and relative tidal volume from infrared thermography," Psychophysiology, vol. 48, no. 7, pp. 877–887, Jul. 2011.
[14] L. Chen, N. Liu, M. Hu, and G. Zhai, "RGB-thermal imaging system collaborated with marker tracking for remote breathing rate measurement," in Proc. IEEE Vis. Commun. Image Process. (VCIP), Dec. 2019, pp. 1–4.
[15] S. Feng, C. Shen, N. Xia, W. Song, M. Fan, and B. J. Cowling, "Rational use of face masks in the COVID-19 pandemic," Lancet Respiratory Med., vol. 8, no. 5, pp. 434–436, May 2020.
[16] C. C. Leung, T. H. Lam, and K. K. Cheng, "Mass masking in the COVID-19 epidemic: People need guidance," Lancet, vol. 395, no. 10228, p. 945, Mar. 2020.
[17] L. Wang and A. Wong, "COVID-Net: A tailored deep convolutional neural network design for detection of COVID-19 cases from chest X-ray images," 2020, arXiv:2003.09871. [Online]. Available: http://arxiv.org/abs/2003.09871
[18] O. Gozes et al., "Rapid AI development cycle for the coronavirus (COVID-19) pandemic: Initial results for automated detection & patient monitoring using deep learning CT image analysis," 2020, arXiv:2003.05037. [Online]. Available: http://arxiv.org/abs/2003.05037
[19] L. Li et al., "Artificial intelligence distinguishes COVID-19 from community acquired pneumonia on chest CT," Radiology, Mar. 2020, Art. no. 200905.
[20] J. Chauhan, J. Rajasegaran, S. Seneviratne, A. Misra, A. Seneviratne, and Y. Lee, "Performance characterization of deep learning models for breathing-based authentication on resource-constrained devices," Proc. ACM Interact., Mobile, Wearable Ubiquitous Technol., vol. 2, no. 4, pp. 1–24, Dec. 2018.
[21] B. Liu et al., "Deep learning versus professional healthcare equipment: A fine-grained breathing rate monitoring model," Mobile Inf. Syst., vol. 2018, pp. 1–9, Jan. 2018.
[22] Q. Zhang, X. Chen, Q. Zhan, T. Yang, and S. Xia, "Respiration-based emotion recognition with deep learning," Comput. Ind., vols. 92–93, pp. 84–90, Nov. 2017.
[23] U. M. Khan, Z. Kabir, S. A. Hassan, and S. H. Ahmed, "A deep learning framework using passive WiFi sensing for respiration monitoring," in Proc. IEEE Global Commun. Conf. (GLOBECOM), Dec. 2017, pp. 1–6.
[24] Y. Cho, N. Bianchi-Berthouze, and S. J. Julier, "DeepBreath: Deep learning of breathing patterns for automatic stress recognition using low-cost thermal imaging in unconstrained settings," in Proc. 7th Int. Conf. Affect. Comput. Intell. Interact. (ACII), Oct. 2017, pp. 456–463.
[25] H. E. Romero, N. Ma, G. J. Brown, A. V. Beeston, and M. Hasan, "Deep learning features for robust detection of acoustic events in sleep-disordered breathing," in Proc. IEEE Int. Conf. Acoust., Speech Signal Process. (ICASSP), May 2019, pp. 810–814.
[26] Z. Zhu, J. Fei, and I. Pavlidis, "Tracking human breath in infrared imaging," in Proc. 5th IEEE Symp. Bioinf. Bioeng. (BIBE), Oct. 2005, pp. 227–231.
[27] C. B. Pereira, X. Yu, M. Czaplik, V. Blazek, B. Venema, and S. Leonhardt, "Estimation of breathing rate in thermal imaging videos: A pilot study on healthy human subjects," J. Clin. Monit. Comput., vol. 31, no. 6, pp. 1241–1254, Dec. 2017.
[28] A. Ishida and K. Murakami, "Extraction of nostril regions using periodical thermal change for breath monitoring," in Proc. Int. Workshop Adv. Image Technol. (IWAIT), Jan. 2018, pp. 1–5.
[29] X. Tang, D. K. Du, Z. He, and J. Liu, "PyramidBox: A context-assisted single shot face detector," in Proc. Eur. Conf. Comput. Vis. (ECCV), 2018, pp. 797–813.
[30] Y. Cho, S. J. Julier, N. Marquardt, and N. Bianchi-Berthouze, "Robust tracking of respiratory rate in high-dynamic range scenes using mobile thermal imaging," Biomed. Opt. Express, vol. 8, no. 10, pp. 4480–4503, 2017.
[31] J. L. Elman, "Finding structure in time," Cognit. Sci., vol. 14, no. 2, pp. 179–211, Mar. 1990.
[32] S. Hochreiter and J. Schmidhuber, "Long short-term memory," Neural Comput., vol. 9, no. 8, pp. 1735–1780, 1997.
[33] K. Greff, R. K. Srivastava, J. Koutník, B. R. Steunebrink, and J. Schmidhuber, "LSTM: A search space odyssey," IEEE Trans. Neural Netw. Learn. Syst., vol. 28, no. 10, pp. 2222–2232, Oct. 2017.
[34] K. Cho et al., "Learning phrase representations using RNN encoder-decoder for statistical machine translation," 2014, arXiv:1406.1078. [Online]. Available: http://arxiv.org/abs/1406.1078
[35] D. Bahdanau, K. Cho, and Y. Bengio, "Neural machine translation by jointly learning to align and translate," 2014, arXiv:1409.0473. [Online]. Available: http://arxiv.org/abs/1409.0473
[36] A. Vaswani et al., "Attention is all you need," in Proc. Adv. Neural Inf. Process. Syst., 2017, pp. 5998–6008.