pISSN 1976-8710 eISSN 2005-0720
Clinical and Experimental Otorhinolaryngology Vol. 13, No. 4: 326-339, November 2020
Review
1Department of Mechanical Engineering, Pohang University of Science and Technology, Pohang; 2Department of Otolaryngology-Head and Neck Surgery, Seoul St. Mary's Hospital, College of Medicine, The Catholic University of Korea, Seoul; 3Graduate School of Artificial Intelligence, Pohang University of Science and Technology, Pohang, Korea
This study presents an up-to-date survey of the use of artificial intelligence (AI) in the field of otorhinolaryngology, considering opportunities, research challenges, and research directions. We searched PubMed, the Cochrane Central Register of Controlled Trials, Embase, and the Web of Science. We initially retrieved 458 articles. The exclusion of non-English publications and duplicates yielded a total of 90 remaining studies. These 90 studies were divided into those analyzing medical images, voice, medical devices, and clinical diagnoses and treatments. Most studies (42.2%, 38/90) used AI for image-based analysis, followed by clinical diagnoses and treatments (24 studies). Each of the remaining two subcategories included 14 studies. Machine learning and deep learning have been extensively applied in the field of otorhinolaryngology. However, the performance of AI models varies, and research challenges remain.
Keywords. Artificial Intelligence; Machine Learning; Deep Learning; Otorhinolaryngology
Tama BA et al. Artificial Intelligence in Otolaryngology 327
Fig. 1. Flowchart of the literature search and study selection (300 studies screened; 130 studies excluded).
H I G H L I G H T S
- Ninety studies that implemented artificial intelligence (AI) in otorhinolaryngology were reviewed and classified.
- The studies were divided into four subcategories.
- Research challenges regarding future applications of AI in otorhinolaryngology are discussed.

AI IN THE FIELD OF OTORHINOLARYNGOLOGY

AI aids medical image-based analysis
Medical imaging yields a visual representation of an internal bodily region to facilitate analysis and treatment. Ear, nose, and throat-related diseases are imaged in various manners. Table 1 summarizes the 38 studies that used AI to assist medical image-based analysis in clinical otorhinolaryngology.
Table 1. AI techniques used for medical image-based analysis
Study | Analysis modality | Objective | AI technique | Validation method | No. of samples in the training dataset | No. of samples in the testing dataset | Best result: Accuracy (%)/AUC | Best result: Sensitivity (%)/Specificity (%)
[20] | CT | Anterior ethmoidal artery anatomy | CNN: Inception-V3 | Hold-out | 675 images from 388 patients | 197 images | 82.7/0.86 | -
[21] | CT | Osteomeatal complex occlusion | CNN: Inception-V3 | - | 1.28 million images from 239 patients | - | 85.0/0.87 | -
[22] | CT | Chronic otitis media diagnosis | CNN: Inception-V3 | Hold-out | 975 images | 172 images | -/0.92 | 83.3/91.4
[23] | DECT | HNSCC lymph nodes | RF, GBM | Hold-out | 70% of 412 lymph nodes from 50 patients (random split) | remaining 30% | 90.0/0.96 | 89.0/91.0
[24] | microCT | Intratemporal facial nerve anatomy | PCA+SSM | - | 40 cadaveric specimens from 21 donors | - | - | -
[25] | CT | Extranodal extension of HNSCC | CNN | Hold-out | 2,875 lymph nodes | 200 lymph nodes | 83.1/0.84 | 71.0/85.0
[26] | CT | Prediction of overall survival of head and neck cancer | NN, DT, boosting, Bayesian, bagging, RF, MARS, SVM, k-NN, GLM, PLSR | 10-CV | 101 head and neck cancer patients, 440 radiomic features | - | -/0.67 | -
[27] | DECT | Benign parotid tumors classification | RF | Hold-out | 882 images from 42 patients | two-thirds of the samples | 92.0/0.97 | 86.0/100
[28] | fMRI | Predicting the language outcomes following cochlear implantation | SVM | LOOCV | 22 training samples (15 labeled, 7 unlabeled) | - | 81.3/0.97 | 77.8/85.7
[29] | fMRI | Auditory perception | SVM | 10-CV | 42 images from 6 participants | - | 47.0/- | -
[30] | MRI | Relationship between tinnitus and thicknesses of internal auditory canal and nerves | ELM | Repeated hold-out | 46 images from 23 healthy subjects and 23 patients; testing repeated 10 times at training ratios of 50%, 60%, and 70% | - | 94.0/- | -
[31] | MRI | Prediction of treatment outcomes of sinonasal squamous cell carcinomas | SVM | 9-CV | 36 lesions from 36 patients | - | 92.0/- | 100/82.0
[32] | Neuroimaging biomarkers | Tinnitus | SVM | 5-CV | 102 images from 46 patients and 56 healthy subjects | - | 80.0/0.86 | -
[33] | MRI | Differentiate sinonasal squamous cell carcinoma from inverted papilloma | SVM | LOOCV | 22 patients with inverted papilloma and 24 patients with SCC | - | 89.1/- | 91.7/86.4
[34] | MRI | Speech improvement for CI candidates | SVM | LOOCV | 37 images from 37 children with hearing loss and 40 images from 40 children with normal hearing | - | 84.0/0.84 | 80.0/88.0
[35] | Endoscopic images | Laryngeal soft tissue | Weighted voting (UNet+ErfNet) | Hold-out | 200 images | 100 images | 84.7/- | -
[36] | Laryngoscope images | Laryngeal neoplasms | CNN | Hold-out | 14,340 images from 5,250 patients | 5,093 images from 2,271 patients | 96.24/- | 92.8/98.9
[37] | Laryngoscope images | Laryngeal cancer | CNN | Hold-out | 13,721 images | 1,176 images | 86.7/0.92 | 73.1/92.2
[38] | Laryngoscope images | Oropharyngeal carcinoma | Naive Bayes | Hold-out | 4 patients with oropharyngeal carcinoma and 1 healthy subject | 16 patients with oropharyngeal carcinoma and 9 healthy subjects | 65.9/- | 66.8/64.9
[39] | Otoscopic images | Otologic diseases | CNN | Hold-out | 734 images; 80% used for training, 20% for validation | - | 84.4/- | -
[40] | Otoscopic images | Otitis media | MJSR | Hold-out | 1,230 images; 80% used for training, 20% for validation | - | 91.41/- | 89.48/93.33
[41] | Otoscopic images | Otoscopic diagnosis | AutoML | Hold-out | 1,277 images | 89 images | 88.7/- | 86.1/-
[42] | Digitized images | H&E-stained tissue of oral cavity squamous cell carcinoma | LDA, QDA, RF, SVM | Hold-out | 50 images | 65 images | 88.0/0.87 | 78.0/93.0
[43] | PESI-MS | Intraoperative specimens of HNSCC | LR | LOOCV | 114 non-cancerous specimens and 141 cancerous specimens | - | 95.35/- | -
[44] | Biopsy specimen | Frozen section of oral cavity cancer | SVM | LOOCV | 176 specimen pairs from 27 subjects | - | -/0.94 | 100/88.78
[45] | HSI | Head and neck cancer classification | CNN | LOOCV | 88 samples from 50 patients | - | 80.0/- | 81.0/78.0
[46] | HSI | Head and neck cancer classification | CNN | LOOCV | 12 tumor-bearing samples from 12 mice | - | 91.36/- | 86.05/93.36
[47] | HSI | Oral cancer | SVM, LDA, QDA, RF, RUSBoost | 10-CV | 10 images from 10 mice | - | 79.0/0.86 | 79.0/79.0
[48] | HSI | Head and neck cancer classification | LDA, QDA, ensemble LDA, SVM, RF | Repeated hold-out | 20 specimens from 20 patients | 16 specimens from 16 patients | 94.0/0.97 | 95.0/90.0
[49] | HSI | Tissue surface shape reconstruction | SSRNet | 5-CV | 200 SL images | - | 96.81/- | 92.5/-
[50] | HSI | Tumor margin of HNSCC | CNN | 5-CV | 395 surgical specimens | - | 98.0/0.99 | -
[51] | HSI | Tumor margin of HNSCC | LDA | 10-CV | 16 surgical specimens | - | 90.0/- | 89.0/91.0
[52] | HSI | Optical biopsy of head and neck cancer | CNN | LOOCV | 21 surgical gross-tissue specimens | - | 81.0/0.82 | 81.0/80.0
[53] | SRS | Frozen section of laryngeal squamous cell carcinoma | CNN | 5-CV | 18,750 images from 45 patients | - | 100/- | -
[54] | HSI | Cancer margins of ex-vivo human surgical specimens | CNN | Hold-out | 11 surgical specimens | 9 surgical specimens | 81.0/0.86 | 84.0/77.0
[55] | USG | Genetic risk stratification of thyroid nodules | AutoML | Hold-out | 556 images from 21 patients | 127 images | 77.4/- | 45.0/97.0
[56] | CT | Concha bullosa on coronal sinus classification | CNN: Inception-V3 | Hold-out | 347 images (163 concha bullosa, 184 normal) | 100 images (50 concha bullosa, 50 normal) | 81.0/0.93 | -
[57] | Panoramic radiography images | Maxillary sinusitis diagnosis | AlexNet CNN | Hold-out | 400 healthy and 400 inflamed maxillary sinus images | 60 healthy and 60 inflamed maxillary sinus images | 87.5/0.875 | 86.7/88.3
AI, artificial intelligence; AUC, area under the receiver operating characteristic curve; CT, computed tomography; CNN, convolutional neural network; DECT, dual-energy computed tomography; HNSCC, head and neck squamous cell carcinoma; RF, random forest; GBM, gradient boosting machine; PCA, principal component analysis; SSM, statistical shape model; NN, neural network; DT, decision tree; MARS, multivariate adaptive regression splines; SVM, support vector machine; k-NN, k-nearest neighbor; GLM, generalized linear model; PLSR, partial least squares and principal component regression; CV, cross-validation; fMRI, functional magnetic resonance imaging; LOOCV, leave-one-out cross-validation; ELM, extreme learning machine; CI, cochlear implant; MJSR, multitask joint sparse representation; LDA, linear discriminant analysis; QDA, quadratic discriminant analysis; PESI-MS, probe electrospray ionization mass spectrometry; LR, logistic regression; HSI, hyperspectral imaging; SSRNet,
Fig. 3. Artificial intelligence (AI) techniques used for medical image-based analysis.
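The Validation method column of Table 1 mixes hold-out splits, k-fold cross-validation (k-CV), and leave-one-out cross-validation (LOOCV). As a minimal stdlib sketch of how these schemes partition a dataset (the function names and the worked 70:30 example are ours, not taken from any cited study):

```python
import random

def holdout_split(n_samples, test_frac=0.3, seed=0):
    """Hold-out validation: one random train/test partition."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)
    n_test = int(n_samples * test_frac)
    return idx[n_test:], idx[:n_test]              # (train, test)

def kfold_splits(n_samples, k, seed=0):
    """k-fold cross-validation: every sample is tested exactly once;
    LOOCV is the special case k == n_samples."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    for i, test in enumerate(folds):
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, test

# e.g. a 70:30 hold-out over 412 samples, as in study [23]
train, test = holdout_split(412, test_frac=0.30)
```

Repeated hold-out, as used by [30] and [48], simply reruns `holdout_split` with different seeds and averages the resulting scores.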
Nine studies (23.7%) addressed hyperspectral imaging, nine studies (23.7%) analyzed computed tomography, six studies (15.8%) applied AI to magnetic resonance imaging, and one study (2.63%) analyzed panoramic radiography. Laryngoscopic and otoscopic imaging were addressed in three studies each (7.89% each). The remaining seven studies (18.42%) used AI to aid in the analysis of neuroimaging biomarker levels, biopsy specimens, stimulated Raman scattering data, ultrasonography and mass spectrometry data, and digitized images. Nearly all AI algorithms comprised convolutional neural networks. Fig. 3 presents a schematic diagram of the application of convolutional neural networks in medical image-based analysis; the remaining algorithms consisted of support vector machines and random forests.

AI aids voice-based analysis
The subfield of voice-based analysis within otorhinolaryngology seeks to improve speech, to detect voice disorders, and to reduce the noise experienced by patients with cochlear implants (CIs). Table 2 lists the 14 studies that used AI for speech-based analyses. Nine (64.29%) sought to improve speech intelligibility or reduce noise for patients with CIs. Two (14.29%) used acoustic signals to detect voice disorders [67] and "hot potato voice" [70]. In other studies, AI was used for symptoms, voice pathologies, or electromyographic signals as a way to detect voice disorders [68,69], or to restore the voice of a patient who had undergone total laryngectomy [71]. Neural networks were favored, followed by k-nearest neighbor methods, support vector machines, and other widely known classifiers (e.g., decision trees and XGBoost). Fig. 4 presents a schematic diagram of the application of convolutional neural networks in medical voice-based analysis.

AI analysis of biosignals detected from medical devices
Medical device-based analyses seek to predict the responses to clinical treatments in order to guide physicians who may wish to choose alternative or more aggressive therapies. AI has been used to assist polysomnography, to explore gene expression profiles, to interpret cellular cartographs, and to evaluate the outputs of non-contact devices. These studies are summarized in Table 3. Of these 14 studies, most (50%, seven studies) focused on analyses of gene expression data. Three studies (21.43%) used AI to examine polysomnography data in an effort to score sleep stages [72,73] or to identify long-term cardiovascular disease [74]. Most algorithms employed ensemble learning (random forests, GentleBoost, XGBoost, and a general linear model+support vector machine ensemble); this approach was followed by neural network-based algorithms (convolutional neural networks, autoencoders, and shallow artificial neural networks). Fig. 5 presents a schematic diagram of the application of the autoencoder and the support vector machine in the analysis of gene expression data.

AI for clinical diagnoses and treatments
Clinical diagnoses and treatments consider only symptoms, medical records, and other clinical documentation. We retrieved 24 relevant studies (Table 4). Of the ML algorithms, most used logistic regression for classification, followed by random forests and support vector machines. Notably, many studies used hold-out validation for new methods. Fig. 6 presents a schematic diagram of the process cycle of utilizing AI for clinical diagnoses and treatments.

DISCUSSION

We systematically analyzed reports describing the integration of AI in the field of otorhinolaryngology, with an emphasis on how AI may best be implemented in various subfields. Various AI techniques and validation methods have found favor. As described above, advances in 2015 underscored that AI would play a major role in future medicine. Here, we reviewed post-2015 AI applications in the field of otorhinolaryngology. Before 2015, most AI-based technologies focused on CIs [10,75-86]. However, AI applications have expanded greatly in recent years. In terms of image-based analysis, images yielded by rigid endoscopes, laryngoscopes, stroboscopes, computed tomography, magnetic resonance imaging, and multispectral narrow-band imaging [38], as
Fig. 5. Artificial intelligence (AI) analyses of biosignals detected from medical devices (schematic: gene expression input, autoencoder, SVM). SVM, support vector machine.
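The autoencoder-plus-SVM pipeline of Fig. 5 can be sketched compactly: for a linear autoencoder trained with squared-error loss, the optimal one-dimensional code coincides (up to sign and scale) with the data's first principal direction, so below we let power iteration stand in for the trained encoder, and a simple threshold on the code stand in for the SVM. All names, data, and dimensions here are illustrative assumptions, not drawn from any cited study:

```python
import math

def covariance(X):
    """Mean vector and covariance matrix of row-vector samples."""
    n, d = len(X), len(X[0])
    mu = [sum(row[j] for row in X) / n for j in range(d)]
    cov = [[sum((row[i] - mu[i]) * (row[j] - mu[j]) for row in X) / n
            for j in range(d)] for i in range(d)]
    return mu, cov

def power_iteration(cov, iters=200):
    """Dominant eigenvector of the covariance matrix: the direction a
    1-D linear autoencoder code learns to preserve (up to sign/scale)."""
    v = [1.0] * len(cov)
    for _ in range(iters):
        v = [sum(row[j] * v[j] for j in range(len(v))) for row in cov]
        norm = math.sqrt(sum(x * x for x in v))
        v = [x / norm for x in v]
    return v

# Hypothetical toy "gene expression" matrix: 4 samples x 3 genes.
X = [[2.0, 1.9, 0.1], [1.8, 2.1, 0.0], [0.2, 0.1, 1.9], [0.0, 0.2, 2.1]]
labels = [1, 1, 0, 0]              # illustrative class labels

mu, cov = covariance(X)
w = power_iteration(cov)           # the "encoder" direction
codes = [sum(wj * (xj - mj) for wj, xj, mj in zip(w, row, mu)) for row in X]

# A linear decision rule on the 1-D codes plays the SVM's role in Fig. 5.
threshold = sum(codes) / len(codes)
pred = [1 if c > threshold else 0 for c in codes]
```

In the surveyed studies the learned code is higher-dimensional and the downstream classifier is a real SVM, but the division of labor (compress, then separate) is the same.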
Table 4. Continued
Study | Analysis modality | Objective | AI technique | Validation method | No. of samples in the training dataset | No. of samples in the testing dataset | Best result
[108] | Clinicopathologic data | Delayed adjuvant radiation prediction | RF | Hold-out | 61,258 patients | 15,315 patients | Accuracy: 64.4%; precision: 58.5%
[109] | Clinicopathologic data | Occult nodal metastasis prediction in oral cavity squamous cell carcinoma | LR, RF, SVM, GBM | Hold-out | 1,570 patients | 391 patients | AUC: 0.71; sensitivity: 75.3%; specificity: 49.2%
[110] | Dataset of the center of pressure sway during foam posturography | Peripheral vestibular dysfunction prediction | GBDT, bagging, LR | CV | 75 patients with vestibular dysfunction and 163 healthy controls | - | AUC: 0.9; recall: 0.84
[111] | TEOAE signals | Meniere's disease hearing outcome prediction | SVM | 5-CV | 30 unilateral patients | - | Accuracy: 82.7%
[112] | Semantic and syntactic patterns in clinical documentation | Vestibular diagnoses | NLP+Naïve Bayes | 10-CV | 866 physician-generated histories from vestibular patients | - | Sensitivity: 93.4%; specificity: 98.2%; AUC: 1.0
[113] | Endoscopic imaging | Nasal polyps diagnosis | ResNet50, Xception, and Inception V3 | Hold-out | 23,048 patches (167 patients) as training set | 1,577 patches (12 patients) as internal validation set and 1,964 patches (16 patients) as external test set | Inception V3: AUC: 0.974
[114] | Intradermal skin tests | Allergic rhinitis diagnosis | Associative classifier | 10-CV | 872 patients with allergic symptoms | - | Accuracy: 88.31%
[115] | Clinical data | Identified phenotype and mucosal eosinophilia endotype subgroups of patients with medically refractory CRS | Cluster analysis | - | 46 patients with CRS without nasal polyps and 67 patients with nasal polyps | - | -
[116] | Clinical data | Prognostic information of patients with CRS | Discriminant analysis | - | 690 patients | - | -
[117] | Clinical data | Identified phenotypic subgroups of CRS patients | Discriminant analysis | - | 382 patients | - | -
[118] | Clinical data | Characterization of distinguishing clinical features between subgroups of patients with CRS | Cluster analysis | - | 97 surgical patients with CRS | - | -
[119] | Clinical data | Identified features of CRS without nasal polyposis | Cluster analysis | - | 145 patients with CRS without nasal polyposis | - | -
[120] | Clinical data | Identified inflammatory endotypes of CRS | Cluster analysis | - | 682 cases (65% with CRS without nasal polyps) | - | -
[121] | Clinical data | Identified features of CRS with nasal polyps | Cluster analysis | - | 375 patients | - | -
AI, artificial intelligence; CRDN, cascade recurring deep network; MAPE, mean absolute percentage error; RF, random forest; LOOCV, leave-one-out cross-validation; CI, cochlear implant; MAE, mean absolute error; SSHL, sudden sensorineural hearing loss; DBN, deep belief network; LR, logistic regression; SVM, support vector machine; MLP, multilayer perceptron; CV, cross-validation; AUC, area under the receiver operating characteristic curve; NN, neural network; DT, decision tree; DF, decision forest; DJ, decision jungle; NPV, negative predictive value; GBDT, gradient boosted decision trees; GBM, gradient boosting machine; TEOAE, transient-evoked otoacoustic emission; NLP, natural language processing; CRS, chronic rhinosinusitis.
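The Best result columns of Tables 1 and 4 report accuracy, sensitivity, specificity, and AUC, which can all be recomputed from raw predictions. A self-contained sketch (stdlib only; the function names are ours):

```python
def binary_metrics(y_true, y_pred):
    """Accuracy, sensitivity, and specificity from hard 0/1 labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return {"accuracy": (tp + tn) / len(y_true),
            "sensitivity": tp / (tp + fn),      # recall on the positives
            "specificity": tn / (tn + fp)}      # recall on the negatives

def auc(y_true, scores):
    """AUC as the probability that a random positive outscores a random
    negative (ties count 1/2); equals the area under the ROC curve."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

Note that accuracy needs hard labels while AUC needs the classifier's continuous scores, which is why some studies report one but not the other.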
well as hyperspectral imaging [45-52,54], are now interpreted by AI. In voice-based analysis, AI is used to evaluate pathological voice conditions associated with vocal fold disorders, to analyze and decode phonation itself [67], to improve speech perception in noisy conditions, and to improve the hearing of patients with CIs. In medical device-based analyses, AI is used to evaluate tissue and blood test results, as well as the outcomes of otorhinolaryngology-specific tests (e.g., polysomnography) [72,73,122] and audiometry [123,124]. AI has also been used to support clinical diagnoses and treatments, decision-making, the
Fig. 6. Artificial intelligence (AI) techniques used for clinical diagnoses and treatments. EMR, electronic medical record.
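Logistic regression, the most common choice among the clinical-data studies, reduces to a few lines of gradient descent on the log-loss. A hedged sketch on a hypothetical toy cohort of standardized clinical features (all names and data below are illustrative, not from any cited study):

```python
import math
import random

def train_logreg(X, y, lr=0.1, epochs=500, seed=0):
    """Plain logistic regression fitted by stochastic gradient descent."""
    rnd = random.Random(seed)
    w = [rnd.uniform(-0.1, 0.1) for _ in range(len(X[0]))]
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))      # predicted probability
            g = p - yi                          # gradient of log-loss w.r.t. z
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
            b -= lr * g
    return w, b

def predict(w, b, xi):
    """Hard label: 1 when the predicted probability exceeds 0.5."""
    return 1 if sum(wj * xj for wj, xj in zip(w, xi)) + b >= 0 else 0

# Hypothetical cohort: two standardized clinical features per patient.
X = [[0.2, 1.5], [1.0, 2.0], [1.3, 0.9],
     [-0.5, -1.0], [-1.2, -0.3], [-0.8, -1.5]]
y = [1, 1, 1, 0, 0, 0]
w, b = train_logreg(X, y)
```

The appeal for clinical work is that each fitted weight `w[j]` maps directly to an odds ratio (`exp(w[j])`), so the model stays interpretable to clinicians in a way deep networks are not.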
35. Laves MH, Bicker J, Kahrs LA, Ortmaier T. A dataset of laryngeal endoscopic images with comparative study on convolution neural network-based semantic segmentation. Int J Comput Assist Radiol Surg. 2019 Mar;14(3):483-92.
36. Ren J, Jing X, Wang J, Ren X, Xu Y, Yang Q, et al. Automatic recognition of laryngoscopic images using a deep-learning technique. Laryngoscope. 2020 Feb 18 [Epub]. https://doi.org/10.1002/lary.28539.
37. Xiong H, Lin P, Yu JG, Ye J, Xiao L, Tao Y, et al. Computer-aided diagnosis of laryngeal cancer via deep learning based on laryngoscopic images. EBioMedicine. 2019 Oct;48:92-9.
38. Mascharak S, Baird BJ, Holsinger FC. Detecting oropharyngeal carcinoma using multispectral, narrow-band imaging and machine learning. Laryngoscope. 2018 Nov;128(11):2514-20.
39. Livingstone D, Talai AS, Chau J, Forkert ND. Building an otoscopic screening prototype tool using deep learning. J Otolaryngol Head Neck Surg. 2019 Nov;48(1):66.
40. Tran TT, Fang TY, Pham VT, Lin C, Wang PC, Lo MT. Development of an automatic diagnostic algorithm for pediatric otitis media. Otol Neurotol. 2018 Sep;39(8):1060-5.
41. Livingstone D, Chau J. Otoscopic diagnosis using computer vision: an automated machine learning approach. Laryngoscope. 2020 Jun;130(6):1408-13.
42. Lu C, Lewis JS Jr, Dupont WD, Plummer WD Jr, Janowczyk A, Madabhushi A. An oral cavity squamous cell carcinoma quantitative histomorphometric-based image classifier of nuclear morphology can risk stratify patients for disease-specific survival. Mod Pathol. 2017 Dec;30(12):1655-65.
43. Ashizawa K, Yoshimura K, Johno H, Inoue T, Katoh R, Funayama S, et al. Construction of mass spectra database and diagnosis algorithm for head and neck squamous cell carcinoma. Oral Oncol. 2017 Dec;75:111-9.
44. Grillone GA, Wang Z, Krisciunas GP, Tsai AC, Kannabiran VR, Pistey RW, et al. The color of cancer: margin guidance for oral cancer resection using elastic scattering spectroscopy. Laryngoscope. 2017 Sep;127 Suppl 4(Suppl 4):S1-9.
45. Halicek M, Lu G, Little JV, Wang X, Patel M, Griffith CC, et al. Deep convolutional neural networks for classifying head and neck cancer using hyperspectral imaging. J Biomed Opt. 2017 Jun;22(6):60503.
46. Ma L, Lu G, Wang D, Wang X, Chen ZG, Muller S, et al. Deep learning based classification for head and neck cancer detection with hyperspectral imaging in an animal model. Proc SPIE Int Soc Opt Eng. 2017 Feb;10137:101372G.
47. Lu G, Wang D, Qin X, Muller S, Wang X, Chen AY, et al. Detection and delineation of squamous neoplasia with hyperspectral imaging in a mouse model of tongue carcinogenesis. J Biophotonics. 2018 Mar;11(3):e201700078.
48. Lu G, Little JV, Wang X, Zhang H, Patel MR, Griffith CC, et al. Detection of head and neck cancer in surgical specimens using quantitative hyperspectral imaging. Clin Cancer Res. 2017 Sep;23(18):5426-36.
49. Lin J, Clancy NT, Qi J, Hu Y, Tatla T, Stoyanov D, et al. Dual-modality endoscopic probe for tissue surface shape reconstruction and hyperspectral imaging enabled by deep neural networks. Med Image Anal. 2018 Aug;48:162-76.
50. Halicek M, Dormer JD, Little JV, Chen AY, Myers L, Sumer BD, et al. Hyperspectral imaging of head and neck squamous cell carcinoma for cancer margin detection in surgical specimens from 102 patients using deep learning. Cancers (Basel). 2019 Sep;11(9):1367.
51. Fei B, Lu G, Wang X, Zhang H, Little JV, Patel MR, et al. Label-free reflectance hyperspectral imaging for tumor margin assessment: a pilot study on surgical specimens of cancer patients. J Biomed Opt. 2017 Aug;22(8):1-7.
52. Halicek M, Little JV, Wang X, Chen AY, Fei B. Optical biopsy of head and neck cancer using hyperspectral imaging and convolutional neural networks. J Biomed Opt. 2019 Mar;24(3):1-9.
53. Zhang L, Wu Y, Zheng B, Su L, Chen Y, Ma S, et al. Rapid histology of laryngeal squamous cell carcinoma with deep-learning based stimulated Raman scattering microscopy. Theranostics. 2019 Apr;9(9):2541-54.
54. Halicek M, Little JV, Wang X, Patel M, Griffith CC, Chen AY, et al. Tumor margin classification of head and neck cancer using hyperspectral imaging and convolutional neural networks. Proc SPIE Int Soc Opt Eng. 2018 Feb;10576:1057605.
55. Daniels K, Gummadi S, Zhu Z, Wang S, Patel J, Swendseid B, et al. Machine learning by ultrasonography for genetic risk stratification of thyroid nodules. JAMA Otolaryngol Head Neck Surg. 2019 Oct;146(1):1-6.
56. Parmar P, Habib AR, Mendis D, Daniel A, Duvnjak M, Ho J, et al. An artificial intelligence algorithm that identifies middle turbinate pneumatisation (concha bullosa) on sinus computed tomography scans. J Laryngol Otol. 2020 Apr;134(4):328-31.
57. Murata M, Ariji Y, Ohashi Y, Kawai T, Fukuda M, Funakoshi T, et al. Deep-learning classification using convolutional neural network for evaluation of maxillary sinusitis on panoramic radiography. Oral Radiol. 2019 Sep;35(3):301-7.
58. Lai YH, Tsao Y, Lu X, Chen F, Su YT, Chen KC, et al. Deep learning-based noise reduction approach to improve speech intelligibility for cochlear implant recipients. Ear Hear. 2018 Jul/Aug;39(4):795-809.
59. Healy EW, Yoho SE, Chen J, Wang Y, Wang D. An algorithm to increase speech intelligibility for hearing-impaired listeners in novel segments of the same noise type. J Acoust Soc Am. 2015 Sep;138(3):1660-9.
60. Erfanian Saeedi N, Blamey PJ, Burkitt AN, Grayden DB. An integrated model of pitch perception incorporating place and temporal pitch codes with application to cochlear implant research. Hear Res. 2017 Feb;344:135-47.
61. Guerra-Jimenez G, Ramos De Miguel A, Falcon Gonzalez JC, Borkoski Barreiro SA, Perez Plasencia D, Ramos Macias A. Cochlear implant evaluation: prognosis estimation by data mining system. J Int Adv Otol. 2016 Apr;12(1):1-7.
62. Lai YH, Chen F, Wang SS, Lu X, Tsao Y, Lee CH. A deep denoising autoencoder approach to improving the intelligibility of vocoded speech in cochlear implant simulation. IEEE Trans Biomed Eng. 2017 Jul;64(7):1568-78.
63. Chen J, Wang Y, Yoho SE, Wang D, Healy EW. Large-scale training to increase speech intelligibility for hearing-impaired listeners in novel noises. J Acoust Soc Am. 2016 May;139(5):2604.
64. Gao X, Grayden DB, McDonnell MD. Modeling electrode place discrimination in cochlear implant stimulation. IEEE Trans Biomed Eng. 2017 Sep;64(9):2219-29.
65. Hajiaghababa F, Marateb HR, Kermani S. The design and validation of a hybrid digital-signal-processing plug-in for traditional cochlear implant speech processors. Comput Methods Programs Biomed. 2018 Jun;159:103-9.
66. Ramos-Miguel A, Perez-Zaballos T, Perez D, Falcon JC, Ramos A. Use of data mining to predict significant factors and benefits of bilateral cochlear implantation. Eur Arch Otorhinolaryngol. 2015 Nov;272(11):3157-62.
67. Powell ME, Rodriguez Cancio M, Young D, Nock W, Abdelmessih B, Zeller A, et al. Decoding phonation with artificial intelligence (DeP AI): proof of concept. Laryngoscope Investig Otolaryngol. 2019 Mar;4(3):328-34.
68. Tsui SY, Tsao Y, Lin CW, Fang SH, Lin FC, Wang CT. Demographic and symptomatic features of voice disorders and their potential application in classification using machine learning algorithms. Folia Phoniatr Logop. 2018;70(3-4):174-82.
69. Fang SH, Tsao Y, Hsiao MJ, Chen JY, Lai YH, Lin FC, et al. Detection of pathological voice using cepstrum vectors: a deep learning approach. J Voice. 2019 Sep;33(5):634-41.
70. Fujimura S, Kojima T, Okanoue Y, Shoji K, Inoue M, Hori R. Discrimination of "hot potato voice" caused by upper airway obstruction utilizing a support vector machine. Laryngoscope. 2019 Jun;129(6):1301-7.
71. Rameau A. Pilot study for a novel and personalized voice restoration device for patients with laryngectomy. Head Neck. 2020 May;42(5):839-45.
72. Zhang L, Fabbri D, Upender R, Kent D. Automated sleep stage scoring of the Sleep Heart Health Study using deep neural networks. Sleep. 2019 Oct;42(11):zsz159.
73. Zhang X, Xu M, Li Y, Su M, Xu Z, Wang C, et al. Automated multimodel deep neural network for sleep stage scoring with unfiltered clinical data. Sleep Breath. 2020 Jun;24(2):581-90.
74. Zhang L, Wu H, Zhang X, Wei X, Hou F, Ma Y. Sleep heart rate variability assists the automatic prediction of long-term cardiovascular outcomes. Sleep Med. 2020 Mar;67:217-24.
75. Yao J, Zhang YT. The application of bionic wavelet transform to speech signal processing in cochlear implants using neural network simulations. IEEE Trans Biomed Eng. 2002 Nov;49(11):1299-309.
76. Nemati P, Imani M, Farahmandghavi F, Mirzadeh H, Marzban-Rad E, Nasrabadi AM. Artificial neural networks for bilateral prediction of formulation parameters and drug release profiles from cochlear implant coatings fabricated as porous monolithic devices based on silicone rubber. J Pharm Pharmacol. 2014 May;66(5):624-38.
77. Middlebrooks JC, Bierer JA. Auditory cortical images of cochlear-implant stimuli: coding of stimulus channel and current level. J Neurophysiol. 2002 Jan;87(1):493-507.
78. Charasse B, Thai-Van H, Chanal JM, Berger-Vachon C, Collet L. Automatic analysis of auditory nerve electrically evoked compound action potential with an artificial neural network. Artif Intell Med. 2004 Jul;31(3):221-9.
79. Botros A, van Dijk B, Killian M. AutoNR: an automated system that measures ECAP thresholds with the Nucleus Freedom cochlear implant via machine intelligence. Artif Intell Med. 2007 May;40(1):15-28.
80. van Dijk B, Botros AM, Battmer RD, Begall K, Dillier N, Hey M, et al. Clinical results of AutoNRT, a completely automatic ECAP recording system for cochlear implants. Ear Hear. 2007 Aug;28(4):558-70.
81. Gartner L, Lenarz T, Joseph G, Buchner A. Clinical use of a system for the automated recording and analysis of electrically evoked compound action potentials (ECAPs) in cochlear implant patients. Acta Otolaryngol. 2010 Jun;130(6):724-32.
82. Nemati P, Imani M, Farahmandghavi F, Mirzadeh H, Marzban-Rad E, Nasrabadi AM. Dexamethasone-releasing cochlear implant coatings: application of artificial neural networks for modelling of formulation parameters and drug release profile. J Pharm Pharmacol. 2013 Aug;65(8):1145-57.
83. Zhang J, Wei W, Ding J, Roland JT Jr, Manolidis S, Simaan N. Inroads toward robot-assisted cochlear implant surgery using steerable electrode arrays. Otol Neurotol. 2010 Oct;31(8):1199-206.
84. Chang CH, Anderson GT, Loizou PC. A neural network model for optimizing vowel recognition by cochlear implant listeners. IEEE Trans Neural Syst Rehabil Eng. 2001 Mar;9(1):42-8.
85. Castaneda-Villa N, James CJ. Objective source selection in blind source separation of AEPs in children with cochlear implants. Conf Proc IEEE Eng Med Biol Soc. 2007;2007:6224-7.
86. Desmond JM, Collins LM, Throckmorton CS. Using channel-specific statistical models to detect reverberation in cochlear implant stimuli. J Acoust Soc Am. 2013 Aug;134(2):1112-20.
87. Kim JW, Kim T, Shin J, Lee K, Choi S, Cho SW. Prediction of apnea-hypopnea index using sound data collected by a noncontact device. Otolaryngol Head Neck Surg. 2020 Mar;162(3):392-9.
88. Ruiz EM, Niu T, Zerfaoui M, Kunnimalaiyaan M, Friedlander PL, Abdel-Mageed AB, et al. A novel gene panel for prediction of lymph-node metastasis and recurrence in patients with thyroid cancer. Surgery. 2020 Jan;167(1):73-9.
89. Zhong Q, Fang J, Huang Z, Yang Y, Lian M, Liu H, et al. A response prediction model for taxane, cisplatin, and 5-fluorouracil chemotherapy in hypopharyngeal carcinoma. Sci Rep. 2018 Aug;8(1):12675.
90. Chowdhury NI, Li P, Chandra RK, Turner JH. Baseline mucus cytokines predict 22-item Sino-Nasal Outcome Test results after endoscopic sinus surgery. Int Forum Allergy Rhinol. 2020 Jan;10(1):15-22.
91. Urata S, Iida T, Yamamoto M, Mizushima Y, Fujimoto C, Matsumoto Y, et al. Cellular cartography of the organ of Corti based on optical tissue clearing and machine learning. Elife. 2019 Jan;8:e40946.
92. Zhao Z, Li Y, Wu Y, Chen R. Deep learning-based model for predicting progression in patients with head and neck squamous cell carcinoma. Cancer Biomark. 2020;27(1):19-28.
93. Essers PBM, van der Heijden M, Verhagen CV, Ploeg EM, de Roest RH, Leemans CR, et al. Drug sensitivity prediction models reveal a link between DNA repair defects and poor prognosis in HNSCC. Cancer Res. 2019 Nov;79(21):5597-611.
94. Ishii H, Saitoh M, Sakamoto K, Sakamoto K, Saigusa D, Kasai H, et al. Lipidome-based rapid diagnosis with machine learning for detection of TGF-β signalling activated area in head and neck cancer. Br J Cancer. 2020 Mar;122(7):995-1004.
95. Patel KN, Angell TE, Babiarz J, Barth NM, Blevins T, Duh QY, et al. Performance of a genomic sequencing classifier for the preoperative diagnosis of cytologically indeterminate thyroid nodules. JAMA Surg. 2018 Sep;153(9):817-24.
96. Stepp WH, Farquhar D, Sheth S, Mazul A, Mamdani M, Hackman TG, et al. RNA oncoimmune phenotyping of HPV-positive p16-positive oropharyngeal squamous cell carcinomas by nodal status. Version 2. JAMA Otolaryngol Head Neck Surg. 2018 Nov;144(11):967-75.
97. Shew M, New J, Wichova H, Koestler DC, Staecker H. Using machine learning to predict sensorineural hearing loss based on perilymph micro RNA expression profile. Sci Rep. 2019 Mar;9(1):3393.
98. Nam Y, Choo OS, Lee YR, Choung YH, Shin H. Cascade recurring deep networks for audible range prediction. BMC Med Inform Decis Mak. 2017 May;17(Suppl 1):56.
99. Kim H, Kang WS, Park HJ, Lee JY, Park JW, Kim Y, et al. Cochlear implantation in postlingually deaf adults is time-sensitive towards positive outcome: prediction using advanced machine learning techniques. Sci Rep. 2018 Dec;8(1):18004.
100. Bing D, Ying J, Miao J, Lan L, Wang D, Zhao L, et al. Predicting the hearing outcome in sudden sensorineural hearing loss via machine learning models. Clin Otolaryngol. 2018 Jun;43(3):868-74.
101. Lau K, Wilkinson J, Moorthy R. A web-based prediction score for head and neck cancer referrals. Clin Otolaryngol. 2018 Mar;43(4):1043-9.
102. Wilson MB, Ali SA, Kovatch KJ, Smith JD, Hoff PT. Machine learning diagnosis of peritonsillar abscess. Otolaryngol Head Neck Surg. 2019 Nov;161(5):796-9.
103. Priesol AJ, Cao M, Brodley CE, Lewis RF. Clinical vestibular testing assessed with machine-learning algorithms. JAMA Otolaryngol Head Neck Surg. 2015 Apr;141(4):364-72.
104. Chan J, Raju S, Nandakumar R, Bly R, Gollakota S. Detecting middle ear fluid using smartphones. Sci Transl Med. 2019 May;11(492):eaav1102.
105. Karadaghy OA, Shew M, New J, Bur AM. Development and assessment of a machine learning model to help predict survival among
Tama BA et al. Artificial Intelligence in Otolaryngology 339
patients with oral squamous cell carcinoma. JAMA Otolaryngol 119. Lal D, Hopkins C, Divekar RD. SNOT-22-based clusters in chronic
Head Neck Surg. 2019 May;145(12):1115-20. rhinosinusitis without nasal polyposis exhibit distinct endotypic
106. Mermod M, Jourdan EF, Gupta R, Bongiovanni M, Tolstonog G, Si- and prognostic differences. Int Forum Allergy Rhinol. 2018 Jul;8(7):
mon C, et al. Development and validation of a multivariable pre- 797-805.
diction model for the identification of occult lymph node metasta- 120. Tomassen P, Vandeplas G, Van Zele T, Cardell LO, Arebro J, Olze H,
sis in oral squamous cell carcinoma. Head Neck. 2020 Aug;42(8): et al. Inflammatory endotypes of chronic rhinosinusitis based on
1811-20. cluster analysis of biomarkers. J Allergy Clin Immunol. 2016 May;
107. Formeister EJ, Baum R, Knott PD, Seth R, Ha P, Ryan W, et al. Ma- 137(5):1449-56.
chine learning for predicting complications in head and neck mi- 121. Kim JW, Huh G, Rhee CS, Lee CH, Lee J, Chung JH, et al. Unsu-
crovascular free tissue transfer. Laryngoscope. 2020 Jan 28 [Epub]. pervised cluster analysis of chronic rhinosinusitis with nasal polyp
https://doi.org/10.1002/lary.28508. using routinely available clinical markers and its implication in
108. Shew M, New J, Bur AM. Machine learning to predict delays in ad- treatment outcomes. Int Forum Allergy Rhinol. 2019 Jan;9(1):79-86.
juvant radiation following surgery for head and neck cancer. Oto- 122. Goldstein CA, Berry RB, Kent DT, Kristo DA, Seixas AA, Redline S,
laryngol Head Neck Surg. 2019 Jun;160(6):1058-64. et al. Artificial intelligence in sleep medicine: background and im-
109. Bur AM, Holcomb A, Goodwin S,Woodroof J, Karadaghy O, Shnay- plications for clinicians. J Clin Sleep Med. 2020 Apr;16(4):609-18.
der Y, et al. Machine learning to predict occult nodal metastasis in 123. Weininger O, Warnecke A, Lesinski-Schiedat A, Lenarz T, Stolle S.
early oral squamous cell carcinoma. Oral Oncol. 2019 May;92:20-5. Computational analysis based on audioprofiles: a new possibility
110. Kamogashira T, Fujimoto C, Kinoshita M, Kikkawa Y, Yamasoba T, for patient stratification in office-based otology. Audiol Res. 2019
Iwasaki S. Prediction of vestibular dysfunction by applying machine Nov;9(2):230.
learning algorithms to postural instability. Front Neurol. 2020 Feb; 124. Barbour DL, Howard RT, Song XD, Metzger N, Sukesan KA,
11:7. DiLorenzo JC, et al. Online machine learning audiometry. Ear
111. Liu YW, Kao SL,Wu HT, Liu TC, Fang TY,Wang PC.Transient-evoked Hear. 2019 Jul/Aug;40(4):918-26.
otoacoustic emission signals predicting outcomes of acute sensori- 125. Wu YH, Ho HC, Hsiao SH, Brummet RB, Chipara O. Predicting
neural hearing loss in patients with Meniere’s disease. Acta Otolar- three-month and 12-month post-fitting real-world hearing-aid out-
yngol. 2020 Mar;140(3):230-5. come using pre-fitting acceptable noise level (ANL). Int J Audiol.
112. Luo J, Erbe C, Friedland DR. Unique clinical language patterns 2016;55(5):285-94.
among expert vestibular providers can predict vestibular diagnoses. 126. Bramhall NF, McMillan GP, Kujawa SG, Konrad-Martin D. Use of
Otol Neurotol. 2018 Oct;39(9):1163-71. non-invasive measures to predict cochlear synapse counts. Hear
113. Wu Q, Chen J, Deng H, Ren Y, Sun Y,Wang W, et al. Expert-level di- Res. 2018 Dec;370:113-9.
agnosis of nasal polyps using deep learning on whole-slide imaging. 127. Rasku J, Pyykko I, Levo H, Kentala E, Manchaiah V. Disease profil-
J Allergy Clin Immunol. 2020 Feb;145(2):698-701. ing for computerized peer support of Meniere’s disease. JMIR Re-
114. Jabez Christopher J, Khanna Nehemiah H, Kannan A. A clinical habil Assist Technol. 2015 Sep;2(2):e9.
decision support system for diagnosis of allergic rhinitis based on 128. Morse JC, Shilts MH, Ely KA, Li P, Sheng Q, Huang LC, et al. Pat-
intradermal skin tests. Comput Biol Med. 2015 Oct;65:76-84. terns of olfactory dysfunction in chronic rhinosinusitis identified by
115. Adnane C, Adouly T, Khallouk A, Rouadi S, Abada R, Roubal M, et hierarchical cluster analysis and machine learning algorithms. Int
al. Using preoperative unsupervised cluster analysis of chronic rhi- Forum Allergy Rhinol. 2019 Mar;9(3):255-64.
nosinusitis to inform patient decision and endoscopic sinus surgery 129. Quon H, Hui X, Cheng Z, Robertson S, Peng L, Bowers M, et al.
outcome. Eur Arch Otorhinolaryngol. 2017 Feb;274(2):879-85. Quantitative evaluation of head and neck cancer treatment-related
116. Soler ZM, Hyer JM, Rudmik L, Ramakrishnan V, Smith TL, Schloss- dysphagia in the development of a personalized treatment deinten-
er RJ. Cluster analysis and prediction of treatment outcomes for sification paradigm. Int J Radiat Oncol Biol Phys. 2017 Dec;99(5):
chronic rhinosinusitis. J Allergy Clin Immunol. 2016 Apr;137(4): 1271-8.
1054-62. 130. Jochems A, Leijenaar RT, Bogowicz M, Hoebers FJ, Wesseling F,
117. Soler ZM, Hyer JM, Ramakrishnan V, Smith TL, Mace J, Rudmik L, Huang SH, et al. Combining deep learning and radiomics to pre-
et al. Identification of chronic rhinosinusitis phenotypes using clus- dict HPV status in oropharyngeal squamous cell carcinoma. Radiat
ter analysis. Int Forum Allergy Rhinol. 2015 May;5(5):399-407. Oncol. 2018 Apr;127:S504-5.
118. Divekar R, Patel N, Jin J, Hagan J, Rank M, Lal D, et al. Symptom- 131. Bao T, Klatt BN, Whitney SL, Sienko KH, Wiens J. Automatically
based clustering in chronic rhinosinusitis relates to history of aspi- evaluating balance: a machine learning approach. IEEE Trans Neu-
rin sensitivity and postsurgical outcomes. J Allergy Clin Immunol ral Syst Rehabil Eng. 2019 Feb;27(2):179-86.
Pract. 2015 Nov-Dec;3(6):934-40.