Search Results (21)

Search Parameters:
Keywords = voice biomarkers

14 pages, 1244 KiB  
Article
Acoustic Characteristics of Voice and Speech in Post-COVID-19
by Larissa Cristina Berti, Marcelo Gauy, Luana Cristina Santos da Silva, Julia Vasquez Valenci Rios, Viviam Batista Morais, Tatiane Cristina de Almeida, Leisi Silva Sossolete, José Henrique de Moura Quirino, Carolina Fernanda Pentean Martins, Flaviane R. Fernandes-Svartman, Beatriz Raposo de Medeiros, Marcelo Queiroz, Murilo Gazzola and Marcelo Finger
Healthcare 2025, 13(1), 63; https://doi.org/10.3390/healthcare13010063 - 1 Jan 2025
Viewed by 896
Abstract
Background/Objectives: The aim of this paper was to compare voice and speech characteristics between post-COVID-19 and control subjects. The hypothesis was that acoustic parameters of voice and speech may differentiate subjects infected by COVID-19 from control subjects. Additionally, we expected to observe the persistence of symptoms in women. Methods: In total, 134 subjects participated in the study; they were recruited by convenience sampling and divided into two groups: 70 control subjects and 64 post-COVID-19 subjects, with an average time of 8.7 months after infection. The recordings were made with the SPIRA software (v.1.0) on cell phones, based on three verbal tasks: sustained production of the vowel /a/, reading a sentence, and producing a rhyme. Acoustic analyses of speech and voice were carried out with the Praat software (v.4.3.18), based on the following parameters: total sentence duration, number of pauses, pause duration, f0, f0SD, jitter, shimmer, and harmonics-to-noise ratio (HNR). Results: Regarding the acoustic characteristics of speech, there were no differences between the groups or between the sexes. Regarding the acoustic characteristics of voice, significant differences between the groups were found for jitter, shimmer, and HNR. Differences between sexes were observed in the frequency-related parameters f0, f0SD, and jitter. Conclusions: Some acoustic characteristics of the patients’ voice may reflect a deteriorated condition even months after the acute phase of the disease. These characteristics are compatible with some of the symptoms reported by post-COVID-19 subjects, such as tension and fatigue. These voice acoustic parameters could be used as biomarkers to screen for voice disorders in long COVID using artificial intelligence (AI), accelerating specialist diagnosis. Full article
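The jitter, shimmer, HNR, and f0 measures above are standard Praat outputs; as a minimal sketch, they can be extracted from a sustained /a/ recording with the praat-parselmouth Python wrapper, as below. The file name and analysis settings (pitch floor/ceiling, period and amplitude factors) are illustrative assumptions, not the study's actual configuration.

```python
# Sketch: f0, jitter, shimmer, and HNR from a sustained vowel via praat-parselmouth.
# All thresholds below are illustrative defaults, not the cited study's settings.
import parselmouth
from parselmouth.praat import call

def voice_measures(wav_path, f0_min=75, f0_max=500):
    snd = parselmouth.Sound(wav_path)

    # Fundamental frequency statistics (f0 mean and standard deviation)
    pitch = call(snd, "To Pitch", 0.0, f0_min, f0_max)
    f0_mean = call(pitch, "Get mean", 0, 0, "Hertz")
    f0_sd = call(pitch, "Get standard deviation", 0, 0, "Hertz")

    # Jitter and shimmer are computed from the glottal pulse PointProcess
    pulses = call(snd, "To PointProcess (periodic, cc)", f0_min, f0_max)
    jitter = call(pulses, "Get jitter (local)", 0, 0, 0.0001, 0.02, 1.3)
    shimmer = call([snd, pulses], "Get shimmer (local)",
                   0, 0, 0.0001, 0.02, 1.3, 1.6)

    # Harmonics-to-noise ratio (dB)
    harmonicity = call(snd, "To Harmonicity (cc)", 0.01, f0_min, 0.1, 1.0)
    hnr = call(harmonicity, "Get mean", 0, 0)

    return {"f0": f0_mean, "f0SD": f0_sd, "jitter": jitter,
            "shimmer": shimmer, "HNR": hnr}

print(voice_measures("sustained_a.wav"))  # hypothetical recording
```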

21 pages, 7343 KiB  
Review
Update on Practical Management of Early-Stage Non-Small Cell Lung Cancer (NSCLC): A Report from the Ontario Forum
by Parneet K. Cheema, Paul F. Wheatley-Price, Matthew J. Cecchini, Peter M. Ellis, Alexander V. Louie, Sara Moore, Brandon S. Sheffield, Jonathan D. Spicer, Patrick James Villeneuve and Natasha B. Leighl
Curr. Oncol. 2024, 31(11), 6979-6999; https://doi.org/10.3390/curroncol31110514 - 8 Nov 2024
Viewed by 1831
Abstract
Therapeutic strategies for early-stage non-small cell lung cancer (NSCLC) are advancing, with immune checkpoint inhibitors (ICIs) and targeted therapies making their way into neoadjuvant and adjuvant settings. With recent advances, there was a need for multidisciplinary lung cancer healthcare providers from across Ontario to convene and review recent data from practical and implementation standpoints. The focus was on the following questions: (1) To what extent do patient (e.g., history of smoking) and disease (e.g., histology, tumor burden, nodal involvement) characteristics influence treatment approaches? (2) What are the surgical considerations in early-stage NSCLC? (3) What is the role of radiation therapy in the context of recent evidence? (4) What is the impact of biomarker testing on treatment planning? Ongoing challenges, treatment gaps, outstanding questions, and controversies with the data were assessed through a pre-meeting survey, interactive cases, and polling questions. By reviewing practice patterns across Ontario cancer centers in the context of evolving clinical data, Health Canada indications, and provincial (Cancer Care Ontario [CCO]) funding approvals, physicians treating lung cancer voiced their opinions on how new approaches should be integrated into provincial treatment algorithms. This report summarizes the forum outcomes, including pre-meeting survey and polling question results, as well as agreements on treatment approaches based on specific patient scenarios. Full article

38 pages, 1732 KiB  
Review
Voice as a Biomarker of Pediatric Health: A Scoping Review
by Hannah Paige Rogers, Anne Hseu, Jung Kim, Elizabeth Silberholz, Stacy Jo, Anna Dorste and Kathy Jenkins
Children 2024, 11(6), 684; https://doi.org/10.3390/children11060684 - 4 Jun 2024
Cited by 2 | Viewed by 2298
Abstract
The human voice has the potential to serve as a valuable biomarker for the early detection, diagnosis, and monitoring of pediatric conditions. This scoping review synthesizes the current knowledge on the application of artificial intelligence (AI) in analyzing pediatric voice as a biomarker for health. The included studies featured voice recordings from pediatric populations aged 0–17 years, utilized feature extraction methods, and analyzed pathological biomarkers using AI models. Data from 62 studies were extracted, encompassing study and participant characteristics, recording sources, feature extraction methods, and AI models. Data from 39 models across 35 studies were evaluated for accuracy, sensitivity, and specificity. The review showed a global representation of pediatric voice studies, with a focus on developmental, respiratory, speech, and language conditions. The most frequently studied conditions were autism spectrum disorder, intellectual disabilities, asphyxia, and asthma. Mel-Frequency Cepstral Coefficients were the most utilized feature extraction method, while Support Vector Machines were the predominant AI model. The analysis of pediatric voice using AI demonstrates promise as a non-invasive, cost-effective biomarker for a broad spectrum of pediatric conditions. Further research is necessary to standardize the feature extraction methods and AI models utilized for the evaluation of pediatric voice as a biomarker for health. Standardization has significant potential to enhance the accuracy and applicability of these tools in clinical settings across a variety of conditions and voice recording types. Further development of this field has enormous potential for the creation of innovative diagnostic tools and interventions for pediatric populations globally. Full article
(This article belongs to the Section Pediatric Otolaryngology)
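As a rough illustration of the pipeline most often reported in this review (MFCC features fed to a Support Vector Machine), the sketch below uses librosa and scikit-learn. The recording list and labels are hypothetical placeholders, not data from any included study.

```python
# Sketch: MFCC feature extraction + SVM classification with cross-validation.
# File names and labels are hypothetical placeholders.
import librosa
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def mfcc_features(wav_path, n_mfcc=13):
    # Summarize a recording by the mean and std of its MFCC trajectories
    y, sr = librosa.load(wav_path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# Hypothetical labeled recordings: (path, 0 = typical, 1 = condition present)
recordings = [("child_001.wav", 0), ("child_002.wav", 1)]  # ... more pairs
X = np.array([mfcc_features(p) for p, _ in recordings])
y = np.array([label for _, label in recordings])

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
print(cross_val_score(clf, X, y, cv=5).mean())  # mean cross-validated accuracy
```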

37 pages, 1204 KiB  
Article
Respiratory Diseases Diagnosis Using Audio Analysis and Artificial Intelligence: A Systematic Review
by Panagiotis Kapetanidis, Fotios Kalioras, Constantinos Tsakonas, Pantelis Tzamalis, George Kontogiannis, Theodora Karamanidou, Thanos G. Stavropoulos and Sotiris Nikoletseas
Sensors 2024, 24(4), 1173; https://doi.org/10.3390/s24041173 - 10 Feb 2024
Cited by 8 | Viewed by 6461
Abstract
Respiratory diseases represent a significant global burden, necessitating efficient diagnostic methods for timely intervention. Digital biomarkers based on audio, acoustics, and sound from the upper and lower respiratory system, as well as the voice, have emerged as valuable indicators of respiratory functionality. Recent advancements in machine learning (ML) algorithms offer promising avenues for the identification and diagnosis of respiratory diseases through the analysis and processing of such audio-based biomarkers. An ever-increasing number of studies employ ML techniques to extract meaningful information from audio biomarkers. Beyond disease identification, these studies explore diverse aspects such as the recognition of cough sounds amidst environmental noise, the analysis of respiratory sounds to detect respiratory symptoms like wheezes and crackles, as well as the analysis of the voice/speech for the evaluation of human voice abnormalities. To provide a more in-depth analysis, this review examines 75 relevant audio analysis studies across three distinct areas of concern based on respiratory diseases’ symptoms: (a) cough detection, (b) lower respiratory symptoms identification, and (c) diagnostics from the voice and speech. Furthermore, publicly available datasets commonly utilized in this domain are presented. It is observed that research trends are influenced by the pandemic, with a surge in studies on COVID-19 diagnosis, mobile data acquisition, and remote diagnosis systems. Full article
(This article belongs to the Special Issue Human Signal Processing Based on Wearable Non-invasive Device)

12 pages, 1629 KiB  
Article
Acoustic Voice Analysis as a Useful Tool to Discriminate Different ALS Phenotypes
by Giammarco Milella, Diletta Sciancalepore, Giada Cavallaro, Glauco Piccirilli, Alfredo Gabriele Nanni, Angela Fraddosio, Eustachio D’Errico, Damiano Paolicelli, Maria Luisa Fiorella and Isabella Laura Simone
Biomedicines 2023, 11(9), 2439; https://doi.org/10.3390/biomedicines11092439 - 31 Aug 2023
Cited by 3 | Viewed by 1940
Abstract
Approximately 80–96% of people with amyotrophic lateral sclerosis (ALS) become unable to speak during the disease progression. Assessing upper and lower motor neuron impairment in bulbar regions of ALS patients remains challenging, particularly in distinguishing spastic and flaccid dysarthria. This study aimed to evaluate acoustic voice parameters as useful biomarkers to discriminate ALS clinical phenotypes. Triangular vowel space area (tVSA), alternating motion rates (AMRs), and sequential motion rates (SMRs) were analyzed in 36 ALS patients and 20 sex/age-matched healthy controls (HCs). tVSA, AMR, and SMR values significantly differed between ALS and HCs, and between ALS with prevalent upper (pUMN) and lower motor neuron (pLMN) impairment. tVSA showed higher accuracy in discriminating pUMN from pLMN patients. AMR and SMR were significantly lower in patients with bulbar onset than those with spinal onset, both with and without bulbar symptoms. Furthermore, these values were also lower in patients with spinal onset associated with bulbar symptoms than in those with spinal onset alone. Additionally, AMR and SMR values correlated with the degree of dysphagia. Acoustic voice analysis may be considered a useful prognostic tool to differentiate spastic and flaccid dysarthria and to assess the degree of bulbar involvement in ALS. Full article
(This article belongs to the Special Issue New Insights into Motor Neuron Diseases)
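For readers unfamiliar with tVSA, it is the area of the triangle formed in F1-F2 space by the corner vowels /a/, /i/, and /u/; a minimal sketch of that calculation is below. The formant values are illustrative, not measurements from this study.

```python
# Sketch: triangular vowel space area (tVSA) from (F1, F2) of the corner vowels,
# using the standard triangle-area (shoelace) formula. Values are illustrative.
def tvsa(f_a, f_i, f_u):
    """Each argument is an (F1, F2) pair in Hz; returns the area in Hz^2."""
    (a1, a2), (i1, i2), (u1, u2) = f_a, f_i, f_u
    return 0.5 * abs(i1 * (a2 - u2) + a1 * (u2 - i2) + u1 * (i2 - a2))

# Illustrative formant values (Hz) for an adult speaker
print(tvsa(f_a=(750, 1300), f_i=(300, 2300), f_u=(350, 900)))
```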

15 pages, 2637 KiB  
Article
Hybrid Machine Learning Framework for Multistage Parkinson’s Disease Classification Using Acoustic Features of Sustained Korean Vowels
by S. I. M. M. Raton Mondol, Ryul Kim and Sangmin Lee
Bioengineering 2023, 10(8), 984; https://doi.org/10.3390/bioengineering10080984 - 20 Aug 2023
Cited by 3 | Viewed by 2005
Abstract
Recent research has achieved high classification accuracy in separating healthy people from those with Parkinson’s disease (PD) using speech and the voice. However, these studies have primarily treated early and advanced stages of PD as equal entities, neglecting the distinctive speech impairments and other symptoms that vary across the different stages of the disease. To address this limitation and improve diagnostic precision, this study assesses selected acoustic features of dysphonia, as they relate to PD and the Hoehn and Yahr stages, by combining various preprocessing techniques and multiple classification algorithms to create a comprehensive and robust solution for classification tasks. The dysphonia features extracted from the three sustained Korean vowels /아/(a), /이/(i), and /우/(u) exhibit diversity and strong correlations. To address this issue, the analysis of variance (ANOVA) F-value feature selection method from scikit-learn was employed to identify the most relevant features. Additionally, to overcome the class imbalance problem, the synthetic minority over-sampling technique (SMOTE) was utilized. To ensure fair comparisons and mitigate the influence of individual classifiers, four commonly used machine learning classifiers, namely random forest (RF), support vector machine (SVM), k-nearest neighbor (kNN), and multi-layer perceptron (MLP), were employed. This approach enables a comprehensive evaluation of the feature extraction methods and minimizes the variance of the final classification models. The proposed hybrid machine learning pipeline using the acoustic features of sustained vowels efficiently detects the early and mid-advanced stages of PD with a detection accuracy of 95.48%, and with detection accuracies of 86.62% and 89.48% for the four-stage and three-stage classification of PD, respectively. This study successfully demonstrates the significance of utilizing the diverse acoustic features of dysphonia in the classification of PD and its stages. Full article
(This article belongs to the Special Issue Artificial Intelligence in Auto-Diagnosis and Clinical Applications)
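A minimal sketch of the pipeline described in the abstract (ANOVA F-value selection, SMOTE oversampling, and a comparison of RF, SVM, kNN, and MLP) is given below using scikit-learn and imbalanced-learn. The feature matrix and stage labels are synthetic stand-ins, not the Korean-vowel dysphonia data.

```python
# Sketch: ANOVA F-value feature selection + SMOTE + four classifiers, 5-fold CV.
# X and y are synthetic stand-ins for the sustained-vowel dysphonia features.
import numpy as np
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 40))       # placeholder acoustic features
y = rng.integers(0, 3, size=200)     # placeholder labels (three-stage case)

classifiers = {
    "RF": RandomForestClassifier(n_estimators=200, random_state=0),
    "SVM": SVC(),
    "kNN": KNeighborsClassifier(),
    "MLP": MLPClassifier(max_iter=2000, random_state=0),
}
for name, clf in classifiers.items():
    pipe = Pipeline([
        ("select", SelectKBest(f_classif, k=20)),   # top ANOVA F-value features
        ("smote", SMOTE(random_state=0)),           # oversample minority stages
        ("clf", clf),
    ])
    print(name, cross_val_score(pipe, X, y, cv=5).mean())
```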

16 pages, 3585 KiB  
Article
Voice Disorder Multi-Class Classification for the Distinction of Parkinson’s Disease and Adductor Spasmodic Dysphonia
by Valerio Cesarini, Giovanni Saggio, Antonio Suppa, Francesco Asci, Antonio Pisani, Alessandra Calculli, Rayan Fayad, Mohamad Hajj-Hassan and Giovanni Costantini
Appl. Sci. 2023, 13(15), 8562; https://doi.org/10.3390/app13158562 - 25 Jul 2023
Cited by 8 | Viewed by 2060
Abstract
Parkinson’s Disease and Adductor-type Spasmodic Dysphonia are two neurological disorders that greatly decrease the quality of life of millions of patients worldwide. Despite their wide diffusion, the related diagnoses are often performed empirically, although it would be valuable to rely on objective, measurable biomarkers; among these, researchers have been considering features related to voice impairment, which can be useful indicators but can sometimes lead to confusion. Therefore, our purpose here was to develop a robust Machine Learning approach for multi-class classification based on 6373 voice features extracted from a convenience voice dataset made of the sustained vowel /e/ and an ad hoc selected Italian sentence, performed by 111 healthy subjects, 51 Parkinson’s disease patients, and 60 dysphonic patients. Correlation, Information Gain, Gain Ratio, and Genetic Algorithm-based methodologies were compared for feature selection, to build subsets analyzed by means of Naïve Bayes, Random Forest, and Multi-Layer Perceptron classifiers trained with 10-fold cross-validation. As a result, spectral, cepstral, prosodic, and voicing-related features were assessed as the most relevant, the Genetic Algorithm performed as the most effective feature selector, and the adopted classifiers performed similarly. In particular, a Genetic Algorithm + Naïve Bayes approach yielded one of the highest accuracies in multi-class voice analysis: 95.70% for a sustained vowel and 99.46% for a sentence. Full article
(This article belongs to the Section Acoustics and Vibrations)
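Of the compared selectors, the Genetic Algorithm is the most involved to reproduce; as a simpler hedged sketch, the snippet below illustrates the Information Gain route (approximated with scikit-learn's mutual-information scorer) feeding a Naïve Bayes classifier under 10-fold cross-validation. The data are synthetic stand-ins for the 6373 extracted voice features.

```python
# Sketch: Information-Gain-style selection (mutual information as a stand-in)
# + Gaussian Naive Bayes, 10-fold cross-validation. Data are synthetic.
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
X = rng.normal(size=(222, 300))    # placeholder for the 6373 voice features
y = rng.integers(0, 3, size=222)   # 0 = healthy, 1 = PD, 2 = dysphonic (placeholder)

pipe = make_pipeline(SelectKBest(mutual_info_classif, k=50), GaussianNB())
print(cross_val_score(pipe, X, y, cv=10).mean())  # mean 10-fold accuracy
```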

13 pages, 2574 KiB  
Article
Building Predictive Models for Schizophrenia Diagnosis with Peripheral Inflammatory Biomarkers
by Evgeny A. Kozyrev, Evgeny A. Ermakov, Anastasiia S. Boiko, Irina A. Mednova, Elena G. Kornetova, Nikolay A. Bokhan and Svetlana A. Ivanova
Biomedicines 2023, 11(7), 1990; https://doi.org/10.3390/biomedicines11071990 - 14 Jul 2023
Cited by 3 | Viewed by 2504
Abstract
Machine learning and artificial intelligence technologies are known to be a convenient tool for analyzing multi-domain data in precision psychiatry. In the case of schizophrenia, the most commonly used data sources for such purposes are neuroimaging, voice and language patterns, and mobile phone data. Data on peripheral markers can also be useful for building predictive models. Here, we have developed five predictive models for the binary classification of schizophrenia patients and healthy individuals. Data on serum concentrations of cytokines, chemokines, and growth factors, together with age, were among the 38 parameters used to build these models. The sample consisted of 217 schizophrenia patients and 90 healthy individuals. The model architectures involved logistic regression, deep neural networks, decision trees, support vector machine, and k-nearest neighbors algorithms. The algorithm based on a deep neural network (consisting of five layers) showed slightly higher sensitivity (0.87 ± 0.04) and specificity (0.52 ± 0.06) than the other algorithms. Combining all variables into a single classifier showed a cumulative effect that exceeded the effectiveness of individual variables, indicating the need to use multiple biomarkers to diagnose schizophrenia. Thus, the data obtained show the promise of using peripheral biomarkers and machine learning methods for diagnosing schizophrenia. Full article
(This article belongs to the Section Neurobiology and Clinical Neuroscience)
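The sensitivity and specificity figures quoted above can be computed from cross-validated predictions and a confusion matrix; a hedged sketch follows. The five-layer network shape and the synthetic 38-feature data are assumptions, not the authors' model or dataset.

```python
# Sketch: patients vs. controls from peripheral markers, reporting sensitivity
# and specificity. Network shape and data are illustrative assumptions.
import numpy as np
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import cross_val_predict
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
X = rng.normal(size=(307, 38))                      # cytokines, chemokines, growth factors, age
y = np.r_[np.ones(217), np.zeros(90)].astype(int)   # 1 = patient, 0 = control

model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(64, 32, 16, 8, 4), max_iter=2000, random_state=0),
)
y_pred = cross_val_predict(model, X, y, cv=5)
tn, fp, fn, tp = confusion_matrix(y, y_pred).ravel()
print("sensitivity:", tp / (tp + fn), "specificity:", tn / (tn + fp))
```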

33 pages, 1486 KiB  
Article
A Gene-Based Algorithm for Identifying Factors That May Affect a Speaker’s Voice
by Rita Singh
Entropy 2023, 25(6), 897; https://doi.org/10.3390/e25060897 - 2 Jun 2023
Cited by 1 | Viewed by 2106
Abstract
Over the past decades, many machine-learning- and artificial-intelligence-based technologies have been created to deduce biometric or bio-relevant parameters of speakers from their voice. These voice profiling technologies have targeted a wide range of parameters, from diseases to environmental factors, based largely on the fact that they are known to influence voice. Recently, some have also explored, through data-opportunistic biomarker discovery techniques, the prediction of parameters whose influence on voice is not easily observable. However, given the enormous range of factors that can possibly influence voice, more informed methods for selecting those that may be potentially deducible from voice are needed. To this end, this paper proposes a simple path-finding algorithm that attempts to find links between vocal characteristics and perturbing factors using cytogenetic and genomic data. The links represent reasonable selection criteria for use by computational profiling technologies only and are not intended to establish any unknown biological facts. The proposed algorithm is validated using a simple example from the medical literature: the clinically observed effects of specific chromosomal microdeletion syndromes on the vocal characteristics of affected people. In this example, the algorithm attempts to link the genes involved in these syndromes to a single example gene (FOXP2) that is known to play a broad role in voice production. We show that in cases where strong links are exposed, vocal characteristics of the patients are indeed reported to be correspondingly affected. Validation experiments and subsequent analyses confirm that the methodology could potentially be useful in predicting the existence of vocal signatures in naïve cases where their existence has not been otherwise observed. Full article
(This article belongs to the Special Issue Information-Theoretic Approaches in Speech Processing and Recognition)
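To make the path-finding idea concrete, a toy sketch is shown below: a breadth-first search over a gene-association graph for a path linking a syndrome-implicated gene to a voice-relevant anchor gene (FOXP2 in the paper's example). The graph and the gene names other than FOXP2 are hypothetical, not curated cytogenetic or genomic data, and this is not the author's actual algorithm.

```python
# Toy sketch: shortest association path from a syndrome gene to FOXP2 via BFS.
# The graph and the GENE_* names are hypothetical placeholders.
from collections import deque

def find_path(graph, start, target):
    """Breadth-first search; returns one shortest path of associations, or None."""
    queue, visited = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for neighbor in graph.get(path[-1], []):
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(path + [neighbor])
    return None  # no documented link found

graph = {
    "GENE_A": ["GENE_B", "GENE_C"],
    "GENE_B": ["FOXP2"],
    "GENE_C": ["GENE_D"],
}
print(find_path(graph, "GENE_A", "FOXP2"))  # ['GENE_A', 'GENE_B', 'FOXP2']
```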

15 pages, 632 KiB  
Review
A Review of Voice-Based Pain Detection in Adults Using Artificial Intelligence
by Sahar Borna, Clifton R. Haider, Karla C. Maita, Ricardo A. Torres, Francisco R. Avila, John P. Garcia, Gioacchino D. De Sario Velasquez, Christopher J. McLeod, Charles J. Bruce, Rickey E. Carter and Antonio J. Forte
Bioengineering 2023, 10(4), 500; https://doi.org/10.3390/bioengineering10040500 - 21 Apr 2023
Cited by 5 | Viewed by 3776
Abstract
Pain is a complex and subjective experience, and traditional methods of pain assessment can be limited by factors such as self-report bias and observer variability. Voice is frequently used to evaluate pain, occasionally in conjunction with other behaviors such as facial gestures. Compared to facial emotions, there is less available evidence linking pain with voice. This literature review synthesizes the current state of research on the use of voice recognition and voice analysis for pain detection in adults, with a specific focus on the role of artificial intelligence (AI) and machine learning (ML) techniques. We describe the previous works on pain recognition using voice and highlight the different approaches to voice as a tool for pain detection, such as a human effect or biosignal. Overall, studies have shown that AI-based voice analysis can be an effective tool for pain detection in adult patients with various types of pain, including chronic and acute pain. We highlight the high accuracy of the ML-based approaches used in studies and their limitations in terms of generalizability due to factors such as the nature of the pain and patient population characteristics. However, there are still potential challenges, such as the need for large datasets and the risk of bias in training models, which warrant further research. Full article
(This article belongs to the Special Issue Deep Learning and Medical Innovation in Minimally Invasive Surgery)

22 pages, 2835 KiB  
Article
Artificial Intelligence-Based Voice Assessment of Patients with Parkinson’s Disease Off and On Treatment: Machine vs. Deep-Learning Comparison
by Giovanni Costantini, Valerio Cesarini, Pietro Di Leo, Federica Amato, Antonio Suppa, Francesco Asci, Antonio Pisani, Alessandra Calculli and Giovanni Saggio
Sensors 2023, 23(4), 2293; https://doi.org/10.3390/s23042293 - 18 Feb 2023
Cited by 37 | Viewed by 6157
Abstract
Parkinson’s Disease (PD) is one of the most common non-curable neurodegenerative diseases. Diagnosis is achieved clinically on the basis of different symptoms, with considerable delays from the onset of neurodegenerative processes in the central nervous system. In this study, we investigated early and full-blown PD patients based on the analysis of their voice characteristics with the aid of the most commonly employed machine learning (ML) techniques. A custom dataset was made with hi-fi quality recordings of vocal tasks gathered from Italian healthy control subjects and PD patients, divided into early diagnosed, off-medication patients on the one hand, and mid-advanced patients treated with L-Dopa on the other. Following the current state of the art, several ML pipelines were compared using different feature selection and classification algorithms, and deep learning was also explored with a custom CNN architecture. Results show how feature-based ML and deep learning achieve comparable results in terms of classification, with KNN, SVM, and naïve Bayes classifiers performing similarly, and with a slight edge for KNN. Much more evident is the predominance of CFS as the best feature selector. The selected features act as relevant vocal biomarkers capable of differentiating healthy subjects, early untreated PD patients, and mid-advanced L-Dopa-treated patients. Full article
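CFS scores a feature subset by how strongly its features correlate with the class relative to how much they correlate with each other; the sketch below implements a greedy forward search over that merit score. It illustrates the selector's idea on synthetic data and is not the authors' implementation.

```python
# Sketch: CFS-style greedy forward selection using Hall's merit score,
#   merit = k * mean|feature-class corr| / sqrt(k + k*(k-1) * mean|feature-feature corr|).
# Data are synthetic stand-ins for the vocal features.
import numpy as np

def cfs_merit(X, y, subset):
    feats = X[:, subset]
    k = len(subset)
    r_cf = np.mean([abs(np.corrcoef(feats[:, j], y)[0, 1]) for j in range(k)])
    if k == 1:
        r_ff = 0.0
    else:
        corr = np.abs(np.corrcoef(feats, rowvar=False))
        r_ff = (corr.sum() - k) / (k * (k - 1))   # mean off-diagonal correlation
    return k * r_cf / np.sqrt(k + k * (k - 1) * r_ff)

def cfs_forward(X, y, max_features=10):
    selected, remaining, best = [], list(range(X.shape[1])), -np.inf
    while remaining and len(selected) < max_features:
        merit, feat = max((cfs_merit(X, y, selected + [f]), f) for f in remaining)
        if merit <= best:
            break  # no candidate improves the subset
        best = merit
        selected.append(feat)
        remaining.remove(feat)
    return selected

rng = np.random.default_rng(3)
X = rng.normal(size=(150, 30))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(size=150) > 0).astype(float)
print(cfs_forward(X, y))
```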

13 pages, 782 KiB  
Article
Distinguish the Severity of Illness Associated with Novel Coronavirus (COVID-19) Infection via Sustained Vowel Speech Features
by Yasuhiro Omiya, Daisuke Mizuguchi and Shinichi Tokuno
Int. J. Environ. Res. Public Health 2023, 20(4), 3415; https://doi.org/10.3390/ijerph20043415 - 15 Feb 2023
Cited by 2 | Viewed by 2241
Abstract
The authors are currently conducting research on methods to estimate psychiatric and neurological disorders from the voice by focusing on features of speech. It is empirically known that numerous psychosomatic symptoms appear in voice biomarkers; in this study, we examined the effectiveness of using speech features to distinguish changes in the symptoms associated with novel coronavirus infection. Multiple speech features were extracted from the voice recordings and, as a countermeasure against overfitting, features were selected using statistical analysis and feature selection methods utilizing pseudo data; machine learning models were then built and verified using LightGBM. Applying 5-fold cross-validation and using three types of sustained vowel sounds, /Ah/, /Eh/, and /Uh/, we achieved high performance (accuracy and AUC) of over 88% in distinguishing “asymptomatic or mild illness (symptoms)” from “moderate illness 1 (symptoms)”. Accordingly, the results suggest that the proposed index using voice (speech features) can likely be used to distinguish the symptoms associated with novel coronavirus infection. Full article
(This article belongs to the Special Issue Data Science and New Technologies in Public Health)
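A minimal sketch of the described evaluation (LightGBM under 5-fold cross-validation, scored by accuracy and AUC) is shown below. The feature matrix is a synthetic stand-in for the speech features of the sustained /Ah/, /Eh/, and /Uh/ vowels.

```python
# Sketch: LightGBM binary classifier, 5-fold CV, accuracy and AUC.
# X and y are synthetic stand-ins for the sustained-vowel speech features.
import numpy as np
from lightgbm import LGBMClassifier
from sklearn.model_selection import cross_validate

rng = np.random.default_rng(4)
X = rng.normal(size=(300, 60))        # placeholder speech features
y = rng.integers(0, 2, size=300)      # 0 = asymptomatic/mild, 1 = moderate illness 1

cv = cross_validate(LGBMClassifier(n_estimators=200, random_state=0),
                    X, y, cv=5, scoring=("accuracy", "roc_auc"))
print("accuracy:", cv["test_accuracy"].mean(), "AUC:", cv["test_roc_auc"].mean())
```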

18 pages, 1256 KiB  
Article
Acoustic Voice and Speech Biomarkers of Treatment Status during Hospitalization for Acute Decompensated Heart Failure
by Olivia M. Murton, G. William Dec, Robert E. Hillman, Maulik D. Majmudar, Johannes Steiner, John V. Guttag and Daryush D. Mehta
Appl. Sci. 2023, 13(3), 1827; https://doi.org/10.3390/app13031827 - 31 Jan 2023
Cited by 4 | Viewed by 3641
Abstract
This study investigates acoustic voice and speech features as biomarkers for acute decompensated heart failure (ADHF), a serious escalation of heart failure symptoms including breathlessness and fatigue. ADHF-related systemic fluid accumulation in the lungs and laryngeal tissues is hypothesized to affect phonation and respiration for speech. A set of daily spoken recordings from 52 patients undergoing inpatient ADHF treatment was analyzed to identify voice and speech biomarkers for ADHF and to examine the trajectory of biomarkers during treatment. Results indicated that speakers produce more stable phonation, a more creaky voice, faster speech rates, and longer phrases after ADHF treatment compared to their pre-treatment voices. This project builds on work to develop a method of monitoring ADHF using speech biomarkers and presents a more detailed understanding of relevant voice and speech features. Full article
(This article belongs to the Special Issue Computational Methods and Engineering Solutions to Voice III)
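Pause counts, pause durations, and phrase lengths of the kind analyzed here can be approximated by splitting a recording on silence; a hedged sketch using librosa follows. The file name and the silence and pause thresholds are illustrative assumptions, and the study's actual feature set (e.g., creak and phonation-stability measures) is richer.

```python
# Sketch: pause and phrase timing features from a spoken recording via
# energy-based silence splitting. Thresholds and file name are illustrative.
import librosa
import numpy as np

y, sr = librosa.load("daily_recording.wav", sr=16000)   # hypothetical recording
intervals = librosa.effects.split(y, top_db=30)          # non-silent [start, end] samples

phrase_durs = (intervals[:, 1] - intervals[:, 0]) / sr
gaps = (intervals[1:, 0] - intervals[:-1, 1]) / sr
pauses = gaps[gaps > 0.15]                               # gaps > 150 ms counted as pauses

print("number of pauses:", len(pauses))
print("mean pause duration (s):", pauses.mean() if len(pauses) else 0.0)
print("mean phrase duration (s):", phrase_durs.mean())
```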

23 pages, 1982 KiB  
Article
A Hybrid U-Lossian Deep Learning Network for Screening and Evaluating Parkinson’s Disease
by Rytis Maskeliūnas, Robertas Damaševičius, Audrius Kulikajevas, Evaldas Padervinskis, Kipras Pribuišis and Virgilijus Uloza
Appl. Sci. 2022, 12(22), 11601; https://doi.org/10.3390/app122211601 - 15 Nov 2022
Cited by 26 | Viewed by 5924
Abstract
Speech impairment analysis and processing technologies have evolved substantially in recent years, and the use of voice as a biomarker has gained popularity. We have developed an approach to clinical speech signal processing to demonstrate the promise of deep-learning-driven voice analysis as a screening tool for Parkinson’s Disease (PD), the world’s second most prevalent neurodegenerative disease. Detecting Parkinson’s disease symptoms typically involves an evaluation by a movement disorder expert, which can be difficult to obtain and may yield varied findings. A vocal digital biomarker could supplement the time-consuming traditional manual examination by recognizing and evaluating symptoms that characterize voice quality and level of deterioration. We present a deep-learning-based, custom U-lossian model for PD assessment and recognition. The study’s goal was to discover anomalies in the PD-affected voice and to develop an automated screening method that can discriminate between the voices of PD patients and healthy volunteers while also providing a voice quality score. The classification accuracy was evaluated on two speech corpora (the Italian PVS corpus and our own Lithuanian PD voice dataset), and we found the results to be medically appropriate, with values of 0.8964 and 0.7949, confirming the proposed model’s high generalizability. Full article

15 pages, 4825 KiB  
Article
Motor Signatures in Digitized Cognitive and Memory Tests Enhances Characterization of Parkinson’s Disease
by Jihye Ryu and Elizabeth B. Torres
Sensors 2022, 22(12), 4434; https://doi.org/10.3390/s22124434 - 11 Jun 2022
Cited by 1 | Viewed by 2438
Abstract
Although interest in using wearable sensors to characterize movement disorders is growing, there is a lack of methodology for developing clinically interpretable biomarkers. Such digital biomarkers would provide a more objective diagnosis, capturing finer degrees of motor deficits, while retaining the information of traditional clinical tests. We aim to digitize traditional tests of cognitive and memory performance to derive motor biometrics of pen strokes and voice, thereby complementing clinical tests with objective criteria, while enhancing the overall characterization of Parkinson’s disease (PD). Thirty-five participants, including patients with PD, healthy young controls, and age-matched controls, performed a series of drawing and memory tasks while their pen movement and voice were digitized. We examined the moment-to-moment variability of time series reflecting pen speed and voice amplitude. The stochastic signatures of the fluctuations in pen drawing speed and voice amplitude of patients with PD show a higher signal-to-noise ratio than those of neurotypical controls. It appears that the contact motions of the pen strokes on a tablet evoke sensory feedback that supports more immediate and predictable control in PD, while voice amplitude loses its neurotypical richness. We offer new standardized data types and analytics to uncover the hidden motor aspects within cognitive and memory clinical assays. Full article
(This article belongs to the Topic Human Movement Analysis)
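As a toy illustration of examining moment-to-moment variability, the snippet below summarizes the fluctuations of a pen-speed or voice-amplitude series with a simple signal-to-noise ratio (mean over standard deviation of the fluctuation series). This definition is an assumption for illustration only; the study applies its own stochastic-signature analysis to such series.

```python
# Toy sketch: SNR of moment-to-moment fluctuations of a time series.
# The mean/std definition and the random traces are illustrative assumptions.
import numpy as np

def fluctuation_snr(series):
    """SNR (mean / std) of the absolute moment-to-moment changes."""
    fluctuations = np.abs(np.diff(series))
    return fluctuations.mean() / fluctuations.std()

rng = np.random.default_rng(5)
pen_speed = np.abs(rng.normal(1.0, 0.2, size=500))   # placeholder pen-speed trace
voice_amp = np.abs(rng.normal(0.5, 0.3, size=500))   # placeholder voice-amplitude trace
print("pen-speed SNR:", fluctuation_snr(pen_speed))
print("voice-amplitude SNR:", fluctuation_snr(voice_amp))
```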
