DOI: 10.1145/3533767.3534391 · ISSTA Conference Proceedings · Research article

ASRTest: automated testing for deep-neural-network-driven speech recognition systems

Published: 18 July 2022

Abstract

With the rapid development of deep neural networks and end-to-end learning techniques, automatic speech recognition (ASR) systems have been deployed into our daily lives and assist in various tasks. However, despite their tremendous progress, ASR systems can also suffer from software defects and exhibit incorrect behaviors. The nature of DNNs makes conventional software testing techniques inapplicable to ASR systems, and the lack of diverse tests and oracle information further hinders their testing. In this paper, we propose and implement a testing approach, namely ASRTest, specifically for DNN-driven ASR systems. ASRTest is built upon the theory of metamorphic testing. We first design the metamorphic relation for ASR systems and then implement three families of transformation operators that simulate practical application scenarios to generate speech. Furthermore, we adopt Gini impurity to guide the generation process and improve testing efficiency. To validate the effectiveness of ASRTest, we apply it to four ASR models with four widely used datasets. The results show that ASRTest can efficiently detect erroneous behaviors under different realistic application conditions and improve recognition performance by 19.1% on average via retraining with the generated data. We also conduct a case study on an industrial ASR system to investigate the performance of ASRTest under a real usage scenario. The study shows that ASRTest can effectively detect errors and improve the performance of DNN-driven ASR systems.
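The ingredients named in the abstract — a content-preserving transformation operator, a metamorphic relation checked on the recognizer's output, and Gini impurity as a guidance signal — can be sketched in a few lines. This is an illustrative reconstruction, not the authors' implementation: the white-noise operator, the word-error-rate (WER) routine, and the `transcribe` callback below are generic stand-ins for ASRTest's actual operators and models.

```python
import numpy as np

def add_white_noise(signal: np.ndarray, snr_db: float, seed: int = 0) -> np.ndarray:
    """Transformation operator: inject white noise at a target SNR (in dB)."""
    sig_power = float(np.mean(signal ** 2))
    noise_power = sig_power / (10.0 ** (snr_db / 10.0))
    noise = np.random.default_rng(seed).normal(0.0, np.sqrt(noise_power), signal.shape)
    return signal + noise

def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: Levenshtein distance over word tokens, normalized."""
    r, h = reference.split(), hypothesis.split()
    d = np.zeros((len(r) + 1, len(h) + 1), dtype=int)
    d[:, 0] = np.arange(len(r) + 1)
    d[0, :] = np.arange(len(h) + 1)
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            d[i, j] = min(d[i - 1, j] + 1,                              # deletion
                          d[i, j - 1] + 1,                              # insertion
                          d[i - 1, j - 1] + (r[i - 1] != h[j - 1]))     # substitution
    return float(d[-1, -1]) / max(len(r), 1)

def gini_impurity(frame_posteriors: np.ndarray) -> float:
    """Mean per-frame Gini impurity (1 - sum of squared probabilities) of the
    model's output distributions; higher means less confident, so such inputs
    are prioritized during generation."""
    return float(np.mean(1.0 - np.sum(frame_posteriors ** 2, axis=-1)))

def violates_metamorphic_relation(transcribe, audio, snr_db=20.0, threshold=0.0):
    """Metamorphic relation: a realistic, content-preserving transformation
    should not change the transcription. WER above `threshold` between the
    original and follow-up outputs flags an erroneous behavior."""
    original = transcribe(audio)
    followup = transcribe(add_white_noise(audio, snr_db))
    return bool(wer(original, followup) > threshold)
```

Here `transcribe` stands for whichever ASR front-end is under test; in the paper's evaluation this would be one of the four studied models, and the noise operator would be one of the three operator families.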




Published In

ISSTA 2022: Proceedings of the 31st ACM SIGSOFT International Symposium on Software Testing and Analysis
July 2022
808 pages
ISBN:9781450393799
DOI:10.1145/3533767
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher

Association for Computing Machinery

New York, NY, United States



Author Tags

  1. Automated Testing
  2. Automatic Speech Recognition
  3. Deep Neural Networks
  4. Metamorphic Testing

Qualifiers

  • Research-article

Conference

ISSTA '22

Acceptance Rates

Overall Acceptance Rate 58 of 213 submissions, 27%



Article Metrics

  • Downloads (Last 12 months)133
  • Downloads (Last 6 weeks)15
Reflects downloads up to 10 Nov 2024


Cited By

  • (2024) ObjTest: Object-Level Mutation for Testing Object Detection Systems. Proceedings of the 15th Asia-Pacific Symposium on Internetware, 61–70. https://doi.org/10.1145/3671016.3671400 (24 Jul 2024)
  • (2024) COSTELLO: Contrastive Testing for Embedding-Based Large Language Model as a Service Embeddings. Proceedings of the ACM on Software Engineering, 1(FSE), 906–928. https://doi.org/10.1145/3643767 (12 Jul 2024)
  • (2024) MetaSem: Metamorphic Testing Based on Semantic Information of Autonomous Driving Scenes. Software Testing, Verification and Reliability, 34(5). https://doi.org/10.1002/stvr.1878 (May 2024)
  • (2023) ROME: Testing Image Captioning Systems via Recursive Object Melting. Proceedings of the 32nd ACM SIGSOFT International Symposium on Software Testing and Analysis, 766–778. https://doi.org/10.1145/3597926.3598094 (12 Jul 2023)
  • (2023) Augmented Datasheets for Speech Datasets and Ethical Decision-Making. Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 881–904. https://doi.org/10.1145/3593013.3594049 (12 Jun 2023)
  • (2023) ASDF: A Differential Testing Framework for Automatic Speech Recognition Systems. 2023 IEEE Conference on Software Testing, Verification and Validation (ICST), 461–463. https://doi.org/10.1109/ICST57152.2023.00050 (Apr 2023)
  • (2023) FedSlice: Protecting Federated Learning Models from Malicious Participants with Model Slicing. Proceedings of the 45th International Conference on Software Engineering, 460–472. https://doi.org/10.1109/ICSE48619.2023.00049 (14 May 2023)
  • (2023) Application of Sensor Network and Speech Recognition System in Online English Teaching. International Journal of System Assurance Engineering and Management. https://doi.org/10.1007/s13198-023-02137-2 (24 Sep 2023)
  • (2023) MetaLiDAR: Automated Metamorphic Testing of LiDAR-Based Autonomous Driving Systems. Journal of Software: Evolution and Process. https://doi.org/10.1002/smr.2644 (20 Dec 2023)
  • (2022) Boosting the Revealing of Detected Violations in Deep Learning Testing: A Diversity-Guided Method. Proceedings of the 37th IEEE/ACM International Conference on Automated Software Engineering, 1–13. https://doi.org/10.1145/3551349.3556919 (10 Oct 2022)
