Exploration on Advanced Intelligent Algorithms of Artificial Intelligence for Verb Recognition in Machine Translation

Published: 08 August 2024

Abstract

This article aimed to address the problems of word-order confusion, context dependency, and ambiguity that traditional machine translation (MT) methods face in verb recognition. By applying advanced intelligent algorithms of artificial intelligence, verb recognition can be handled better and the quality and accuracy of MT improved. Building on neural machine translation (NMT), the model introduced basic attention mechanisms, historical attention information, dynamic retrieval of information related to the words already generated, and constraint mechanisms, in order to embed semantic information, represent polysemy, and annotate the semantic roles of verbs. Experiments used the Workshop on MT (WMT), British National Corpus (BNC), Gutenberg, Reuters Corpus, and OpenSubtitles corpora, with data augmentation applied to each. The improved NMT model was compared with a traditional NMT model, rule-based MT (RBMT), and statistical MT (SMT). The experimental results showed that, averaged over the five corpora, the improved NMT model achieved a verb semantic matching degree of 0.85 and a Bilingual Evaluation Understudy (BLEU) score of 0.90. The improved NMT model can thus effectively improve the accuracy of verb recognition in MT, providing a new method for the task.
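The BLEU scores reported above follow the standard definition: a geometric mean of clipped n-gram precisions multiplied by a brevity penalty. As a self-contained illustration of that metric (not the paper's evaluation pipeline), a minimal sentence-level BLEU can be sketched in Python; the whitespace tokenization and the 1e-9 smoothing floor are simplifying assumptions:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(reference, candidate, max_n=4):
    """Sentence-level BLEU: geometric mean of clipped n-gram
    precisions (n = 1..max_n), times a brevity penalty that
    punishes candidates shorter than the reference."""
    precisions = []
    for n in range(1, max_n + 1):
        cand = Counter(ngrams(candidate, n))
        ref = Counter(ngrams(reference, n))
        # clip each candidate n-gram count by its count in the reference
        overlap = sum(min(c, ref[g]) for g, c in cand.items())
        total = max(sum(cand.values()), 1)
        precisions.append(max(overlap, 1e-9) / total)  # crude smoothing
    geo = math.exp(sum(math.log(p) for p in precisions) / max_n)
    bp = 1.0 if len(candidate) >= len(reference) else \
        math.exp(1 - len(reference) / len(candidate))
    return bp * geo

ref = "the cat sat on the mat".split()
hyp = "the cat sat on a mat".split()
print(round(bleu(ref, ref), 3))   # a perfect match scores 1.0
print(bleu(ref, hyp) < 1.0)       # any mismatch scores lower
```

Production evaluations normally use a standardized implementation (e.g., corpus-level BLEU with established smoothing) rather than a hand-rolled sentence-level variant like this one.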

    Published In

    ACM Transactions on Asian and Low-Resource Language Information Processing  Volume 23, Issue 8
    August 2024
    343 pages
    EISSN: 2375-4702
    DOI: 10.1145/3613611

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Publication History

    Published: 08 August 2024
    Online AM: 28 February 2024
    Accepted: 31 January 2024
    Revised: 26 December 2023
    Received: 24 September 2023
    Published in TALLIP Volume 23, Issue 8


    Author Tags

    1. Verb recognition
    2. machine translation
    3. advanced intelligence algorithms
    4. artificial intelligence
    5. neural machine translation
    6. attention mechanisms

    Qualifiers

    • Research-article
