
A Hybrid Siamese Neural Network for Natural Language Inference in Cyber-Physical Systems

Published: 15 March 2021

Abstract

Cyber-Physical Systems (CPS) are multi-dimensional complex systems that connect the physical world to the cyber world, and they must process large amounts of heterogeneous data. These processing tasks include Natural Language Inference (NLI) over text drawn from different sources. However, current research on natural language processing in CPS has not explored this area. This study therefore proposes a Siamese network that combines stacked residual bidirectional Long Short-Term Memory (LSTM) with an attention mechanism and a capsule network, serving as the NLI module in CPS and used to infer the relationship between text/language data from different sources. The model, intended as a basic semantic-understanding module in CPS, is evaluated in detail on three main NLI benchmarks. Comparative experiments show that the proposed method achieves competitive performance, exhibits a degree of generalization ability, and balances performance against the number of trainable parameters.
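The core idea of a Siamese NLI model, as described above, is a single shared encoder applied to both the premise and the hypothesis, followed by a comparison of the two resulting vectors. The sketch below is not the paper's implementation (which uses stacked residual BiLSTMs and a capsule network); it is a minimal numpy illustration of the generic Siamese pattern with attention pooling and the common [u, v, |u-v|, u*v] matching heuristic. All names and dimensions (`VOCAB`, `EMB`, `HID`, `encode`, `match_features`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions, for illustration only.
VOCAB, EMB, HID = 100, 16, 8

# Shared parameters: both branches of a Siamese network use the SAME weights.
W_emb = rng.normal(scale=0.1, size=(VOCAB, EMB))   # word embeddings
W_enc = rng.normal(scale=0.1, size=(EMB, HID))     # encoder projection
w_att = rng.normal(scale=0.1, size=(HID,))         # attention query vector

def encode(token_ids):
    """Shared encoder: embed, project, then attention-pool into one vector."""
    h = np.tanh(W_emb[token_ids] @ W_enc)          # (seq_len, HID)
    scores = h @ w_att                             # attention logits, (seq_len,)
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()                           # softmax attention weights
    return alpha @ h                               # weighted sum -> (HID,)

def match_features(a_ids, b_ids):
    """Matching heuristic used by many Siamese NLI models:
    concatenate [u, v, |u - v|, u * v] and feed it to a classifier."""
    u, v = encode(a_ids), encode(b_ids)
    return np.concatenate([u, v, np.abs(u - v), u * v])

premise = [3, 14, 15, 9]
hypothesis = [3, 14, 2]
feats = match_features(premise, hypothesis)        # (4 * HID,) feature vector
```

Because the encoder weights are shared, both sentences are mapped into the same representation space, which is what makes the element-wise comparison features meaningful.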


Cited By

    • (2024) Research on English Reading Comprehension Strategies Based on Natural Language Processing. Applied Mathematics and Nonlinear Sciences 9:1. DOI: 10.2478/amns-2024-0768. Online publication date: 1-Apr-2024.
    • (2024) Knowledge Graph and Deep Learning-based Text-to-GraphQL Model for Intelligent Medical Consultation Chatbot. Information Systems Frontiers 26:1, 137-156. DOI: 10.1007/s10796-022-10295-0. Online publication date: 1-Feb-2024.
    • (2023) Collaborative Hotspot Data Collection with Drones and 5G Edge Computing in Smart City. ACM Transactions on Internet Technology 23:4, 1-15. DOI: 10.1145/3617373. Online publication date: 17-Nov-2023.
    • (2023) Unpaired Self-supervised Learning for Industrial Cyber-Manufacturing Spectrum Blind Deconvolution. ACM Transactions on Internet Technology 23:4, 1-18. DOI: 10.1145/3590963. Online publication date: 17-Nov-2023.
    • (2023) Tolerance Analysis of Cyber-Manufacturing Systems to Cascading Failures. ACM Transactions on Internet Technology 23:4, 1-23. DOI: 10.1145/3579847. Online publication date: 17-Nov-2023.
    • (2022) Aquaculture Prediction Model Based on Improved Water Quality Parameter Data Prediction Algorithm under the Background of Big Data. Journal of Applied Mathematics 2022, 1-12. DOI: 10.1155/2022/2071360. Online publication date: 25-Nov-2022.
    • (2022) [Retracted] Translation of Japanese Literature Language and Natural Language Environment Understanding Based on Artificial Neural Network. Journal of Environmental and Public Health 2022:1. DOI: 10.1155/2022/2015763. Online publication date: 16-Sep-2022.
    • (2022) Eigenvector-based Graph Neural Network Embeddings and Trust Rating Prediction in Bitcoin Networks. Proceedings of the Third ACM International Conference on AI in Finance, 27-35. DOI: 10.1145/3533271.3561793. Online publication date: 2-Nov-2022.
    • (2022) StaResGRU-CNN with CMedLMs. Applied Soft Computing 113:PB. DOI: 10.1016/j.asoc.2021.107975. Online publication date: 3-Jan-2022.
    • (2022) Natural Language-Based Automatic Programming for Industrial Robots. Journal of Grid Computing 20:3. DOI: 10.1007/s10723-022-09618-x. Online publication date: 1-Sep-2022.


        Published In

        ACM Transactions on Internet Technology  Volume 21, Issue 2
        June 2021
        599 pages
        ISSN:1533-5399
        EISSN:1557-6051
        DOI:10.1145/3453144
        • Editor: Ling Liu
        Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

        Publisher

        Association for Computing Machinery

        New York, NY, United States

        Publication History

        Published: 15 March 2021
        Accepted: 01 July 2020
        Revised: 01 July 2020
        Received: 01 June 2020
        Published in TOIT Volume 21, Issue 2


        Author Tags

        1. Cyber-physical systems
        2. Natural language inference
        3. Siamese neural networks

        Qualifiers

        • Research-article
        • Refereed

        Funding Sources

        • VC Research
        • AI University Research Centre (AI-URC) through the XJTLU Key Program Special Fund
        • Suzhou Bureau of Sci. and Tech. and the Key Industrial Tech. Inno. program


        Article Metrics

        • Downloads (last 12 months): 48
        • Downloads (last 6 weeks): 6
        Reflects downloads up to 26 Jul 2024


