Abstract
Automatic text generation is the production of natural language text by machines. Enabling machines to generate readable and coherent text is one of the most important yet challenging tasks in natural language processing. Traditionally, text generation has been implemented either through production rules of a predefined grammar or through statistical analysis of existing human-written texts to predict sequences of words. Recently, a paradigm shift has emerged in text generation, driven by technological advances including deep learning methods and pre-trained transformers. However, many open challenges remain, including the generation of fluent, coherent, diverse, controllable, and consistent human-like text. This survey aims to provide a comprehensive overview of current advances in automated text generation and to introduce the topic to researchers by offering pointers to, and a synthesis of, pertinent studies. We reviewed twelve years of literature, from 2011 onwards, and identified 146 primary studies relevant to the objective of this survey, all of which have been thoroughly reviewed and discussed. The survey covers the core text generation applications, including text summarization, question–answer generation, story generation, machine translation, dialogue response generation, paraphrase generation, and image/video captioning. The most commonly used datasets for text generation and existing tools, together with their application domains, are also described. Various text decoding and optimization methods are presented with their strengths and weaknesses, and automatic evaluation metrics for assessing the quality of generated text are discussed. Finally, the article outlines the main challenges and notable future directions in automated text generation for prospective researchers.
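The contrast between deterministic and stochastic text decoding mentioned above can be made concrete with a minimal, self-contained sketch. All token names and logit values below are invented for illustration; real language models produce logits over vocabularies of tens of thousands of tokens.

```python
import math
import random

# Toy next-token logits over a tiny vocabulary (illustrative values only).
logits = {"the": 2.0, "a": 1.5, "cat": 0.5, "dog": 0.3, "ran": -1.0}

def softmax(scores):
    """Convert raw logits to a probability distribution."""
    m = max(scores.values())
    exps = {t: math.exp(s - m) for t, s in scores.items()}
    z = sum(exps.values())
    return {t: e / z for t, e in exps.items()}

def greedy_decode(scores):
    """Greedy decoding: always pick the single most probable token."""
    return max(scores, key=scores.get)

def top_k_sample(scores, k=3, seed=0):
    """Top-k sampling: renormalize over the k most probable tokens,
    then draw one at random -- trading some likelihood for diversity."""
    probs = softmax(scores)
    top = dict(sorted(probs.items(), key=lambda kv: -kv[1])[:k])
    z = sum(top.values())
    rng = random.Random(seed)
    r, acc = rng.random() * z, 0.0
    for tok, p in top.items():
        acc += p
        if r <= acc:
            return tok
    return tok  # numerical-safety fallback

print(greedy_decode(logits))        # always "the" (highest logit)
print(top_k_sample(logits, k=3))    # one of the 3 most probable tokens
```

Greedy decoding is deterministic and tends toward repetitive output; sampling from a truncated distribution (top-k, or nucleus/top-p) is one common remedy discussed in the decoding literature surveyed here.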
References
Abrishami M, Rashti MJ, Naderan M (2020) Machine Translation Using Improved Attention-based Transformer with Hybrid Input. In: 2020 6th International Conference on Web Research (ICWR). IEEE, pp 52–57
Acharya M, Kafle K, Kanan C (2018) TallyQA: Answering complex counting questions. arXiv. https://doi.org/10.1609/aaai.v33i01.33018076
Agrawal R, Sharma DM (2017) Building an Effective MT System for English-Hindi Using RNN’s. Int J Artif Intell Appl 8:45–58. https://doi.org/10.5121/ijaia.2017.8504
Alloatti F, Di Caro L, Sportelli G (2019) Real Life Application of a Question Answering System Using BERT Language Model. In: Proceedings of the 20th Annual SIGdial Meeting on Discourse and Dialogue. Association for Computational Linguistics, Stroudsburg, PA, USA, pp 250–253
Alomari A, Idris N, Sabri AQM, Alsmadi I (2022) Deep reinforcement and transfer learning for abstractive text summarization: A review. Comput Speech Lang 71:101276. https://doi.org/10.1016/j.csl.2021.101276
Alsaleh A, Althabiti S, Alshammari I, et al (2022) LK2022 at Qur'an QA 2022: Simple Transformers Model for Finding Answers to Questions from Qur'an. In: Proceedings of the OSACT 2022 Workshop @LREC2022. European Language Resources Association (ELRA), Marseille, pp 120–125
Ammanabrolu P, Tien E, Cheung W, et al (2019) Guided Neural Language Generation for Automated Storytelling. pp 46–55. https://doi.org/10.18653/v1/w19-3405
Anderson P, Fernando B, Johnson M, Gould S (2016) SPICE: Semantic propositional image caption evaluation. Lect Notes Comput Sci (including Subser Lect Notes Artif Intell Lect Notes Bioinformatics) 9909 LNCS:382–398. https://doi.org/10.1007/978-3-319-46454-1_24
Anderson P, He X, Buehler C, et al (2018) Bottom-Up and Top-Down Attention for Image Captioning and Visual Question Answering. Proc IEEE Comput Soc Conf Comput Vis Pattern Recognit 6077–6086. https://doi.org/10.1109/CVPR.2018.00636
Asghar N, Poupart P, Hoey J, et al (2018) Affective neural response generation. Lect Notes Comput Sci (including Subser Lect Notes Artif Intell Lect Notes Bioinformatics) 10772 LNCS:154–166. https://doi.org/10.1007/978-3-319-76941-7_12
Bahdanau D, Cho K, Bengio Y (2015) Neural Machine Translation by Jointly Learning to Align and Translate. 3rd Int Conf Learn Represent ICLR 2015 - Conf Track Proc 1–15
Bapna A, Chen MX, Firat O, et al (2020) Training deeper neural machine translation models with transparent attention. Proc 2018 Conf Empir Methods Nat Lang Process EMNLP 2018 3028–3033. https://doi.org/10.18653/v1/d18-1338
Barrull R, Kalita J (2020) Abstractive and mixed summarization for long-single documents. arXiv:2007.01918 1–9
Basu S, Ramachandran GS, Keskar NS, Varshney LR (2021) Mirostat: A Neural Text Decoding Algorithm that Directly Controls Perplexity. arXiv:2007.14966 1–25
Baumel T, Eyal M, Elhadad M (2018) Query Focused Abstractive Summarization: Incorporating Query Relevance, Multi-Document Coverage, and Summary Length Constraints into seq2seq Models. arXiv:1801.07704
Bengio Y, Simard P, Frasconi P (1994) Learning long-term dependencies with gradient descent is difficult. IEEE Trans Neural Networks 5:157–166. https://doi.org/10.1109/72.279181
Bott S, Saggion H, Figueroa D (2012) A hybrid system for Spanish text simplification. 3rd Work Speech Lang Process Assist Technol SLPAT 2012 2012 Conf North Am Chapter Assoc Comput Linguist Hum Lang Technol NAACL-HLT 2012 - Proc 75–84
Bowman SR, Vilnis L, Vinyals O, et al (2016) Generating sentences from a continuous space. CoNLL 2016 - 20th SIGNLL Conf Comput Nat Lang Learn Proc 10–21. https://doi.org/10.18653/v1/k16-1002
Bradbury J, Merity S, Xiong C, Socher R (2016) Quasi-Recurrent Neural Networks. 5th Int Conf Learn Represent 1–11
Brown TB, Mann B, Ryder N, et al (2020) Language Models are Few-Shot Learners. Adv Neural Inf Process Syst
Buck C, Bulian J, Ciaramita M, et al (2018) Ask the Right Questions: Active Question Reformulation with Reinforcement Learning. In: 6th International Conference on Learning Representations, ICLR 2018. Conference Track Proceedings (2018), pp 1–15
Cao Z, Luo C, Li W, Li S (2017) Joint Copying and Restricted Generation for Paraphrase. In: 31st AAAI Conference on Artificial Intelligence, AAAI 2017. AAAI, pp 3152–3158
Cao S, Wang L (2021) Controllable Open-ended Question Generation with A New Question Type Ontology. arXiv 6424–6439. http://arxiv.org/abs/2107.00152
Celikyilmaz A, Clark E, Gao J (2020) Evaluation of Text Generation: A Survey. 1–75. http://arxiv.org/abs/2006.14799
Chen S, Beeferman D, Rosenfeld R (1998) Evaluation metrics for language models. Proc DARPA Broadcast News Transcr Underst Work 275–280
Chen J, Xiao G, Han X, Chen H (2021) Controllable and Editable Neural Story Plot Generation via Control-and-Edit Transformer. IEEE Access 9:96692–96699. https://doi.org/10.1109/ACCESS.2021.3094263
Chen Y, Xu L, Liu K, et al (2015) Event extraction via dynamic multi-pooling convolutional neural networks. ACL-IJCNLP 2015 - 53rd Annu Meet Assoc Comput Linguist 7th Int Jt Conf Nat Lang Process Asian Fed Nat Lang Process Proc Conf 1:167–176. https://doi.org/10.3115/v1/p15-1017
Cheng J, Lapata M (2016) Neural Summarization by Extracting Sentences and Words. In: Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Stroudsburg, PA, USA, pp 484–494
Cho K, van Merriënboer B, Gulcehre C, et al (2014) Learning Phrase Representations using RNN Encoder–Decoder for Statistical Machine Translation. In: Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics, pp 1724–1734. https://doi.org/10.3115/v1/D14-1179
Cho WS, Zhang Y, Rao S, et al (2021) Contrastive Multi-document Question Generation. In: Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume. Association for Computational Linguistics, Stroudsburg, PA, USA, pp 12–30
Chung J, Gulcehre C, Cho K, Bengio Y (2015) Gated Feedback Recurrent Neural Networks. In: 32nd International Conference on Machine Learning, ICML 2015. ICML
Chung J, Gulcehre C, Cho K, Bengio Y (2014) Empirical Evaluation of Gated Recurrent Neural Networks on Sequence Modeling. arXiv:1412.3555 1–9
Clark E, Celikyilmaz A, Smith NA (2020) Sentence mover’s similarity: Automatic evaluation for multi-sentence texts. ACL 2019 - 57th Annu Meet Assoc Comput Linguist Proc Conf 2748–2760. https://doi.org/10.18653/v1/p19-1264
Clark E, Ji Y, Smith NA (2018) Neural Text Generation in Stories Using Entity Representations as Context. In: Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers). Association for Computational Linguistics, Stroudsburg, PA, USA, pp 2250–2260
Clinchant S, Jung KW, Nikoulina V (2019) On the use of BERT for Neural Machine Translation. In: Proceedings of the 3rd Workshop on Neural Generation and Translation. Association for Computational Linguistics, Stroudsburg, PA, USA, pp 108–117
Cui Q, Wu S, Liu Q et al (2020) MV-RNN: A Multi-View Recurrent Neural Network for Sequential Recommendation. IEEE Trans Knowl Data Eng 32:317–331. https://doi.org/10.1109/TKDE.2018.2881260
Dai B, Fidler S, Urtasun R, Lin D (2017) Towards Diverse and Natural Image Descriptions via a Conditional GAN. Proc IEEE Int Conf Comput Vis 2017-Octob:2989–2998. https://doi.org/10.1109/ICCV.2017.323
Dauphin YN, Fan A, Auli M, Grangier D (2017) Language modeling with gated convolutional networks. 34th Int Conf Mach Learn ICML 2017 2:1551–1559
Denil M, Demiraj A, Kalchbrenner N et al (2014) Modelling, Visualising and Summarising Documents with a Single Convolutional Neural Network. arXiv:1406.3830 1–10
Devlin J, Chang M, Lee K, Toutanova K (2019) BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv:1810.04805 4171–4186. https://doi.org/10.18653/v1/N19-1423
Dinan E, Roller S, Shuster K, et al (2019) Wizard of Wikipedia: Knowledge-Powered Conversational agents. In: ICLR. pp 1–18
Donahue J, Hendricks LA, Rohrbach M et al (2017) Long-Term Recurrent Convolutional Networks for Visual Recognition and Description. IEEE Trans Pattern Anal Mach Intell 39:677–691. https://doi.org/10.1109/TPAMI.2016.2599174
Dong L, Mallinson J, Reddy S, Lapata M (2017) Learning to paraphrase for question answering. arXiv 875–886
Dong L, Wei F, Zhou M, Xu K (2015) Question Answering over Freebase with Multi-Column Convolutional Neural Networks. In: Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Association for Computational Linguistics, Stroudsburg, PA, USA, pp 260–269
Dozat T (2016) INCORPORATING NESTEROV MOMENTUM INTO ADAM. In: ICLR Workshop. ICLR, pp 2013–2016
Du X, Cardie C (2017) Identifying Where to Focus in Reading Comprehension for Neural Question Generation. In: Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Stroudsburg, PA, USA, pp 2067–2073
Du X, Shao J, Cardie C (2017) Learning to Ask: Neural Question Generation for Reading Comprehension. arXiv:1705.00106
Duan N, Tang D, Chen P, Zhou M (2017) Question Generation for Question Answering. In: Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Stroudsburg, PA, USA, pp 866–874
Duchi JC, Bartlett PL, Wainwright MJ (2012) Randomized smoothing for (parallel) stochastic optimization. In: 2012 IEEE 51st IEEE Conference on Decision and Control (CDC). IEEE, pp 5442–5444
Dwivedi SK, Singh V (2013) Research and Reviews in Question Answering System. Procedia Technol 10:417–424. https://doi.org/10.1016/j.protcy.2013.12.378
Evans R, Grefenstette E (2018) Learning Explanatory Rules from Noisy Data. J Artif Intell Res 61:1–64. https://doi.org/10.1613/jair.5714
Faizan A, Lohmann S (2018) Automatic generation of multiple choice questions from slide content using linked data. In: ACM International Conference Proceeding Series. https://doi.org/10.1145/3227609.3227656
Fan A, Lewis M, Dauphin Y (2018) Hierarchical neural story generation. ACL 2018 - 56th Annu Meet Assoc Comput Linguist Proc Conf (Long Papers) 1:889–898. https://doi.org/10.18653/v1/p18-1082
Feng B, Liu D, Sun Y (2021) Evolving transformer architecture for neural machine translation. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion. ACM, New York, NY, USA, pp 273–274
Frome A, Corrado GS, Shlens J, et al (2013) DeViSE: A Deep Visual-Semantic Embedding Model. In: Advances in Neural Information Processing Systems 26 (NIPS 2013), pp 2121–2129
Fung P, Bertero D, Xu P, et al (2014) Empathetic Dialog Systems. In: The International Conference on Language Resources and Evaluation. European Language Resources Association
Gambhir M, Gupta V (2017) Recent automatic text summarization techniques. Artif Intell Rev 47:1–66. https://doi.org/10.1007/s10462-016-9475-9
Gao P, Li H, Li S, et al (2018) Question-Guided Hybrid Convolution for Visual Question Answering. Lect Notes Comput Sci (including Subser Lect Notes Artif Intell Lect Notes Bioinformatics) 11205 LNCS:485–501. https://doi.org/10.1007/978-3-030-01246-5_29
Garbacea C, Mei Q (2020) Neural Language Generation: Formulation, Methods, and Evaluation. arXiv:2007.15780
Gardent C, Kow E (2007) A symbolic approach to near-deterministic surface realisation using tree adjoining grammar. In: ACL 2007 - Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics. pp 328–335
Gehring J, Auli M, Grangier D, et al (2017) Convolutional sequence to sequence learning. 34th Int Conf Mach Learn ICML 2017 3:2029–2042
Goldberg Y (2016) A Primer on Neural Network Models for Natural Language Processing. J Artif Intell Res 57:345–420. https://doi.org/10.1613/jair.4992
Goyal T, Li JJ, Durrett G (2022) News Summarization and Evaluation in the Era of GPT-3. arXiv. http://arxiv.org/abs/2209.12356
Grechishnikova D (2021) Transformer neural network for protein-specific de novo drug generation as a machine translation problem. Sci Rep 11:321. https://doi.org/10.1038/s41598-020-79682-4
Gu J, Bradbury J, Xiong C, et al (2017) Non-Autoregressive Neural Machine Translation. Proc 2018 Conf Empir Methods Nat Lang Process 479–488
Guan J, Wang Y, Huang M (2019) Story Ending Generation with Incremental Encoding and Commonsense Knowledge. Proc AAAI Conf Artif Intell 33:6473–6480. https://doi.org/10.1609/aaai.v33i01.33016473
Gupta A, Agarwal A, Singh P, Rai P (2018) A deep generative framework for paraphrase generation. 32nd AAAI Conf Artif Intell AAAI 2018 5149–5156
Harrison B, Purdy C, Riedl MO (2021) Toward Automated Story Generation with Markov Chain Monte Carlo Methods and Deep Neural Networks. In: AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment. AAAI, pp 191–197
Harrison V, Walker M (2018) Neural generation of diverse questions using answer focus, contextual and linguistic features. In: Proceedings of the 11th International Natural Language Generation Conference. Association for Computational Linguistics, Tilburg, The Netherlands, pp 296–306
Hashimoto TB, Zhang H, Liang P (2019) Unifying human and statistical evaluation for natural language generation. NAACL HLT 2019 - 2019 Conf North Am Chapter Assoc Comput Linguist Hum Lang Technol - Proc Conf 1:1689–1701. https://doi.org/10.18653/v1/n19-1169
He X, Deng L (2017) Deep Learning for Image-to-Text Generation: A Technical Overview. IEEE Signal Process Mag 34:109–116. https://doi.org/10.1109/MSP.2017.2741510
Helcl J, Haddow B, Birch A (2022) Non-Autoregressive Machine Translation: It’s Not as Fast as it Seems. In: Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, Stroudsburg, PA, USA, pp 1780–1790
Hidasi B, Quadrana M, Karatzoglou A, Tikk D (2016) Parallel Recurrent Neural Network Architectures for Feature-rich Session-based Recommendations. In: Proceedings of the 10th ACM Conference on Recommender Systems. ACM, New York, NY, USA, pp 241–248
Hochreiter S, Schmidhuber J (1997) Long Short-Term Memory. Neural Comput 9:1735–1780. https://doi.org/10.1162/neco.1997.9.8.1735
Holtzman A, Buys J, Du L, et al (2019) The Curious Case of Neural Text Degeneration. arXiv:1904.09751
Huang C, Zaïane OR, Trabelsi A, Dziri N (2018) Automatic dialogue generation with expressed emotions. NAACL HLT 2018 - 2018 Conf North Am Chapter Assoc Comput Linguist Hum Lang Technol - Proc Conf 2:49–54. https://doi.org/10.18653/v1/n18-2008
Iyyer M, Wieting J, Gimpel K, Zettlemoyer L (2018) Adversarial example generation with syntactically controlled paraphrase networks. NAACL HLT 2018 - 2018 Conf North Am Chapter Assoc Comput Linguist Hum Lang Technol - Proc Conf 1:1875–1885. https://doi.org/10.18653/v1/n18-1170
Jain P, Agrawal P, Mishra A, et al (2017) Story Generation from Sequence of Independent Short Descriptions. arXiv. https://doi.org/10.48550/arXiv.1707.05501
Jha S, Sudhakar A, Singh AK (2018) Learning cross-lingual phonological and orthographic adaptations: A case study in improving neural machine translation between low-resource languages. arXiv 1–48. https://doi.org/10.15398/jlm.v7i2.214
Jin J, Fu K, Cui R, et al (2015) Aligning where to see and what to tell: image caption with region-based attention and scene factorization. 1–20
Jozefowicz R, Vinyals O, Schuster M, et al (2016) Exploring the Limits of Language Modeling. arXiv:1602.02410
Kalchbrenner N, Blunsom P (2013) Recurrent continuous translation models. EMNLP 2013 - 2013 Conf Empir Methods Nat Lang Process Proc Conf 1700–1709
Kamal Eddine M, Shang G, Tixier A, Vazirgiannis M (2022) FrugalScore: Learning Cheaper, Lighter and Faster Evaluation Metrics for Automatic Text Generation. In: Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Stroudsburg, PA, USA, pp 1305–1318
Kannan A, Kurach K, Ravi S, et al (2016) Smart Reply. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, New York, NY, USA, pp 955–964
Karpathy A, Joulin A, Fei-Fei L (2014) Deep Fragment Embeddings for Bidirectional Image Sentence Mapping. 27th International Conference on Neural Information Processing Systems (NIPS’14). MIT Press, Cambridge, MA, USA, pp 1889–1897
Keneshloo Y, Shi T, Ramakrishnan N et al (2020) Deep Reinforcement Learning for Sequence-to-Sequence Models. IEEE Trans Neural Networks Learn Syst 31:2469–2489
Khamparia A, Pandey B, Tiwari S et al (2020) An Integrated Hybrid CNN–RNN Model for Visual Description and Generation of Captions. Circuits, Syst Signal Process 39:776–788. https://doi.org/10.1007/s00034-019-01306-8
Kim Y, Lee H, Shin J, Jung K (2019) Improving Neural Question Generation Using Answer Separation. Thirty-Third AAAI Conf Artif Intell Improv
Kingma DP, Ba J (2015) Adam: A Method for Stochastic Optimization. In: 3rd International Conference on Learning Representations. ICLR 2015, pp 1–15
Kiros R, Salakhutdinov R, Zemel R (2014) Multimodal neural language models. 31st Int Conf Mach Learn ICML 2014 3:2012–2025
Kiros R, Salakhutdinov R, Zemel RS (2014) Unifying Visual-Semantic Embeddings with Multimodal Neural Language Models. arXiv:1411.2539 1–13
Kitchenham B, Pearl Brereton O, Budgen D et al (2009) Systematic literature reviews in software engineering - A systematic literature review. Inf Softw Technol 51:7–15. https://doi.org/10.1016/j.infsof.2008.09.009
Knight K, Marcu D (2000) Statistics-Based Summarization - Step One: Sentence Compression. In: Proceedings of the 17th National Conference on Artificial Intelligence (AAAI-2000). American Association for Artificial Intelligence, pp 703–710
Kumar A, Irsoy O, Ondruska P, et al (2016) Ask me anything: Dynamic memory networks for natural language processing. 33rd Int Conf Mach Learn ICML 2016 3:2068–2078
Kumar V, Ramakrishnan G, Li YF (2019) Putting the horse before the cart: A generator-evaluator framework for question generation from text. CoNLL 2019 - 23rd Conf Comput Nat Lang Learn Proc Conf 812–821. https://doi.org/10.18653/v1/k19-1076
Lavie A, Agarwal A (2005) METEOR: An automatic metric for MT evaluation with high levels of correlation with human judgments. In: Proceedings of the Second Workshop on Statistical Machine Translation
LeCun Y, Bengio Y, Hinton G (2015) Deep learning. Nature 521:436–444. https://doi.org/10.1038/nature14539
Lee J, Liang B, Fong H (2021) Restatement and Question Generation for Counsellor Chatbot. In: Proceedings of the 1st Workshop on NLP for Positive Impact. Association for Computational Linguistics, Stroudsburg, PA, USA, pp 1–7
Lee J, Yoon W, Kim S, et al (2019) BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics 1–7. https://doi.org/10.1093/bioinformatics/btz682
Lelkes AD, Tran VQ, Yu C (2021) Quiz-Style Question Generation for News Stories. In: Proceedings of the Web Conference 2021. ACM, New York, NY, USA, pp 2501–2511
Lemberger P (2020) Deep Learning Models for Automatic Summarization. http://arxiv.org/abs/200511988 1–13
Lewis M, Liu Y, Goyal N, et al (2020) BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension. 7871–7880. https://doi.org/10.18653/v1/2020.acl-main.703
Lewis M, Yarats D, Dauphin YN, et al (2017) Deal or no deal? End-to-end learning for negotiation dialogues. EMNLP 2017 - Conf Empir Methods Nat Lang Process Proc 2443–2453. https://doi.org/10.18653/v1/d17-1259
Li J, Galley M, Brockett C, et al (2016) A diversity-promoting objective function for neural conversation models. 2016 Conf North Am Chapter Assoc Comput Linguist Hum Lang Technol NAACL HLT 2016 - Proc Conf 110–119. https://doi.org/10.18653/v1/n16-1014
Li Z, Jiang X, Shang L, Li H (2018) Paraphrase Generation with Deep Reinforcement Learning. In: Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Stroudsburg, PA, USA, pp 3865–3878
Li B, Lee-urban S, Johnston G, Riedl MO (2013) Story Generation with Crowdsourced Plot Graphs. AAAI, pp 598–604
Li Y, Li K, Ning H, et al (2021) Towards an Online Empathetic Chatbot with Emotion Causes. In: Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval. ACM, New York, NY, USA, pp 2041–2045
Li J, Luong MT, Jurafsky D (2015) A hierarchical neural Autoencoder for paragraphs and documents. ACL-IJCNLP 2015 - 53rd Annu Meet Assoc Comput Linguist 7th Int Jt Conf Nat Lang Process Asian Fed Nat Lang Process Proc Conf 1:1106–1115. https://doi.org/10.3115/v1/p15-1107
Li J, Monroe W, Jurafsky D (2016) A Simple, Fast Diverse Decoding Algorithm for Neural Generation. arXiv:1611.08562
Li J, Monroe W, Ritter A, et al (2016) Deep reinforcement learning for dialogue generation. EMNLP 2016 - Conf Empir Methods Nat Lang Process Proc 1192–1202. https://doi.org/10.18653/v1/d16-1127
Li J, Monroe W, Shi T, et al (2017) Adversarial learning for neural dialogue generation. EMNLP 2017 - Conf Empir Methods Nat Lang Process Proc 2157–2169. https://doi.org/10.18653/v1/d17-1230
Li S, Tao Z, Li K, Fu Y (2019) Visual to Text: Survey of Image and Video Captioning. IEEE Trans Emerg Top Comput Intell 3:297–312. https://doi.org/10.1109/TETCI.2019.2892755
Liao K, Lebanoff L, Liu F (2018) Abstract Meaning Representation for Multi-Document Summarization. In: International Conference on Computational Linguistics. Santa Fe, New Mexico, USA, pp 1178–1190
Lin C-Y (2004) ROUGE: A Package for Automatic Evaluation of Summaries. In: Text Summarization Branches Out. Association for Computational Linguistics, Barcelona, Spain, pp 74–81
Liu P, Huang C, Mou L (2022) Learning Non-Autoregressive Models from Search for Unsupervised Sentence Summarization. arXiv 7916–7929. https://doi.org/10.18653/v1/2022.acl-long.545
Liu Y, Lapata M (2019) Text Summarization with Pretrained Encoders. arXiv. http://arxiv.org/abs/1908.08345
Liu X, Lei W, Lv J, Zhou J (2022) Abstract Rule Learning for Paraphrase Generation. In: Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence. International Joint Conferences on Artificial Intelligence Organization, California, pp 4273–4279
Liu PJ, Saleh M, Pot E, et al (2018) Generating Wikipedia by Summarizing Long Sequences. In: 6th International Conference on Learning Representations, ICLR 2018 - Conference Track Proceedings. pp 1–18
Liu J, Shen D, Zhang Y, et al (2021) What Makes Good In-Context Examples for GPT-3? DeeLIO 2022 - Deep Learn Insid Out 3rd Work Knowl Extr Integr Deep Learn Archit Proc Work 3:100–114. http://arxiv.org/abs/2101.06804
Liu W, Wang Z, Liu X et al (2017) A survey of deep neural network architectures and their applications. Neurocomputing 234:11–26. https://doi.org/10.1016/j.neucom.2016.12.038
Lopyrev K (2015) Generating News Headlines with Recurrent Neural Networks. arXiv:1512.01712 1–9
Lu J, Yang J, Batra D, Parikh D (2016) Hierarchical Question-Image Co-Attention for Visual Question Answering. Adv Neural Inf Process Syst 289–297
Lu S, Zhu Y, Zhang W, et al (2018) Neural Text Generation: Past, Present and Beyond. arXiv:1803.07133
Luong M-T, Pham H, Manning CD (2015) Effective Approaches to Attention-based Neural Machine Translation. In: Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Lisbon, pp 1412–1421
Ma S, Sun X, Li W, et al (2018) Query and output: Generating words by querying distributed word representations for paraphrase generation. NAACL HLT 2018 - 2018 Conf North Am Chapter Assoc Comput Linguist Hum Lang Technol - Proc Conf 1:196–206. https://doi.org/10.18653/v1/n18-1018
Makav B, Kilic V (2019) A New Image Captioning Approach for Visually Impaired People. ELECO 2019 - 11th Int Conf Electr Electron Eng 945–949. https://doi.org/10.23919/ELECO47770.2019.8990630
Mao J, Xu W, Yang Y, et al (2015) Deep captioning with multimodal recurrent neural networks (m-RNN). 3rd Int Conf Learn Represent ICLR 2015 - Conf Track Proc 1090:1–17
Martin LJ, Ammanabrolu P, Wang X, et al (2018) Event representations for automated story generation with deep neural nets. 32nd AAAI Conf Artif Intell AAAI 2018 868–875
Mehta P, Arora G, Majumder P (2018) Attention based Sentence Extraction from Scientific Articles using Pseudo-Labeled data. Assoc Comput Mach 2–5. https://doi.org/10.48550/arXiv.1802.04675
Michalopoulos G, Chen H, Wong A (2020) Where's the Question? A Multi-channel Deep Convolutional Neural Network for Question Identification in Textual Data. 215–226. https://doi.org/10.18653/v1/2020.clinicalnlp-1.24
Mou L, Song Y, Yan R, et al (2016) Sequence to Backward and Forward Sequences: A Content-Introducing Approach to Generative Short-Text Conversation. In: COLING 2016 - 26th International Conference on Computational Linguistics, Proceedings of COLING 2016: Technical Papers. COLING, pp 3349–3358
Mridha MF, Lima AA, Nur K et al (2021) A Survey of Automatic Text Summarization: Progress, Process and Challenges. IEEE Access 9:156043–156070. https://doi.org/10.1109/ACCESS.2021.3129786
Nag D, Das B, Dash PS, et al (2015) From word embeddings to document distances. In: 32nd International Conference on International Conference on Machine Learning. ICML’15, Lille, France, pp 957–966
Nallapati R, Zhai F, Zhou B (2017) SummaRuNNer: A Recurrent Neural Network based Sequence Model for Extractive Summarization of Documents. In: 31st AAAI Conference on Artificial Intelligence, AAAI 2017. pp 3075–3081
Nallapati R, Zhou B, dos Santos C, et al (2016) Abstractive Text Summarization using Sequence-to-sequence RNNs and Beyond. In: Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning. Association for Computational Linguistics, Stroudsburg, PA, USA, pp 280–290
Narayan S, Cohen SB, Lapata M (2018) Ranking Sentences for Extractive Summarization with Reinforcement Learning. In: Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers). Association for Computational Linguistics, Stroudsburg, PA, USA, pp 1747–1759
Narayan S, Gardent C (2012) Structure-Driven Lexicalist Generation. In: 24th International Conference in Computational Linguistics (COLING). Mumbai, India, pp 100–113
Narayan S, Gardent C (2020) Deep Learning Approaches to Text Production. Synth Lect Hum Lang Technol 13:1–199. https://doi.org/10.2200/S00979ED1V01Y201912HLT044
Narayan S, Gardent C, Narayan S, et al (2015) Hybrid Simplification using Deep Semantics and Machine Translation To cite this version : HAL Id : hal-01109581
Nguyen A (2021) Language Model Evaluation in Open-ended Text Generation. arXiv. http://arxiv.org/abs/2108.03578
Niu T, Bansal M (2018) Polite dialogue generation without parallel data. arXiv. https://doi.org/10.1162/tacl_a_00027
PadmaPriya G, Duraiswamy K (2014) An approach for text summarization using deep learning algorithm. J Comput Sci 10:1–9. https://doi.org/10.3844/jcssp.2014.1.9
Papineni K, Roukos S, Ward T, Zhu W-J (2002) BLEU: a Method for Automatic Evaluation of Machine Translation. In: 40th Annual Meeting of the Association for Computational Linguistics (ACL). ACL, pp 311–318
Park HJ, Lee JS, Ko JG (2020) Achieving Real-Time Sign Language Translation Using a Smartphone’s True Depth Images. In: 12th International Conference on Communication Systems & Networks (COMSNETS). IEEE, pp 622–625
Parveen D, Mesgar M, Strube M (2016) Generating coherent summaries of scientific articles using coherence patterns. EMNLP 2016 - Conf Empir Methods Nat Lang Process Proc 772–783. https://doi.org/10.18653/v1/d16-1074
Pascanu R, Mikolov T, Bengio Y (2013) On the difficulty of training recurrent neural networks. In: 30th International Conference on Machine Learning, ICML 2013, pp 1310–1318
Paulus R, Xiong C, Socher R (2017) A Deep Reinforced Model for Abstractive Summarization. 6th Int Conf Learn Represent ICLR 2018 - Conf Track Proc 1–12
Pauws S, Gatt A, Krahmer E, Reiter E (2019) Making effective use of healthcare data using data-to-text technology. Data Sci Healthc Methodol Appl 119–145. https://doi.org/10.1007/978-3-030-05249-2_4
Pawade D, Sakhapara A, Jain M et al (2018) Story Scrambler - Automatic Text Generation Using Word Level RNN-LSTM. Int J Inf Technol Comput Sci 10:44–53. https://doi.org/10.5815/ijitcs.2018.06.05
Pedersoli M, Lucas T, Schmid C, Verbeek J (2017) Areas of Attention for Image Captioning. Proc IEEE Int Conf Comput Vis 2017-Octob:1251–1259. https://doi.org/10.1109/ICCV.2017.140
Peng N, Ghazvininejad M, May J, Knight K (2018) Towards Controllable Story Generation. In: Proceedings of the First Workshop on Storytelling. Association for Computational Linguistics, Stroudsburg, PA, USA, pp 43–49
Peng B, Yao K (2015) Recurrent Neural Networks with External Memory for Language Understanding. arXiv:1506.00195
Peng D, Zhou M, Liu C, Ai J (2020) Human–machine dialogue modelling with the fusion of word- and sentence-level emotions. Knowledge-Based Syst 192:105319. https://doi.org/10.1016/j.knosys.2019.105319
Portet F, Reiter E, Gatt A et al (2009) Automatic generation of textual summaries from neonatal intensive care data. Artif Intell 173:789–816. https://doi.org/10.1016/j.artint.2008.12.002
Prakash A, Hasan SA, Lee K, et al (2016) Neural paraphrase generation with stacked residual LSTM Networks. COLING 2016 - 26th Int Conf Comput Linguist Proc COLING 2016 Tech Pap 2923–2934
Przybocki M, Peterson K, Bronsart S, Sanders G (2009) The NIST 2008 metrics for machine translation challenge-overview, methodology, metrics, and results. Mach Transl 23:71–103. https://doi.org/10.1007/s10590-009-9065-6
Qi W, Gong Y, Jiao J, et al (2021) BANG: Bridging Autoregressive and Non-autoregressive Generation with Large Scale Pretraining
Qian N (1999) On the momentum term in gradient descent learning algorithms. Neural Netw 12:145–151. https://doi.org/10.1016/S0893-6080(98)00116-6
Qian Q, Huang M, Zhao H, et al (2018) Assigning personality/identity to a chatting machine for coherent conversation generation. Proc Twenty-Seventh Int Jt Conf Artif Intell 4279–4285
Qian L, Qiu L, Zhang W, et al (2019) Exploring Diverse Expressions for Paraphrase Generation. In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). Association for Computational Linguistics, Stroudsburg, PA, USA, pp 3171–3180
Qian L, Zhou H, Bao Y, et al (2021) Glancing Transformer for Non-Autoregressive Neural Machine Translation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Association for Computational Linguistics, Stroudsburg, PA, USA, pp 1993–2003
Radford A, Narasimhan K, Salimans T, Sutskever I (2018) Improving Language Understanding by Generative Pre-Training. OpenAI technical report
Raffel C, Shazeer N, Roberts A, et al (2020) Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer. J Mach Learn Res 21:1–67. arXiv:1910.10683
Rajasekar AA, Garera N (2021) Answer Generation for Questions With Multiple Information Sources in E-Commerce. Proc Flip DS Conf 1
Rajpurkar P, Zhang J, Lopyrev K, Liang P (2016) SQuAD: 100,000+ Questions for Machine Comprehension of Text. Proc 2016 Conf Empir Methods Nat Lang Process, pp 2383–2392. https://doi.org/10.18653/v1/D16-1264
Ranzato M, Chopra S, Auli M, Zaremba W (2016) Sequence Level Training with Recurrent Neural Networks. In: 4th International Conference on Learning Representations, ICLR. ICLR, pp 1–16
Rashkin H, Smith EM, Li M, Boureau YL (2019) Towards empathetic open-domain conversation models: A new benchmark and dataset. ACL 2019 - 57th Annu Meet Assoc Comput Linguist Proc Conf (Long Papers), pp 5370–5381. https://doi.org/10.18653/v1/p19-1534
Reiter E, Dale R (1997) Building applied natural language generation systems. Nat Lang Eng 3:57–87. https://doi.org/10.1017/S1351324997001502
Ren Z, Wang X, Zhang N, et al (2017) Deep reinforcement learning-based image captioning with embedding reward. Proc 30th IEEE Conf Comput Vis Pattern Recognition, CVPR 2017, pp 1151–1159. https://doi.org/10.1109/CVPR.2017.128
Roemmele M (2016) Writing Stories with Help from Recurrent Neural Networks. In: Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence (AAAI-16) Writing. AAAI, pp 4311–4312
Roemmele M, Gordon AS (2015) Creative Help: A Story Writing Assistant. In: Interactive Storytelling. Springer International Publishing, Cham
Rush AM, Chopra S, Weston J (2015) A Neural Attention Model for Abstractive Sentence Summarization. In: Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Stroudsburg, PA, USA, pp 379–389
Santhanam S, Shaikh S (2019) A Survey of Natural Language Generation Techniques with a Focus on Dialogue Systems - Past, Present and Future Directions
Saxena SS, Saranya G, Aggarwal D (2020) A Convolutional Recurrent Neural Network (CRNN) Based Approach for Text Recognition and Conversion of Text to Speech in Various Indian Languages. Int J Adv Sci Technol 29:2770–2776
Schuster M, Paliwal KK (1997) Bidirectional recurrent neural networks. IEEE Trans Signal Process 45:2673–2681. https://doi.org/10.1109/78.650093
Scialom T, Hill F (2021) BEAMetrics: A Benchmark for Language Generation Evaluation. arXiv 1–20
See A, Liu PJ, Manning CD (2017) Get To The Point: Summarization with Pointer-Generator Networks. In: Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Stroudsburg, PA, USA, pp 1073–1083
Serban IV, Sordoni A, Bengio Y, et al (2016) Building End-To-End Dialogue Systems Using Generative Hierarchical Neural Network Models. In: 30th AAAI Conference on Artificial Intelligence, AAAI 2016. AAAI Press, pp 3776–3783
Shetty R, Rohrbach M, Hendricks LA, et al (2017) Speaking the Same Language: Matching Machine to Human Captions by Adversarial Training. Proc IEEE Int Conf Comput Vis (ICCV 2017), pp 4155–4164. https://doi.org/10.1109/ICCV.2017.445
Song L, Wang Z, Hamza W, et al (2018) Leveraging Context Information for Natural Question Generation. In: Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers). Association for Computational Linguistics, Stroudsburg, PA, USA, pp 569–574
Sordoni A, Galley M, Auli M, et al (2015) A Neural Network Approach to Context-Sensitive Generation of Conversational Responses. In: Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, Stroudsburg, PA, USA, pp 196–205
Sriram A, Jun H, Satheesh S, Coates A (2018) Cold fusion: Training Seq2seq models together with language models. Proc Annu Conf Int Speech Commun Assoc (INTERSPEECH 2018), pp 387–391. https://doi.org/10.21437/Interspeech.2018-1392
Stasaski K, Rathod M, Tu T, et al (2021) Automatically Generating Cause-and-Effect Questions from Passages. Proc 16th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2021), held in conjunction with EACL 2021, pp 158–170
Su Y, Wang Y, Cai D et al (2021) PROTOTYPE-TO-STYLE: Dialogue Generation with Style-Aware Editing on Retrieval Memory. IEEE/ACM Trans Audio Speech Lang Process 29:2152–2161. https://doi.org/10.1109/TASLP.2021.3087948
Subramanian S, Wang T, Yuan X, et al (2018) Neural Models for Key Phrase Extraction and Question Generation. In: Proceedings of the Workshop on Machine Reading for Question Answering. Association for Computational Linguistics, Stroudsburg, PA, USA, pp 78–88
Sun X, Liu J, Lyu Y, et al (2018) Answer-focused and Position-aware Neural Question Generation. In: Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Stroudsburg, PA, USA, pp 3930–3939
Sundermeyer M, Alkhouli T, Wuebker J, Ney H (2014) Translation Modeling with Bidirectional Recurrent Neural Networks Human Language Technology and Pattern Recognition Group. In: Emnlp2014. ACL, pp 14–25
Sutskever I, Vinyals O, Le QV (2014) Sequence to Sequence Learning with Neural Networks. Adv Neural Inf Process Syst 27:3104–3112
Tambwekar P, Dhuliawala M, Martin LJ, et al (2019) Controllable Neural Story Plot Generation via Reward Shaping. In: Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence. International Joint Conferences on Artificial Intelligence Organization, California, pp 5982–5988
Tian C, Wang Y, Cheng H, et al (2020) Train Once, and Decode As You Like. In: Proceedings of the 28th International Conference on Computational Linguistics. International Committee on Computational Linguistics, Stroudsburg, PA, USA, pp 280–293
Tian Z, Yan R, Mou L, et al (2017) How to make context more useful? An empirical study on context-aware neural conversational models. ACL 2017 - 55th Annu Meet Assoc Comput Linguist Proc Conf (Long Papers) 2:231–236. https://doi.org/10.18653/v1/P17-2036
Tu Z, Lu Z, Yang L, et al (2016) Modeling coverage for neural machine translation. 54th Annu Meet Assoc Comput Linguist ACL 2016 - Long Pap 1:76–85. https://doi.org/10.18653/v1/p16-1008
Upadhya BA, Udupa S, Kamath SS (2019) Deep Neural Network Models for Question Classification in Community Question-Answering Forums. 2019 10th Int Conf Comput Commun Netw Technol ICCCNT 2019 6–11. https://doi.org/10.1109/ICCCNT45670.2019.8944861
Vasisht S, Tirthani V, Eppa A, et al (2022) Automatic FAQ Generation Using Text-to-Text Transformer Model. 2022 3rd Int Conf Emerg Technol INCET 2022 1–7. https://doi.org/10.1109/INCET54531.2022.9823967
Vaswani A, Shazeer N, Parmar N, et al (2017) Attention Is All You Need. In: NIPS’17: Proceedings of the 31st International Conference on Neural Information Processing Systems, Long Beach, CA, USA. arXiv:1706.03762
Vedantam R, Zitnick CL, Parikh D (2015) CIDEr: Consensus-based image description evaluation. In: 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, pp 4566–4575
Vijayakumar AK, Cogswell M, Selvaraju RR, et al (2018) Diverse Beam Search: Decoding Diverse Solutions from Neural Sequence Models. In: 32nd AAAI Conference on Artificial Intelligence, AAAI 2018, pp 7371–7379
Vinyals O, Toshev A, Bengio S, Erhan D (2015) Show and tell: A neural image caption generator. In: 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, pp 3156–3164
Wang Q, Li B, Xiao T, et al (2019) Learning Deep Transformer Models for Machine Translation. In: Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, Stroudsburg, PA, USA, pp 1810–1822
Wang C, Yang H, Bartz C, Meinel C (2016) Image captioning with deep bidirectional LSTMs. MM 2016 - Proc 2016 ACM Multimed Conf 988–997. https://doi.org/10.1145/2964284.2964299
Wang P, Yang A, Men R, et al (2022) OFA: Unifying Architectures, Tasks, and Modalities Through a Simple Sequence-to-Sequence Learning Framework. arXiv. http://arxiv.org/abs/2202.03052
Wang W, Yang N, Wei F, et al (2017) Gated Self-Matching Networks for Reading Comprehension and Question Answering. In: Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Stroudsburg, PA, USA, pp 189–198
Wang T, Yuan X, Trischler A (2017) A Joint Model for Question Answering and Question Generation. arXiv:1706.01450
Welleck S, Kulikov I, Kim J, et al (2020) Consistency of a recurrent language model with respect to incomplete decoding. EMNLP 2020 - 2020 Conf Empir Methods Nat Lang Process Proc Conf, pp 5553–5568. https://doi.org/10.18653/v1/2020.emnlp-main.448
Weston J, Chopra S, Bordes A (2015) Memory Networks. 3rd Int Conf Learn Represent ICLR 2015 - Conf Track Proc 1–15
Wilt C, Thayer J, Ruml W (2010) A comparison of greedy search algorithms. In: Proceedings of the 3rd Annual Symposium on Combinatorial Search, SoCS 2010. SoCS 2010, pp 129–136
Wiseman S, Shieber SM, Rush AM (2018) Learning Neural Templates for Text Generation. Proc 2018 Conf Empir Methods Nat Lang Process EMNLP 2018 3174–3187. https://doi.org/10.18653/v1/d18-1356
Wolf T, Sanh V, Chaumond J, Delangue C (2019) TransferTransfo: A Transfer Learning Approach for Neural Network Based Conversational Agents. arXiv:1901.08149
Wołk K, Koržinek D (2017) Comparison and adaptation of automatic evaluation metrics for quality assessment of re-speaking. Comput Sci 18:129–144. https://doi.org/10.7494/csci.2017.18.2.129
Woodsend K, Lapata M (2010) Automatic generation of story highlights. In: Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, pp 565–574
Wu Y, Hu B (2018) Learning to Extract Coherent Summary via Deep Reinforcement Learning. In: The Thirty-Second AAAI Conference on Artificial Intelligence (AAAI-18) Learning. Association for the Advancement of Artificial Intelligence, pp 5602–5609
Wu J, Ouyang L, Ziegler DM, et al (2021) Recursively Summarizing Books with Human Feedback. arXiv:2109.10862
Wu Y, Schuster M, Chen Z, et al (2016) Google’s Neural Machine Translation System: Bridging the Gap between Human and Machine Translation. arXiv:1609.08144 1–23
Xiao Y, Wu L, Guo J, et al (2022) A Survey on Non-Autoregressive Generation for Neural Machine Translation and Beyond. arXiv 1–25. http://arxiv.org/abs/2204.09269
Xie Z (2017) Neural Text Generation: A Practical Guide. arXiv:1803.07133 1–21
Xie Y, Le L, Zhou Y, Raghavan VV (2018) Deep Learning for Natural Language Processing. Handb Stat 38:317–328. https://doi.org/10.1016/bs.host.2018.05.001
Xing C, Wu W, Wu Y, et al (2017) Topic aware neural response generation. 31st AAAI Conf Artif Intell AAAI 2017 3351–3357
Xiong C, Merity S, Socher R (2016) Dynamic memory networks for visual and textual question answering. 33rd Int Conf Mach Learn ICML 2016 5:3574–3583
Xu K, Ba JL, Kiros R, et al (2015) Show, Attend and Tell: Neural Image Caption Generation with Visual Attention. In: International Conference on Machine Learning. JMLR W&CP, pp 2048–2057
Xu W, Li C, Lee M, Zhang C (2020) Multi-task learning for abstractive text summarization with key information guide network. EURASIP J Adv Signal Process 2020:16. https://doi.org/10.1186/s13634-020-00674-7
Yamada K, Knight K (2001) A syntax-based statistical translation model. In: Proceedings of the 39th Annual Meeting of the Association for Computational Linguistics, pp 523–530. https://doi.org/10.3115/1073012.1073079
Yan Z, Duan N, Bao J, et al (2016) DocChat: An information retrieval approach for chatbot engines using unstructured documents. 54th Annu Meet Assoc Comput Linguist ACL 2016 - Long Pap 1:516–525. https://doi.org/10.18653/v1/p16-1049
Yang Q, Huo Z, Shen D, et al (2019) An End-to-End Generative Architecture for Paraphrase Generation. In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). Association for Computational Linguistics, Stroudsburg, PA, USA, pp 3130–3140
Yang W, Xie Y, Lin A, et al (2019) End-to-End Open-Domain Question Answering with BERTserini. https://doi.org/10.18653/v1/N19-4013
Yao T, Pan Y, Li Y, Mei T (2017) Incorporating copying mechanism in image captioning for learning novel objects. Proc 30th IEEE Conf Comput Vis Pattern Recognition, CVPR 2017, pp 5263–5271. https://doi.org/10.1109/CVPR.2017.559
Yao L, Peng N, Weischedel R, et al (2019) Plan-and-Write: Towards Better Automatic Storytelling. In: Proceedings of the AAAI Conference on Artificial Intelligence. AAAI, pp 7378–7385
Yao K, Zweig G, Peng B (2015) Attention with Intention for a Neural Network Conversation Model. arXiv:1510.08565 1–7
Yin C, Qian B, Wei J, et al (2019) Automatic Generation of Medical Imaging Diagnostic Report with Hierarchical Recurrent Neural Network. In: 2019 IEEE International Conference on Data Mining (ICDM). IEEE, pp 728–737
You Q, Jin H, Wang Z, et al (2016) Image captioning with semantic attention. Proc IEEE Comput Soc Conf Comput Vis Pattern Recognit 2016, pp 4651–4659. https://doi.org/10.1109/CVPR.2016.503
Yu L, Zhang W, Wang J, Yu Y (2017) SeqGAN : Sequence Generative Adversarial Nets with Policy Gradient. In: 31st AAAI Conference on Artificial Intelligence. AAAI, pp 2852–2858
Yu W, Zhu C, Li Z et al (2022) A Survey of Knowledge-Enhanced Text Generation. ACM Comput Surv 1:1–44. https://doi.org/10.1145/3512467
Yuan X, Wang T, Gulcehre C, et al (2017) Machine comprehension by text-to-text neural question generation. arXiv 15–25. https://doi.org/10.18653/v1/w17-2603
Zeiler MD (2012) ADADELTA: An Adaptive Learning Rate Method. arXiv:1212.5701
Zhang S, Dinan E, Urbanek J, et al (2018) Personalizing dialogue agents: I have a dog, do you have pets too? ACL 2018 - 56th Annu Meet Assoc Comput Linguist Proc Conf (Long Papers) 1:2204–2213. https://doi.org/10.18653/v1/p18-1205
Zhang T, Kishore V, Wu F, et al (2020) BERTScore: Evaluating Text Generation with BERT. arXiv:1904.09675 1–41
Zhang X, Lapata M (2017) Sentence Simplification with Deep Reinforcement Learning. In: Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Stroudsburg, PA, USA, pp 584–594
Zhang L, Sung F, Liu F, et al (2017) Actor-Critic Sequence Training for Image Captioning. arXiv:1706.09601
Zhang J, Tan J, Wan X (2018) Towards a Neural Network Approach to Abstractive Multi-Document Summarization. arXiv:1801.07704
Zhang J, Zhao Y, Saleh M, Liu PJ (2019) PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization. In: 37th Int Conf Mach Learn, ICML 2020, pp 11265–11276
Zhao Y, Ni X, Ding Y, Ke Q (2018) Paragraph-level neural question generation with maxout pointer and gated self-attention networks. Proc 2018 Conf Empir Methods Nat Lang Process EMNLP 2018 3901–3910. https://doi.org/10.18653/v1/d18-1424
Zhou H, Huang M, Zhang T, et al (2018) Emotional chatting machine: Emotional conversation generation with internal and external memory. 32nd AAAI Conf Artif Intell AAAI 2018 730–738
Zhou X, Wang WY (2018) MOJITALK: Generating Emotional Responses at Scale. In: Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers). Association for Computational Linguistics, Melbourne, Australia, pp 1128–1137
Zhou Q, Yang N, Wei F, et al (2017) Neural Question Generation from Text: A Preliminary Study. arXiv:1704.01792 [cs.CL]
Ethics declarations
I hereby confirm that this work is original and has not been published elsewhere, nor is it currently under consideration for publication elsewhere. No potential conflict of interest was reported.
Conflict of interest
On behalf of all authors, the corresponding author states that there is no conflict of interest.
Ethical approval
The author declares that this article complies with the ethical standard.
Cite this article
Goyal, R., Kumar, P. & Singh, V.P. A Systematic survey on automated text generation tools and techniques: application, evaluation, and challenges. Multimed Tools Appl 82, 43089–43144 (2023). https://doi.org/10.1007/s11042-023-15224-0