6th RepL4NLP@ACL-IJCNLP 2021: Online
- Anna Rogers, Iacer Calixto, Ivan Vulic, Naomi Saphra, Nora Kassner, Oana-Maria Camburu, Trapit Bansal, Vered Shwartz (eds.): Proceedings of the 6th Workshop on Representation Learning for NLP, RepL4NLP@ACL-IJCNLP 2021, Online, August 6, 2021. Association for Computational Linguistics, 2021. ISBN 978-1-954085-72-5
- Irene Li, Prithviraj Sen, Huaiyu Zhu, Yunyao Li, Dragomir R. Radev: Improving Cross-lingual Text Classification with Zero-shot Instance-Weighting. 1-7
- Murathan Kurfali, Robert Östling: Probing Multilingual Language Models for Discourse. 8-19
- Pritish Sahu, Michael Cogswell, Ajay Divakaran, Sara Rutherford-Quach: Comprehension Based Question Answering using Bloom's Taxonomy. 20-28
- Naman Goyal, Jingfei Du, Myle Ott, Giri Anantharaman, Alexis Conneau: Larger-Scale Transformers for Multilingual Masked Language Modeling. 29-33
- Victor Prokhorov, Yingzhen Li, Ehsan Shareghi, Nigel Collier: Learning Sparse Sentence Encoding without Supervision: An Exploration of Sparsity in Variational Autoencoders. 34-46
- Yang Hao, Xiao Zhai, Wenbiao Ding, Zitao Liu: Temporal-aware Language Representation Learning From Crowdsourced Labels. 47-56
- Qiwei Peng, David J. Weir, Julie Weeds: Structure-aware Sentence Encoder in Bert-Based Siamese Network. 57-63
- Zihan Liu, Genta Indra Winata, Andrea Madotto, Pascale Fung: Preserving Cross-Linguality of Pre-trained Models via Continual Learning. 64-71
- Xiaoyan Li, Sun Sun, Yunli Wang: Text Style Transfer: Leveraging a Style Classifier on Entangled Latent Representations. 72-82
- Damai Dai, Hua Zheng, Fuli Luo, Pengcheng Yang, Tianyu Liu, Zhifang Sui, Baobao Chang: Inductively Representing Out-of-Knowledge-Graph Entities by Optimal Estimation Under Translational Assumptions. 83-89
- Seungwon Kim, Alex Shum, Nathan Susanj, Jonathan Hilgart: Revisiting Pretraining with Adapters. 90-99
- Anastasiia Sedova, Andreas Stephan, Marina Speranskaya, Benjamin Roth: Knodle: Modular Weakly Supervised Learning with PyTorch. 100-111
- Zihan Liu, Genta Indra Winata, Peng Xu, Pascale Fung: X2Parser: Cross-Lingual and Cross-Domain Framework for Task-Oriented Compositional Semantic Parsing. 112-127
- Lan Zhang, Victor Prokhorov, Ehsan Shareghi: Unsupervised Representation Disentanglement of Text: An Evaluation on Synthetic Datasets. 128-140
- Sumanta Kashyapi, Laura Dietz: Learn The Big Picture: Representation Learning for Clustering. 141-151
- Iuliia Parfenova, Desmond Elliott, Raquel Fernández, Sandro Pezzelle: Probing Cross-Modal Representations in Multi-Step Relational Reasoning. 152-162
- Sheng-Chieh Lin, Jheng-Hong Yang, Jimmy Lin: In-Batch Negatives for Knowledge Distillation with Tightly-Coupled Teachers for Dense Retrieval. 163-173
- Pravesh Koirala, Nobal B. Niraula: NPVec1: Word Embeddings for Nepali - Construction and Evaluation. 174-184
- Yixiao Wang, Zied Bouraoui, Luis Espinosa Anke, Steven Schockaert: Deriving Word Vectors from Contextualized Language Models using Topic-Aware Mention Selection. 185-194
- Kamil Bujel, Helen Yannakoudakis, Marek Rei: Zero-shot Sequence Labeling for Transformer-based Sentence Classifiers. 195-205
- Nicolai Pogrebnyakov, Shohreh Shaghaghian: Predicting the Success of Domain Adaptation in Text Similarity. 206-212
- Renjith P. Ravindran, Akshay Badola, Kavi Narayana Murthy: Syntagmatic Word Embeddings for Unsupervised Learning of Selectional Preferences. 213-222
- Abiola Obamuyide, Marina Fomicheva, Lucia Specia: Bayesian Model-Agnostic Meta-Learning with Matrix-Valued Kernels for Quality Estimation. 223-230
- Raghuveer Thirukovalluru, Mukund Sridhar, Dung Thai, Shruti Chanumolu, Nicholas Monath, Sankaranarayanan Ananthakrishnan, Andrew McCallum: Knowledge Informed Semantic Parsing for Conversational Question Answering. 231-240
- Dung Thai, Raghuveer Thirukovalluru, Trapit Bansal, Andrew McCallum: Simultaneously Self-Attending to Text and Entities for Knowledge-Informed Text Representations. 241-247
- Jacob Turton, Robert Elliott Smith, David P. Vinson: Deriving Contextualised Semantic Features from BERT (and Other Transformer Model) Embeddings. 248-262
- Matteo Alleman, Jonathan Mamou, Miguel A. Del Rio, Hanlin Tang, Yoon Kim, SueYeon Chung: Syntactic Perturbations Reveal Representational Correlates of Hierarchical Phrase Structure in Pretrained Language Models. 263-276
- Shib Sankar Dasgupta, Xiang Lorraine Li, Michael Boratko, Dongxu Zhang, Andrew McCallum: Box-To-Box Transformations for Modeling Joint Hierarchies. 277-288
- Han Guo, Ramakanth Pasunuru, Mohit Bansal: An Overview of Uncertainty Calibration for Text Classification and the Role of Distillation. 289-306
- Kevin Huang, Peng Qi, Guangtao Wang, Tengyu Ma, Jing Huang: Entity and Evidence Guided Document-Level Relation Extraction. 307-315
- Luyu Gao, Yunyi Zhang, Jiawei Han, Jamie Callan: Scaling Deep Contrastive Learning Batch Size under Memory Limited Setup. 316-321
- Klaudia Balazy, Mohammadreza Banaei, Rémi Lebret, Jacek Tabor, Karl Aberer: Direction is what you need: Improving Word Embedding Compression in Large Language Models. 322-330