
News

2024

PhD fellowship on Interpretable Machine Learning available. The successful candidate will be supervised by Pepa Atanasova and Isabelle Augenstein.

Starting in September 2024, Pepa is taking on a new role as Tenure-Track Assistant Professor in the NLP Section at the University of Copenhagen.

We are recruiting professional fact checkers to take part in an interview and/or a survey about their experiences of fact checking and fact checking technologies.

Our paper on measuring the fragility of natural language inference models has won an outstanding paper award at EACL 2024!

A PhD and two postdoc positions on natural language understanding are available. The positions are funded by the Pioneer Centre for AI.

5 papers by CopeNLU authors are accepted to appear at EMNLP 2024, on topics including factuality and probing for bias.

2023

5 papers by CopeNLU authors are accepted to appear at EMNLP 2023, ranging from explainability to language modelling.

A PhD position on explainable natural language understanding is available in CopeNLU. The position is funded by the ERC Starting Grant project ExplainYourself, and applications are due by 1 February 2024.

On 1 September 2023, the ERC Starting Grant project ExplainYourself on ‘Explainable and Robust Automatic Fact Checking’ is kicking off, and two new PhD students are joining our group.

4 papers by CopeNLU authors are accepted to appear at ACL 2023, on faithfulness of explanations, measuring intersectional biases, event extraction and few-shot stance detection.

2022

PhD and postdoctoral fellowships on explainable fact checking are available in CopeNLU. The positions are funded by the ERC Starting Grant project ExplainYourself.

Isabelle Augenstein was promoted to full professor, making her the youngest ever female full professor in Denmark.

2 papers by CopeNLU authors are accepted to appear at Coling 2022, on probing of question answering models.

2 papers by CopeNLU authors are accepted to appear at EMNLP 2022, on scientific document understanding.

3 papers by CopeNLU authors are accepted to appear at NAACL 2022, on hate speech detection, misinformation detection and multilingual probing.

2021

2 papers by CopeNLU authors are accepted to appear at AAAI 2022, on explanation generation, as well as on cross-lingual stance detection.

On 1 September 2021, the DFF Sapere Aude project EXPANSE on ‘Learning to Explain Attitudes on Social Media’ is kicking off, and four new members are joining our group.

3 papers by CopeNLU authors are accepted to appear at EMNLP 2021, on stance detection, exaggeration detection, and on counterfactually augmented data.

A paper by CopeNLU authors on multi-hop fact checking is accepted to appear at IJCAI.

2 papers by CopeNLU authors are accepted to appear at ACL 2021, on interpretability, as well as on scientific document understanding.

A paper by CopeNLU authors on typological blinding of cross-lingual models is accepted to appear at EACL.

2020

2 PhD fellowships and 2 postdoctoral positions on explainable stance detection are available in CopeNLU. The positions are funded by a DFF Sapere Aude research leader fellowship.

7 CopeNLU papers are accepted to appear at EMNLP 2020, on fact checking, explainability, domain adaptation, and more.

2 papers by CopeNLU authors are accepted to appear at ACL 2020, on explainable fact checking, as well as on script conversion.

2019

4 papers by CopeNLU authors are to be presented at EMNLP 2019 and co-located events, on fact checking and disinformation, as well as on multi-task and multi-lingual learning.

2 papers by CopeNLU authors are accepted to appear at ACL 2019, on discovering probabilistic implications in typological knowledge bases, as well as on gendered language.

3 papers by CopeNLU authors are accepted to appear at NAACL 2019. Topics span from population of typological knowledge bases and weak supervision from disparate lexica to frame detection in online fora.