
Jai Riley


2025

Semi-Automated Construction of Sense-Annotated Datasets for Practically Any Language
Jai Riley | Bradley M. Hauer | Nafisa Sadaf Hriti | Guoqing Luo | Amir Reza Mirzaei | Ali Rafiei | Hadi Sheikhi | Mahvash Siavashpour | Mohammad Tavakoli | Ning Shi | Grzegorz Kondrak
Proceedings of the 31st International Conference on Computational Linguistics

High-quality sense-annotated datasets are vital for evaluating and comparing WSD systems. We present a novel approach to creating parallel sense-annotated datasets, which can be applied to any language that English can be translated into. The method incorporates machine translation, word alignment, sense projection, and sense filtering to produce silver annotations, which can then be revised manually to obtain gold datasets. By applying our method to Farsi, Chinese, and Bengali, we produce new parallel benchmark datasets, which are vetted by native speakers of each language. Our automatically-generated silver datasets are of higher quality than the annotations obtained with recent multilingual WSD systems, particularly on non-European languages.
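The abstract outlines a four-stage pipeline: machine translation, word alignment, sense projection, and sense filtering. Below is a minimal Python sketch of that idea using NLTK's WordNet interface; the translate and align callables, the data layout, and the exact filtering rule are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the silver-annotation pipeline: translate an English
# sense-annotated sentence, align words, project sense tags, and filter the
# projections against the target-language wordnet.

from nltk.corpus import wordnet as wn  # requires nltk "wordnet" and "omw-1.4" data

def project_senses(en_tokens, en_senses, translate, align, target_lang):
    """en_tokens: English tokens; en_senses: parallel list of WordNet sense
    keys or None; translate/align: assumed MT and word-alignment callables;
    target_lang: ISO 639-3 code with OMW coverage (e.g. 'cmn' for Chinese)."""
    tgt_tokens = translate(en_tokens)            # machine translation
    links = align(en_tokens, tgt_tokens)         # [(en_idx, tgt_idx), ...]
    silver = [None] * len(tgt_tokens)
    for en_idx, tgt_idx in links:
        sense = en_senses[en_idx]
        if sense is None:
            continue
        synset = wn.lemma_from_key(sense).synset()   # sense projection
        # Sense filtering: keep the projection only if the aligned target
        # word is a lemma of that synset in the target-language wordnet.
        tgt_lemmas = {name.lower() for name in synset.lemma_names(lang=target_lang)}
        if tgt_tokens[tgt_idx].lower() in tgt_lemmas:
            silver[tgt_idx] = sense
    return tgt_tokens, silver
```

The returned silver annotations would then be handed to native speakers for manual revision, as described in the abstract.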

2024

UAlberta at SemEval-2024 Task 1: A Potpourri of Methods for Quantifying Multilingual Semantic Textual Relatedness and Similarity
Ning Shi | Senyu Li | Guoqing Luo | Amirreza Mirzaei | Ali Rafiei | Jai Riley | Hadi Sheikhi | Mahvash Siavashpour | Mohammad Tavakoli | Bradley Hauer | Grzegorz Kondrak
Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)

We describe our systems for SemEval-2024 Task 1: Semantic Textual Relatedness. We investigate the correlation between semantic relatedness and semantic similarity. Specifically, we test two hypotheses: (1) similarity is a special case of relatedness, and (2) semantic relatedness is preserved under translation. We experiment with a variety of approaches which are based on explicit semantics, downstream applications, contextual embeddings, large language models (LLMs), as well as ensembles of methods. We find empirical support for our theoretical insights. In addition, our best ensemble system yields highly competitive results in a number of diverse categories. Our code and data are available on GitHub.
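One of the approach families mentioned above is contextual embeddings. A minimal baseline in that spirit is sketched below: score each sentence pair by cosine similarity of sentence embeddings and evaluate with Spearman correlation, the standard STR metric. The model name and data layout are assumptions for illustration, not the team's actual system.

```python
# Embedding-based relatedness baseline: cosine similarity of sentence
# embeddings, evaluated by Spearman rank correlation against gold scores.

from scipy.stats import spearmanr
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

def relatedness_scores(pairs):
    """pairs: list of (sentence1, sentence2) tuples; returns one score per pair."""
    s1, s2 = zip(*pairs)
    emb1 = model.encode(list(s1), convert_to_tensor=True)
    emb2 = model.encode(list(s2), convert_to_tensor=True)
    # Diagonal of the pairwise cosine-similarity matrix = per-pair scores.
    return util.cos_sim(emb1, emb2).diagonal().tolist()

def evaluate(pairs, gold_scores):
    """Spearman rank correlation between predicted and gold relatedness."""
    return spearmanr(relatedness_scores(pairs), gold_scores).correlation
```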

Paraphrase Identification via Textual Inference
Ning Shi | Bradley Hauer | Jai Riley | Grzegorz Kondrak
Proceedings of the 13th Joint Conference on Lexical and Computational Semantics (*SEM 2024)

Paraphrase identification (PI) and natural language inference (NLI) are two important tasks in natural language processing. Despite their distinct objectives, an underlying connection exists, which has been notably under-explored in empirical investigations. We formalize the relationship between these semantic tasks and introduce a method for solving PI using an NLI system, including the adaptation of PI datasets for fine-tuning NLI models. Through extensive evaluations on six PI benchmarks, across both zero-shot and fine-tuned settings, we showcase the efficacy of NLI models for PI through our proposed reduction. Remarkably, our fine-tuning procedure enables NLI models to outperform dedicated PI models on PI datasets. In addition, our findings provide insights into the limitations of current PI benchmarks.
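A natural way to realize the PI-to-NLI reduction described above is to call paraphrase when an NLI model predicts entailment in both directions. The sketch below uses an off-the-shelf MNLI checkpoint; the specific model and the bidirectional-entailment rule are illustrative assumptions, not necessarily the exact reduction used in the paper.

```python
# PI via NLI: label a sentence pair as a paraphrase iff entailment
# holds in both directions according to an NLI classifier.

import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_NAME = "roberta-large-mnli"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
model.eval()

def entails(premise: str, hypothesis: str) -> bool:
    """True if the model's top label for (premise, hypothesis) is entailment."""
    inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    label = model.config.id2label[int(logits.argmax(dim=-1))]
    return label.lower() == "entailment"

def is_paraphrase(s1: str, s2: str) -> bool:
    # Paraphrase iff each sentence entails the other.
    return entails(s1, s2) and entails(s2, s1)

print(is_paraphrase("A man is playing a guitar.",
                    "A guitar is being played by a man."))
```

Fine-tuning the NLI model on PI data recast as entailment pairs, as the abstract describes, would follow the same pair-encoding scheme.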