Edward Kim
2023
Elo Uncovered: Robustness and Best Practices in Language Model Evaluation
Meriem Boubdir | Edward Kim | Beyza Ermis | Sara Hooker | Marzieh Fadaee
Proceedings of the Third Workshop on Natural Language Generation, Evaluation, and Metrics (GEM)
In Natural Language Processing (NLP), the Elo rating system, well-established for ranking dynamic competitors in games like chess, has seen increasing adoption for evaluating Large Language Models (LLMs) through “A vs B” paired comparisons. However, while popular, the system’s suitability for assessing entities with constant skill levels, such as LLMs, remains relatively unexplored. Our study investigates the sensitivity and reproducibility of Elo scores for LLMs, integrating both synthetic and human feedback. We show that Elo ratings for LLMs stabilize with 100 or more comparison permutations. A lower K-factor is preferable for closely matched models, whereas a higher K-factor better distinguishes models with clear performance differences. We also report that transitivity (A > B and B > C implies A > C) does not consistently hold, particularly when models demonstrate similar performance. Our empirical findings provide guidelines for more reliable LLM evaluation.
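For readers unfamiliar with the rating system discussed in this abstract, below is a minimal Python sketch of the standard Elo update rule. The function names and the default K value are illustrative assumptions, not the paper's implementation.

```python
def expected_score(rating_a: float, rating_b: float) -> float:
    # Expected score of A against B under the Elo logistic model.
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))


def elo_update(rating_a: float, rating_b: float,
               score_a: float, k: float = 16.0) -> tuple[float, float]:
    # score_a: 1.0 if A wins, 0.0 if B wins, 0.5 for a tie.
    # A lower K damps each update (useful for closely matched models);
    # a higher K separates clearly different models faster.
    e_a = expected_score(rating_a, rating_b)
    new_a = rating_a + k * (score_a - e_a)
    new_b = rating_b + k * ((1.0 - score_a) - (1.0 - e_a))
    return new_a, new_b


# Example: models rated 1500 and 1400; the lower-rated model wins once.
print(elo_update(1500.0, 1400.0, score_a=0.0, k=16.0))
```

Because each comparison shifts both ratings, the order in which "A vs B" outcomes are processed affects the final scores, which is why the paper averages over many comparison permutations.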
2019
The Materials Science Procedural Text Corpus: Annotating Materials Synthesis Procedures with Shallow Semantic Structures
Sheshera Mysore | Zachary Jensen | Edward Kim | Kevin Huang | Haw-Shiuan Chang | Emma Strubell | Jeffrey Flanigan | Andrew McCallum | Elsa Olivetti
Proceedings of the 13th Linguistic Annotation Workshop
Materials science literature contains millions of materials synthesis procedures described in unstructured natural language text. Large-scale analysis of these synthesis procedures would facilitate deeper scientific understanding of materials synthesis and enable automated synthesis planning. Such analysis requires extracting structured representations of synthesis procedures from the raw text as a first step. To facilitate the training and evaluation of synthesis extraction models, we introduce a dataset of 230 synthesis procedures annotated by domain experts with labeled graphs that express the semantics of the synthesis sentences. The nodes in this graph are synthesis operations and their typed arguments, and labeled edges specify relations between the nodes. We describe this new resource in detail and highlight some specific challenges to annotating scientific text with shallow semantic structure. We make the corpus available to the community to promote further research and development of scientific information extraction systems.
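One plausible in-memory representation of the labeled graphs this abstract describes is sketched below in Python; the class names, node labels, and relation names are assumptions for illustration, not the dataset's released schema.

```python
from dataclasses import dataclass, field


@dataclass
class Node:
    # A synthesis operation or one of its typed arguments,
    # e.g. label = "Operation" or "Material" (labels illustrative).
    span: str   # text span in the sentence
    label: str  # node type


@dataclass
class Edge:
    # A labeled relation between two nodes (relation name illustrative).
    source: Node
    target: Node
    relation: str


@dataclass
class SynthesisGraph:
    # Labeled graph expressing the semantics of one synthesis sentence.
    nodes: list[Node] = field(default_factory=list)
    edges: list[Edge] = field(default_factory=list)


# Example annotation for "Heat the TiO2 powder at 500 C".
op = Node(span="Heat", label="Operation")
mat = Node(span="TiO2 powder", label="Material")
graph = SynthesisGraph(nodes=[op, mat],
                       edges=[Edge(op, mat, "Recipe_Target")])
```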