About

I am Alexandra, a Research Scientist at Google on the Gemini team, working on Natural Language Processing (Machine Learning).

Before that, I was a PhD student at the University of Munich, supervised by Alex Fraser. My PhD research focused on combining information from multiple languages and domains to enable positive transfer during parameter-efficient fine-tuning of language models, especially under resource constraints.

During my PhD, I interned at Google DeepMind in Berlin, hosted by Sebastian Ruder and Priyanka Agrawal. Prior to that, I interned (twice) at the Allen Institute for AI with Jesse Dodge and Matt Peters; I was part of the AllenNLP team. I also spent a few months at Amazon AI in Santa Clara, CA, working with Brian Thompson, Prashant Mathur, and Marcello Federico as an intern in the Human Language Technology group.

Before starting my PhD, I graduated with a Diploma (combined Bachelor's and MEng) in Electrical & Computer Engineering from the National Technical University of Athens (NTUA).

News

October 2024: New preprint on Model Merging of Large Language Models.

October 2024: Our paper Language and Task Arithmetic with Parameter-Efficient Layers for Zero-Shot Summarization has been accepted to the MRL workshop at EMNLP 2024! See you in Miami! 🌴

October 2024: It’s a wrap! 🎓 I successfully defended my PhD thesis on Efficient Multilingual and Domain Adaptation of Language Models under Resource Constraints. My thesis will (hopefully) be online soon!

April 2024: Check out the Gemini 1.5 Pro API, a top-tier LLM according to the LMSys Leaderboard (technical report).

January 2024: Excited to share that I have joined Google Bard in NYC as a Research Scientist!

November 2023: New preprint is out on Language and Task Arithmetic with Parameter-Efficient Layers for Zero-Shot Summarization from my internship at Google DeepMind!

October 2023: Happy to share that our paper Mitigating Data Imbalance and Representation Degeneration in Multilingual Machine Translation has been accepted to the Findings of EMNLP 2023!

May 2023: Our paper Mitigating Data Imbalance and Representation Degeneration in Multilingual Machine Translation is out.

May 2023: Our paper On the Copying Problem of Unsupervised NMT: A Training Schedule with a Language Discriminator Loss was accepted to IWSLT 2023 and our paper Improving Isochronous Machine Translation with Target Factors and Auxiliary Counters (from my Amazon internship) was accepted to Interspeech 2023!

May 2023: Jesse gave a talk at the LTI CMU Colloquium, discussing our recent papers on efficient domain adaptation of pretrained language models (1, 2); you can check it out here.

April 2023: Very happy to start a research internship at Google in Berlin, as part of Google DeepMind!

March 2023: Our paper Language-Family Adapters for Low-Resource Multilingual NMT was accepted to the LoResMT workshop at EACL 2023!

February 2023: Check out our work on Isochronous Automatic Dubbing (from my internship at Amazon last fall)!

February 2023: I am co-organizing a shared task on dubbing at IWSLT 2023 (co-located with ACL next summer) along with former teammates from Amazon.

January 2023: Our paper AdapterSoup: Weight Averaging to Improve Generalization of Pretrained Language Models, from my internship at Allen AI, was accepted to the Findings of EACL 2023!

Selected Publications