
David Alvarez-Melis

(he/him/his)

Assistant Professor, Harvard University (SEAS)

150 Western Ave., Room 2-332, Allston, MA 02134

[three initials]@seas.harvard.edu

About

I'm an Assistant Professor of Computer Science at Harvard SEAS, where I lead the Data-Centric Machine Learning (DCML) group. I'm also an Associate Faculty member at the Kempner Institute, with affiliations at the Center for Research on Computation and Society and the Harvard Data Science Initiative, and a researcher at Microsoft Research New England.

My research seeks to make machine learning more broadly applicable (especially in data-poor settings) and more trustworthy (e.g., robust and interpretable). I am particularly interested in the implications of these two directions for applications in the natural and medical sciences. My approach to the first goal draws on ideas from statistics, optimization, and applied mathematics, especially optimal transport, which I have used to develop methods that mitigate data scarcity through various geometric dataset manipulations: alignment, comparison, generation, and transformation. This talk provides a high-level overview of this part of my work. As for trustworthy machine learning, I have developed methods for explaining the predictions of black-box models, shown their lack of robustness, proposed methods to robustify them, and drawn inspiration from the social sciences to make them human-centered. In the past, I worked on various aspects of learning with highly structured data such as text and graphs, ranging from learning representations of structured objects, to generating them, to interpreting models that operate on them.
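Since optimal transport underlies much of the work above, here is a minimal, self-contained sketch of the basic primitive behind it: an entropic OT plan between two toy point clouds, computed with Sinkhorn iterations. This is illustrative only (plain NumPy, with made-up data and parameter values), not code from any of the papers or projects listed below.

```python
import numpy as np

def sinkhorn(a, b, C, reg=0.1, n_iters=1000):
    """Entropic-regularized optimal transport between weight vectors a, b
    given a pairwise cost matrix C. Returns the transport plan and its cost."""
    K = np.exp(-C / reg)               # Gibbs kernel
    u = np.ones_like(a)
    v = np.ones_like(b)
    for _ in range(n_iters):
        u = a / (K @ v)                # scale rows toward marginal a
        v = b / (K.T @ u)              # scale columns toward marginal b
    P = u[:, None] * K * v[None, :]    # transport plan (approx. marginals a, b)
    return P, float((P * C).sum())

# Two toy "datasets": point clouds drawn from shifted Gaussians
rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(5, 2))
Y = rng.normal(2.0, 1.0, size=(7, 2))

# Pairwise squared-Euclidean costs, normalized for numerical stability
C = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1) ** 2
C = C / C.max()

a = np.full(len(X), 1.0 / len(X))      # uniform weights on X
b = np.full(len(Y), 1.0 / len(Y))      # uniform weights on Y

P, cost = sinkhorn(a, b, C)
print(P.shape)                         # (5, 7)
```

In practice, libraries such as POT (Python Optimal Transport) provide more robust log-domain implementations of this primitive, as well as the Gromov-Wasserstein variants that appear in several of the papers below.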

Prospective lab members: If you are interested in joining my group at Harvard, please read this.

Bio

I obtained a PhD in computer science from MIT, where I worked at CSAIL on various topics in machine learning and natural language processing. I also hold BSc (Licenciatura) and MS degrees in mathematics from ITAM and the Courant Institute (NYU), respectively. During the latter, I worked on semidefinite programming for domain adaptation under the supervision of Mehryar Mohri. Between my Master's and PhD, I spent a year at IBM's T.J. Watson Research Center, working with Ken Church and others in the Speech Recognition Group.

Projects

Dataset Distances and Dynamics
A principled framework to compare and transform labeled datasets
Word Translation with Optimal Transport
OT-based approaches to fully unsupervised bilingual lexical induction
Optimal Transport with Local and Global Structure
Generalizing the OT problem to include local structure (or ignore global invariances)
Robustly Interpretable Machine Learning
Bridging the gap between model expressiveness and transparency
Towards a Theory of Word Embeddings
A theoretical framework to understand the semantic properties of word embeddings

Publications

Most recent publications on Google Scholar.


Tag-LLM: Repurposing General-Purpose LLMs for Specialized Domains

Junhong Shen, Neil Tenenholtz, James Brian Hall, David Alvarez-Melis, Nicolò Fusi

ICML'24: International Conference on Machine Learning. 2024.

Generating Synthetic Datasets by Interpolating along Generalized Geodesics

Jiaojiao Fan, David Alvarez-Melis

UAI'23: Uncertainty in Artificial Intelligence. 2023.

InfoOT: Information Maximizing Optimal Transport

Ching-Yao Chuang, Stefanie Jegelka, David Alvarez-Melis

ICML'23: International Conference on Machine Learning. 2023.

Domain Adaptation using Optimal Transport for Invariant Learning using Histopathology Datasets

Kianoush Falahkheirkhah, Alex Lu, David Alvarez-Melis, and Grace Huynh

Medical Imaging with Deep Learning (MIDL). 2023.

Neural Unbalanced Optimal Transport via Cycle-Consistent Semi-Couplings

Frederike Lübeck*, Charlotte Bunne*, Gabriele Gut, Jacobo Sarabia del Castillo, Lucas Pelkmans, David Alvarez-Melis

Are GANs overkill for NLP?

David Alvarez-Melis*, Vikas Garg*, Adam Tauman Kalai*

NeurIPS'22: Neural Information Processing Systems. 2022.

pdf

Hierarchical Optimal Transport for Comparing Histopathology Datasets

Anna Yeaton, Rahul G. Krishnan, Rebecca Mieloszyk, David Alvarez-Melis, Grace Huynh

Medical Imaging with Deep Learning (MIDL). 2022.

Interpretable Distribution Shift Detection using Optimal Transport

Neha Hulkund, Nicolò Fusi, Jennifer Wortman Vaughan, David Alvarez-Melis

DataPerf Workshop at ICML 2022

Optimizing Functionals on the Space of Probabilities with Input Convex Neural Networks

David Alvarez-Melis, Yair Schiff, Youssef Mroueh

Transactions on Machine Learning Research (TMLR). 2022.

Earlier version at OTML: NeurIPS'21 Workshop on Optimal Transport in Machine Learning.

From Human Explanation to Model Interpretability: A Framework Based on Weight of Evidence

David Alvarez-Melis, Harmanpreet Kaur, Hal Daumé III, Hanna Wallach, Jennifer Wortman Vaughan

HCOMP '21: The 9th AAAI Conference on Human Computation and Crowdsourcing. 2021.

Dataset Dynamics via Gradient Flows in Probability Space

David Alvarez-Melis, Nicolò Fusi

ICML'21: International Conference on Machine Learning. 2021.

Geometric Dataset Distances via Optimal Transport

David Alvarez-Melis, Nicolò Fusi

NeurIPS'20: Neural Information Processing Systems. 2020.

Earlier version at AutoML @ ICML 2020.

Unsupervised Hierarchy Matching with Optimal Transport over Hyperbolic Spaces

David Alvarez-Melis, Youssef Mroueh, Tommi S. Jaakkola

AISTATS'20: Artificial Intelligence and Statistics. 2020.

Earlier version at OTML: NeurIPS'18 Workshop on Optimal Transport for Machine Learning. Spotlight.

Probabilistic Bias Mitigation in Word Embeddings

Hailey James-Sorenson, David Alvarez-Melis

HCML @ NeurIPS 2019

Weight of Evidence as a Basis for Human-Oriented Explanations

David Alvarez-Melis, Hal Daumé III, Jennifer Wortman Vaughan, Hanna Wallach

HCML @ NeurIPS 2019

Optimal Transport in Structured Domains: Algorithms and Applications

David Alvarez-Melis (advisor: Tommi S. Jaakkola)

PhD Thesis, MIT. 2019.

pdf

Functional Transparency for Structured Data: a Game-Theoretic Approach

Guang-He Lee, Wengong Jin, David Alvarez-Melis, Tommi S. Jaakkola

ICML'19: International Conference on Machine Learning. 2019.

Learning Generative Models across Incomparable Spaces

Charlotte Bunne, David Alvarez-Melis, Andreas Krause, Stefanie Jegelka

ICML'19: International Conference on Machine Learning. 2019.

Earlier version at R2L: NeurIPS'18 Workshop on Relational Representation Learning. Best Paper Award.

Towards Robust, Locally Linear Deep Networks

Guang-He Lee, David Alvarez-Melis, Tommi S. Jaakkola

ICLR'19: International Conference on Learning Representations. 2019.

Towards Optimal Transport with Global Invariances

David Alvarez-Melis, Stefanie Jegelka, Tommi S. Jaakkola

AISTATS'19: Artificial Intelligence and Statistics. 2019.

Towards Robust Interpretability with Self-Explaining Neural Networks

David Alvarez-Melis, Tommi S. Jaakkola

NeurIPS'18: Neural Information Processing Systems. 2018.

Gromov-Wasserstein Alignment of Word Embedding Spaces

David Alvarez-Melis, Tommi S. Jaakkola

EMNLP'18: Empirical Methods in Natural Language Processing. 2018. Oral Presentation.

Game-theoretic Interpretability for Temporal Modeling

Guang-He Lee, David Alvarez-Melis, Tommi S. Jaakkola

Fairness, Accountability, and Transparency in Machine Learning (@ICML 2018).

On the Robustness of Interpretability Methods

David Alvarez-Melis, Tommi S. Jaakkola

Workshop on Human Interpretability in Machine Learning (@ICML 2018).

Structured Optimal Transport

David Alvarez-Melis, Tommi S. Jaakkola, Stefanie Jegelka

AISTATS'18: Artificial Intelligence and Statistics. 2018. Oral Presentation.

Earlier version at NIPS Workshop on Optimal Transport for Machine Learning, 2017, as Extended Oral.

The Emotional GAN: Priming Adversarial Generation of Art with Emotion.

David Alvarez-Melis, Judith Amores

NIPS Workshop on Machine Learning for Creativity and Design. 2017.

Distributional Adversarial Networks

Chengtao Li*, David Alvarez-Melis*, Keyulu Xu, Stefanie Jegelka, Suvrit Sra

ICLR'17: International Conference on Learning Representations (Workshop track). 2017.

A Causal Framework for Explaining the Predictions of Black-Box Sequence-to-Sequence Models

David Alvarez-Melis, Tommi S. Jaakkola

EMNLP'17: Empirical Methods in Natural Language Processing. 2017.

Tree-structured Decoding with Doubly-recurrent Neural Networks

David Alvarez-Melis, Tommi S. Jaakkola

ICLR'17: International Conference on Learning Representations. 2017.

Word Embeddings as Metric Recovery in Semantic Spaces

Tatsunori B. Hashimoto, David Alvarez-Melis, Tommi S. Jaakkola

TACL: Transactions of the Association for Computational Linguistics. 2016. (presented at ACL'16).

Topic Modeling in Twitter: Aggregating Tweets by Conversations

David Alvarez-Melis*, Martin Saveski*

ICWSM'16: International AAAI Conference on Web and Social Media. 2016. (Short Paper)

Word, graph and manifold embedding from Markov processes

Tatsunori B. Hashimoto, David Alvarez-Melis, Tommi S. Jaakkola

NIPS 2015 Workshop on Nonparametric Methods for Large Scale Representation Learning. Oral presentation.

A translation of 'The characteristic function of a random phenomenon' by Bruno de Finetti

David Alvarez-Melis, Tamara Broderick

Translation. 2015.

The Matrix Multiplicative Weights Algorithm for Domain Adaptation

David Alvarez-Melis (advisor: Mehryar Mohri)

MS Thesis, Courant Institute. 2013.

pdf

Lax-Milgram's Theorem: Generalizations and Applications

David Alvarez-Melis (advisor: Carlos Bosch Giral)

BSc Thesis, ITAM. 2011.

pdf

Research Group and Advisees

Current Group Members
Former Students, Interns, and Advisees

Teaching

Current and past courses I have taught or TA'd:

Teaching Philosophy
Explaining is understanding.

The following extract is from David Goodstein's book on Feynman:

"Feynman was a truly great teacher. He prided himself on being able to devise ways to explain even the most profound ideas to beginning students. Once, I said to him, "Dick, explain to me, so that I can understand it, why spin one-half particles obey Fermi-Dirac statistics." Sizing up his audience perfectly, Feynman said, "I'll prepare a freshman lecture on it." But he came back a few days later to say, "I couldn't do it. I couldn't reduce it to the freshman level. That means we don't really understand it."

Vitæ

Full CV in PDF (or a shorter Résumé).

  • Harvard University 2023 --
    Assistant Professor

  • Microsoft Research, New England 2021 -- now
    Senior Researcher

  • Microsoft Research, New England 2019 -- 2021
    Postdoctoral Researcher

  • MIT CSAIL 2014 -- 2019
    Ph.D. in Computer Science
    Minor: Mathematical Optimization
    Thesis Advisor: Tommi Jaakkola
  • Microsoft Research, NYC Summer 2018
    Research Intern
    Mentors: H. Wallach, J.W. Vaughan, H. Daume III
  • Microsoft Research, Redmond Summer 2016
    Research Intern
    Mentors: S. Yih, M.W. Chang, K. Toutanova, C. Meek
  • IBM Research 2013 -- 2014
    Supplemental Researcher
    Speech Recognition Group
  • Courant Institute, NYU 2011 -- 2013
    MS in Mathematics
    Thesis Advisor: Mehryar Mohri
  • ITAM 2006 -- 2011
    BSc in Applied Mathematics
    Thesis Advisor: Carlos Bosch

Prospective Students/Postdocs/Interns

I am always looking for motivated students and postdocs to join my group. Unfortunately, I am not able to respond to all emails, so, depending on your situation, please follow one of the following routes:

  • You are a prospective PhD student: please apply directly to our PhD program and list me as a faculty of interest. Applying to either CS or Applied Math is fine. Admissions are centralized and decided by committee.
  • You are a student already at Harvard: send me an email with the subject line "[Prospect] Interested in joining your group (Harvard Student)" and a brief description of your interests and background.
  • You are a student interested in an internship or visiting position: send me an email with the subject line "[Prospect] Interested in joining your group (External Student)" and a brief description of your interests and background. Update: I am not currently taking new visiting students for Fall 2024. Please check back later!
  • You are a prospective Postdoc: send me an email with the subject line "[Prospect] Interested in joining your group (Postdoc)" with a CV, link to your publications, and a brief description of your interests and background.
In any of the situations above, please include the word "Monge" anywhere in the main body of your email, to show me you've read this page and are not just sending a generic email to all faculty (I learnt this trick from Finale).

If your email is not formatted as above, my filters won't catch it so I will almost certainly not see it.

Misc

Outside of research, I enjoy running, brewing beer, and playing guitar. I also like quotes. Here are a few more:

"We cannot solve our problems with the same thinking we used when we created them." - A. Einstein
"The real danger is not that computers will begin to think like men, but that men will begin to think like computers" - Sydney J. Harris

Meta

This website was built with Jekyll, based on a template by the one and only Martin Saveski.