
JMLR Editorial Board

Editors-in-Chief

Managing Editors

Editorial Assistant

Production Editor

Webmaster

JMLR Action Editors

  • Alekh Agarwal, Google Research, USA. Reinforcement Learning, Online Learning, Bandits, Learning Theory.
  • Edo Airoldi, Harvard University, USA. Statistics, approximate inference, causal inference, network data analysis, computational biology.
  • Genevera Allen, Columbia University. Statistical machine learning, high-dimensional statistics, modern multivariate analysis, graphical models, data integration, tensor decompositions.
  • Pierre Alquier, ESSEC Asia-Pacific. Statistical learning theory, PAC-Bayes learning, approximate Bayesian inference, variational inference, high-dimensional statistics.
  • Animashree Anandkumar, California Institute of Technology, USA. Tensor decomposition, non-convex optimization, probabilistic models, reinforcement learning.
  • Krishnakumar Balasubramanian, University of California, Davis. Sampling and stochastic optimization, kernel methods, geometric data analysis, statistical learning theory.
  • Arindam Banerjee, UIUC. Bandits, generative models, deep learning, optimization, learning theory, federated learning.
  • Elias Bareinboim, Columbia University. Causal inference, generalizability, fairness, reinforcement learning.
  • Yoshua Bengio, University of Montreal, Canada / Mila. Deep learning, learning to reason.
  • Samy Bengio, Apple, USA. Deep learning, representation learning.
  • Quentin Berthet, Google DeepMind. Convex optimization, optimal transport, differentiable programming, high-dimensional statistics.
  • Alexandre Bouchard, UBC. MCMC, SMC, phylogenetics.
  • Joan Bruna, NYU, USA. Deep learning theory, signal processing, statistics.
  • Miguel Carreira-Perpinan, University of California, Merced, USA. Decision trees and forests, neural network compression, optimization in deep learning.
  • Kai-Wei Chang.
  • Silvia Chiappa, DeepMind. Causal inference, approximate Bayesian inference, variational inference, ML fairness.
  • Kyle Cranmer, University of Wisconsin-Madison. AI/ML for science, probabilistic ML, approximate inference, geometric deep learning.
  • Marco Cuturi, Apple. Optimal transport, geometric methods.
  • Florence d'Alche-Buc, Telecom Paris, Institut Polytechnique de Paris. Kernel methods, complex output prediction, robustness, explainability, bioinformatics.
  • Luc De Raedt, Katholieke Universiteit Leuven, Belgium. (Statistical) relational learning, inductive logic programming, symbolic machine learning, probabilistic programming, learning from structured data, pattern mining.
  • Gal Elidan, Hebrew University, Israel.
  • Barbara Engelhardt, Stanford University, USA. Latent factor models, computational biology, statistical inference, hierarchical models.
  • Kenji Fukumizu, The Institute of Statistical Mathematics, Japan. Kernel methods, dimension reduction.
  • Christophe Giraud, Universite Paris-Saclay. Clustering, network analysis, algorithmic fairness, active learning, theory of neural networks, high-dimensional statistics.
  • Manuel Gomez-Rodriguez, Max Planck Institute for Software Systems. Fairness, interpretability, accountability, strategic behavior, human-AI collaboration, temporal point processes.
  • Russ Greiner, University of Alberta, Canada. Medical informatics, active/budgeted learning.
  • Quanquan Gu, UCLA. Optimization, theory of deep learning, reinforcement learning, LLMs, deep generative models, high-dimensional statistics.
  • Benjamin Guedj, Inria and University College London, France and UK. Learning theory, PAC-Bayes, computational statistics, high-dimensional statistics, theory of deep learning, probabilistic models, Bayesian inference.
  • Rajarshi Guhaniyogi, Texas A&M University. Spatial and spatio-temporal Bayesian methods for large data, Bayes theory and methods for high-dimensional regressions, tensor- and network-valued regressions, functional data analysis, approximate Bayesian inference, graphical models, applications in neuroimaging and environmental sciences.
  • Maya Gupta, University of Washington. Fairness, interpretability, societal issues, safety, regression, ensembles, shape constraints, immunology, information theory.
  • Matthew Hoffman, Google. Bayesian inference, Markov chain Monte Carlo, sequential Monte Carlo, variational inference.
  • Aapo Hyvarinen, University of Helsinki, Finland. Unsupervised learning, natural image statistics, neuroimaging data analysis.
  • Tommi Jaakkola, Massachusetts Institute of Technology, USA. Approximate inference, structured prediction, deep learning.
  • Prateek Jain, Microsoft Research, India. Non-convex optimization, stochastic optimization, large-scale optimization, resource-constrained machine learning.
  • Kevin Jamieson, University of Washington. Multi-armed bandits, active learning, experimental design.
  • Nan Jiang, University of Illinois Urbana-Champaign. Reinforcement learning theory.
  • Varun Kanade, University of Oxford. Learning theory, online learning, computational complexity, optimization.
  • Samuel Kaski, Aalto University, Finland. Probabilistic modelling, multiple data sources (multi-view, multi-task, multimodal, retrieval); applications in bioinformatics, user interaction, brain signal analysis.
  • Sathiya Keerthi, Microsoft Research, USA. Optimization, large margin methods, structured prediction, large-scale learning, distributed training.
  • Mohammad Emtiyaz Khan, RIKEN Center for Advanced Intelligence, Japan. Variational inference, approximate Bayesian inference, Bayesian deep learning.
  • Mladen Kolar, University of Southern California, USA. High-dimensional statistics, graphical models.
  • George Konidaris, Duke University, USA. Reinforcement learning, artificial intelligence, robotics.
  • Aryeh Kontorovich, Ben-Gurion University. Metric spaces, nearest neighbors, Markov chains, statistics.
  • Wouter Koolen, CWI, Amsterdam. Online learning, bandits, pure exploration, e-values.
  • Alp Kucukelbir, Columbia University & Fero Labs. Variational inference, statistical machine learning, approximate inference, diffusion, probabilistic programming.
  • Brian Kulis, Boston University. Deep learning, clustering, kernel methods, metric learning, self-supervised learning, audio applications, vision applications.
  • Sanjiv Kumar, Google Research. Representation learning, optimization, deep learning, hashing, nearest neighbor search.
  • Eric Laber, Duke University. Reinforcement learning, precision medicine, treatment regimes, causal inference.
  • Christoph Lampert, Institute of Science and Technology Austria (IST Austria). Transfer learning, trustworthy learning, computer vision.
  • Tor Lattimore, DeepMind. Bandits, reinforcement learning, online learning.
  • Nicolas Le Roux, Microsoft Research, Montreal. Optimization, reinforcement learning.
  • Anthony Lee, University of Bristol. Markov chain Monte Carlo, sequential Monte Carlo.
  • Honglak Lee, Google and University of Michigan, Ann Arbor. Deep learning, deep generative models, representation learning, reinforcement learning, unsupervised learning.
  • Qiang Liu, Dartmouth College, USA. Probabilistic graphical models, inference and learning, computational models for crowdsourcing.
  • Jianfeng Lu, Duke University. Monte Carlo sampling, scientific machine learning, generative models.
  • Gabor Lugosi, Pompeu Fabra University, Spain. Statistical learning theory, online prediction, concentration inequalities.
  • Shiqian Ma, Rice University. First-order methods, stochastic algorithms, bilevel optimization, Riemannian optimization.
  • Michael Mahoney, University of California at Berkeley, USA. Randomized linear algebra, stochastic optimization, neural networks, matrix algorithms, graph algorithms, scientific machine learning.
  • Stephan Mandt. Variational inference, deep latent variable models, machine learning and physics, neural data compression.
  • Vikash Mansinghka, Massachusetts Institute of Technology, USA. Probabilistic programming, Bayesian structure learning, large-scale sequential Monte Carlo.
  • Rahul Mazumder, Massachusetts Institute of Technology. Mathematical optimization, high-dimensional statistics, sparsity, boosting, nonparametric statistics, shape-constrained estimation, decision tree ensembles, compressing large neural networks.
  • Qiaozhu Mei, University of Michigan, USA. Learning from text, network, and behavioral data; representation learning; interactive learning.
  • Vahab Mirrokni, Google Research. Mechanism design and internet economics, algorithmic game theory, distributed optimization, submodular optimization, large-scale graph mining.
  • Mehryar Mohri, New York University, USA. Learning theory (all aspects, including auctioning, ensemble methods, structured prediction, time series, online learning, games, adaptation, learning kernels, spectral learning, ranking, low-rank approximation).
  • Sayan Mukherjee, Duke University, USA; University of Leipzig; Max Planck Institute for Mathematics in the Sciences. Bayesian methods, time series, geometry, topology, deep learning.
  • Gergely Neu. Reinforcement learning, learning theory, online learning, bandit theory.
  • Lam Nguyen, IBM Research, Thomas J. Watson Research Center. Stochastic gradient algorithms, non-convex optimization, stochastic optimization, convex optimization.
  • Chris Oates, Newcastle University. Bayesian computation, kernel methods, uncertainty quantification.
  • Laurent Orseau, DeepMind. Reinforcement learning, artificial general intelligence.
  • Debdeep Pati, Texas A&M University. Bayes theory and methods in high dimensions, approximate Bayesian methods, high-dimensional network analysis, graphical models, hierarchical modeling of complex shapes, point pattern data modeling, real-time tracking algorithms.
  • Jie Peng, University of California, Davis, USA. High-dimensional statistical inference, graphical models, functional data analysis.
  • Vianney Perchet, ENSAE & Criteo AI Lab. Bandits, online learning, matching.
  • Massimiliano Pontil, Istituto Italiano di Tecnologia (Italy), University College London (UK). Multitask and transfer learning, convex optimization, kernel methods, sparsity regularization.
  • Alexandre Proutiere, KTH Royal Institute of Technology. Reinforcement learning, statistical learning in control systems, bandits, clustering and community detection.
  • Maxim Raginsky, University of Illinois at Urbana-Champaign. Theory of deep learning, statistical learning, optimization, applied probability, concentration of measure, dynamical systems and control.
  • Peter Richtarik, King Abdullah University of Science and Technology (KAUST). Convex and nonconvex optimization, stochastic zero-, first- and second-order methods, distributed training, federated learning, communication compression, operator splitting, efficient ML.
  • Lorenzo Rosasco, University of Genova, Italy and Massachusetts Institute of Technology, USA. Statistical learning theory, optimization, regularization, inverse problems.
  • Daniel Roy, University of Toronto. Generalization, learning theory, deep learning, PAC-Bayes, nonparametric Bayes, online learning, nonvacuous bounds.
  • Sivan Sabato, Ben Gurion University of the Negev. Statistical learning theory, active learning, interactive learning.
  • Ruslan Salakhutdinov, Carnegie Mellon University. Deep learning, probabilistic graphical models, large-scale optimization.
  • Joseph Salmon, Inria. High-dimensional statistics, convex optimization, crowdsourcing.
  • Christian Shelton, UC Riverside, USA. Time series, temporal and spatial processes, point processes.
  • Xiaotong Shen, University of Minnesota, USA. Learning, graphical models, recommenders.
  • Ali Shojaie, University of Washington. High-dimensional statistics, statistical learning, graphical models, network analysis.
  • Ilya Shpitser, Johns Hopkins University. Causal inference, missing data, algorithmic fairness, semi-parametric statistics.
  • Mahdi Soltanolkotabi.
  • David Sontag, Massachusetts Institute of Technology. Graphical models, approximate inference, structured prediction, unsupervised learning, applications to health care.
  • Bharath Sriperumbudur, Pennsylvania State University. Kernel methods, regularization, theory of functions and spaces, statistical learning theory, nonparametric estimation and testing, functional data analysis, topological data analysis.
  • Ingo Steinwart, University of Stuttgart, Germany. Statistical learning theory, kernel-based learning methods (support vector machines), cluster analysis, loss functions.
  • Weijie Su, University of Pennsylvania. Differential privacy, deep learning theory, LLMs, high-dimensional statistics, optimization.
  • Csaba Szepesvari, University of Alberta, Canada. Reinforcement learning, sequential decision making, learning theory.
  • Ambuj Tewari, University of Michigan, USA. Learning theory, online learning, bandit problems, reinforcement learning, optimization, high-dimensional statistics.
  • Jin Tian, Mohamed bin Zayed University of Artificial Intelligence. Causal inference, Bayesian networks, probabilistic graphical models.
  • Koji Tsuda, National Institute of Advanced Industrial Science and Technology, Japan.
  • Jean-Philippe Vert, Google, France. Kernel methods, computational biology, statistical learning theory.
  • Silvia Villa, Genova University, Italy. Optimization, convex optimization, first-order methods, regularization.
  • Manfred Warmuth, Google Research.
  • Kilian Weinberger, Cornell University, USA. Deep learning, representation learning, ranking, computer vision.
  • Martha White, University of Alberta. Reinforcement learning, representation learning.
  • Zhihua Zhang, Peking University, China. Bayesian analysis and computations, numerical algebra and optimization.
  • Mingyuan Zhou, The University of Texas at Austin. Approximate inference, Bayesian methods, deep generative models, discrete data analysis.
  • Zhengyuan Zhou. Contextual bandits, online learning, game theory.
  • Ji Zhu, University of Michigan, Ann Arbor. Network data analysis, latent variable models, graphical models, high-dimensional data, health analytics.

JMLR-MLOSS Editors

JMLR Editorial Board of Reviewers

The editorial board of reviewers is a group of trusted reviewers who commit to reviewing at least two papers per year. Please reach out to us at editor@jmlr.org if you would like to volunteer to join this list of trusted reviewers.

JMLR Advisory Board

© JMLR 2024.