Max Vladymyrov
2020 – today
- 2024
  - [j2] Max Vladymyrov, Andrey Zhmoginov, Mark Sandler: Continual HyperTransformer: A Meta-Learner for Continual Few-Shot Learning. Trans. Mach. Learn. Res. 2024 (2024)
  - [i14] Max Vladymyrov, Johannes von Oswald, Mark Sandler, Rong Ge: Linear Transformers are Versatile In-Context Learners. CoRR abs/2402.14180 (2024)
  - [i13] Gus Kristiansen, Mark Sandler, Andrey Zhmoginov, Nolan Miller, Anirudh Goyal, Jihwan Lee, Max Vladymyrov: Narrowing the Focus: Learned Optimizers for Pretrained Models. CoRR abs/2408.09310 (2024)
- 2023
  - [c14] Andrey Zhmoginov, Mark Sandler, Nolan Miller, Gus Kristiansen, Max Vladymyrov: Decentralized Learning with Multi-Headed Distillation. CVPR 2023: 8053-8063
  - [c13] Johannes von Oswald, Eyvind Niklasson, Ettore Randazzo, João Sacramento, Alexander Mordvintsev, Andrey Zhmoginov, Max Vladymyrov: Transformers Learn In-Context by Gradient Descent. ICML 2023: 35151-35174
  - [i12] Mark Sandler, Andrey Zhmoginov, Max Vladymyrov, Nolan Miller: Training trajectories, mini-batch losses and the curious role of the learning rate. CoRR abs/2301.02312 (2023)
  - [i11] Max Vladymyrov, Andrey Zhmoginov, Mark Sandler: Continual Few-Shot Learning Using HyperTransformers. CoRR abs/2301.04584 (2023)
  - [i10] Johannes von Oswald, Eyvind Niklasson, Maximilian Schlegel, Seijin Kobayashi, Nicolas Zucchet, Nino Scherrer, Nolan Miller, Mark Sandler, Blaise Agüera y Arcas, Max Vladymyrov, Razvan Pascanu, João Sacramento: Uncovering mesa-optimization algorithms in Transformers. CoRR abs/2309.05858 (2023)
- 2022
  - [j1] Alexander D'Amour, Katherine A. Heller, Dan Moldovan, Ben Adlam, Babak Alipanahi, Alex Beutel, Christina Chen, Jonathan Deaton, Jacob Eisenstein, Matthew D. Hoffman, Farhad Hormozdiari, Neil Houlsby, Shaobo Hou, Ghassen Jerfel, Alan Karthikesalingam, Mario Lucic, Yi-An Ma, Cory Y. McLean, Diana Mincu, Akinori Mitani, Andrea Montanari, Zachary Nado, Vivek Natarajan, Christopher Nielson, Thomas F. Osborne, Rajiv Raman, Kim Ramasamy, Rory Sayres, Jessica Schrouff, Martin Seneviratne, Shannon Sequeira, Harini Suresh, Victor Veitch, Max Vladymyrov, Xuezhi Wang, Kellie Webster, Steve Yadlowsky, Taedong Yun, Xiaohua Zhai, D. Sculley: Underspecification Presents Challenges for Credibility in Modern Machine Learning. J. Mach. Learn. Res. 23: 226:1-226:61 (2022)
  - [c12] Mark Sandler, Andrey Zhmoginov, Max Vladymyrov, Andrew Jackson: Fine-tuning Image Transformers using Learnable Memory. CVPR 2022: 12145-12154
  - [c11] Utku Evci, Bart van Merrienboer, Thomas Unterthiner, Fabian Pedregosa, Max Vladymyrov: GradMax: Growing Neural Networks using Gradient Information. ICLR 2022
  - [c10] Andrey Zhmoginov, Mark Sandler, Maksym Vladymyrov: HyperTransformer: Model Generation for Supervised and Semi-Supervised Few-Shot Learning. ICML 2022: 27075-27098
  - [i9] Andrey Zhmoginov, Mark Sandler, Max Vladymyrov: HyperTransformer: Model Generation for Supervised and Semi-Supervised Few-Shot Learning. CoRR abs/2201.04182 (2022)
  - [i8] Utku Evci, Max Vladymyrov, Thomas Unterthiner, Bart van Merriënboer, Fabian Pedregosa: GradMax: Growing Neural Networks using Gradient Information. CoRR abs/2201.05125 (2022)
  - [i7] Mark Sandler, Andrey Zhmoginov, Max Vladymyrov, Andrew Jackson: Fine-tuning Image Transformers using Learnable Memory. CoRR abs/2203.15243 (2022)
  - [i6] Andrey Zhmoginov, Mark Sandler, Nolan Miller, Gus Kristiansen, Max Vladymyrov: Decentralized Learning with Multi-Headed Distillation. CoRR abs/2211.15774 (2022)
  - [i5] Johannes von Oswald, Eyvind Niklasson, Ettore Randazzo, João Sacramento, Alexander Mordvintsev, Andrey Zhmoginov, Max Vladymyrov: Transformers learn in-context by gradient descent. CoRR abs/2212.07677 (2022)
- 2021
  - [c9] Mark Sandler, Max Vladymyrov, Andrey Zhmoginov, Nolan Miller, Tom Madams, Andrew Jackson, Blaise Agüera y Arcas: Meta-Learning Bidirectional Update Rules. ICML 2021: 9288-9300
  - [i4] Mark Sandler, Max Vladymyrov, Andrey Zhmoginov, Nolan Miller, Andrew Jackson, Tom Madams, Blaise Agüera y Arcas: Meta-Learning Bidirectional Update Rules. CoRR abs/2104.04657 (2021)
- 2020
  - [i3] Alexander D'Amour, Katherine A. Heller, Dan Moldovan, Ben Adlam, Babak Alipanahi, Alex Beutel, Christina Chen, Jonathan Deaton, Jacob Eisenstein, Matthew D. Hoffman, Farhad Hormozdiari, Neil Houlsby, Shaobo Hou, Ghassen Jerfel, Alan Karthikesalingam, Mario Lucic, Yi-An Ma, Cory Y. McLean, Diana Mincu, Akinori Mitani, Andrea Montanari, Zachary Nado, Vivek Natarajan, Christopher Nielson, Thomas F. Osborne, Rajiv Raman, Kim Ramasamy, Rory Sayres, Jessica Schrouff, Martin Seneviratne, Shannon Sequeira, Harini Suresh, Victor Veitch, Max Vladymyrov, Xuezhi Wang, Kellie Webster, Steve Yadlowsky, Taedong Yun, Xiaohua Zhai, D. Sculley: Underspecification Presents Challenges for Credibility in Modern Machine Learning. CoRR abs/2011.03395 (2020)
2010 – 2019
- 2019
  - [c8] Max Vladymyrov: No Pressure! Addressing the Problem of Local Minima in Manifold Learning Algorithms. NeurIPS 2019: 678-687
  - [i2] Max Vladymyrov: No Pressure! Addressing the Problem of Local Minima in Manifold Learning Algorithms. CoRR abs/1906.11389 (2019)
- 2017
  - [c7] Max Vladymyrov, Miguel Á. Carreira-Perpiñán: Fast, accurate spectral clustering using locally linear landmarks. IJCNN 2017: 3870-3879
- 2016
  - [c6] Max Vladymyrov, Miguel Á. Carreira-Perpiñán: The Variational Nystrom method for large-scale spectral problems. ICML 2016: 211-220
- 2015
  - [c5] Miguel Á. Carreira-Perpiñán, Max Vladymyrov: A fast, universal algorithm to learn parametric nonlinear embeddings. NIPS 2015: 253-261
- 2014
  - [b1] Maksym Vladymyrov: Large-Scale Methods for Nonlinear Manifold Learning. University of California, Merced, USA, 2014
  - [c4] Max Vladymyrov, Miguel Á. Carreira-Perpiñán: Linear-time training of nonlinear low-dimensional embeddings. AISTATS 2014: 968-977
- 2013
  - [c3] Max Vladymyrov, Miguel Á. Carreira-Perpiñán: Entropic Affinities: Properties and Efficient Numerical Computation. ICML (3) 2013: 477-485
  - [c2] Max Vladymyrov, Miguel Á. Carreira-Perpiñán: Locally Linear Landmarks for Large-Scale Manifold Learning. ECML/PKDD (3) 2013: 256-271
- 2012
  - [c1] Max Vladymyrov, Miguel Á. Carreira-Perpiñán: Fast Training of Nonlinear Embedding Algorithms. ICML 2012
  - [i1] Max Vladymyrov, Miguel Á. Carreira-Perpiñán: Partial-Hessian Strategies for Fast Learning of Nonlinear Embeddings. CoRR abs/1206.4646 (2012)