Benjamin Eysenbach
2020 – today
- 2024
- [c41] Raj Ghugare, Matthieu Geist, Glen Berseth, Benjamin Eysenbach: Closing the Gap between TD Learning and Supervised Learning - A Generalisation Point of View. ICLR 2024
- [c40] Yongyuan Liang, Yanchao Sun, Ruijie Zheng, Xiangyu Liu, Benjamin Eysenbach, Tuomas Sandholm, Furong Huang, Stephen Marcus McAleer: Game-Theoretic Robust Reinforcement Learning Handles Temporally-Coupled Perturbations. ICLR 2024
- [c39] Tianwei Ni, Benjamin Eysenbach, Erfan Seyedsalehi, Michel Ma, Clement Gehring, Aditya Mahajan, Pierre-Luc Bacon: Bridging State and History Representations: Understanding Self-Predictive RL. ICLR 2024
- [c38] Chongyi Zheng, Benjamin Eysenbach, Homer Rich Walke, Patrick Yin, Kuan Fang, Ruslan Salakhutdinov, Sergey Levine: Stabilizing Contrastive RL: Techniques for Robotic Goal Reaching from Offline Data. ICLR 2024
- [c37] Chongyi Zheng, Ruslan Salakhutdinov, Benjamin Eysenbach: Contrastive Difference Predictive Coding. ICLR 2024
- [c36] Ifigeneia Apostolopoulou, Benjamin Eysenbach, Frank Nielsen, Artur Dubrawski: A Rate-Distortion View of Uncertainty Quantification. ICML 2024
- [c35] Vivek Myers, Chongyi Zheng, Anca D. Dragan, Sergey Levine, Benjamin Eysenbach: Learning Temporal Distances: Contrastive Successor Features Can Provide a Metric Structure for Decision-Making. ICML 2024
- [i49] Tianwei Ni, Benjamin Eysenbach, Erfan Seyedsalehi, Michel Ma, Clement Gehring, Aditya Mahajan, Pierre-Luc Bacon: Bridging State and History Representations: Understanding Self-Predictive RL. CoRR abs/2401.08898 (2024)
- [i48] Raj Ghugare, Matthieu Geist, Glen Berseth, Benjamin Eysenbach: Closing the Gap between TD Learning and Supervised Learning - A Generalisation Point of View. CoRR abs/2401.11237 (2024)
- [i47] Benjamin Eysenbach, Vivek Myers, Ruslan Salakhutdinov, Sergey Levine: Inference via Interpolation: Contrastive Representations Provably Enable Planning and Inference. CoRR abs/2403.04082 (2024)
- [i46] Ifigeneia Apostolopoulou, Benjamin Eysenbach, Frank Nielsen, Artur Dubrawski: A Rate-Distortion View of Uncertainty Quantification. CoRR abs/2406.10775 (2024)
- [i45] Vivek Myers, Chongyi Zheng, Anca D. Dragan, Sergey Levine, Benjamin Eysenbach: Learning Temporal Distances: Contrastive Successor Features Can Provide a Metric Structure for Decision-Making. CoRR abs/2406.17098 (2024)
- [i44] Grace Liu, Michael Tang, Benjamin Eysenbach: A Single Goal is All You Need: Skills and Exploration Emerge from Contrastive RL without Rewards, Demonstrations, or Subgoals. CoRR abs/2408.05804 (2024)
- [i43] Michal Bortkiewicz, Wladek Palucki, Vivek Myers, Tadeusz Dziarmaga, Tomasz Arczewski, Lukasz Kucinski, Benjamin Eysenbach: Accelerating Goal-Conditioned RL Algorithms and Research. CoRR abs/2408.11052 (2024)
- 2023
- [c34] Bogdan Mazoure, Benjamin Eysenbach, Ofir Nachum, Jonathan Tompson: Contrastive Value Learning: Implicit Models for Simple Offline RL. CoRL 2023: 1257-1267
- [c33] Raj Ghugare, Homanga Bharadhwaj, Benjamin Eysenbach, Sergey Levine, Russ Salakhutdinov: Simplifying Model-based RL: Learning Representations, Latent-space Models, and Policies with One Objective. ICLR 2023
- [c32] Amrith Setlur, Don Kurian Dennis, Benjamin Eysenbach, Aditi Raghunathan, Chelsea Finn, Virginia Smith, Sergey Levine: Bitrate-Constrained DRO: Beyond Worst Case Robustness To Unknown Group Shifts. ICLR 2023
- [c31] Benjamin Eysenbach, Matthieu Geist, Sergey Levine, Ruslan Salakhutdinov: A Connection between One-Step RL and Critic Regularization in Reinforcement Learning. ICML 2023: 9485-9507
- [c30] Kyle Beltran Hatch, Benjamin Eysenbach, Rafael Rafailov, Tianhe Yu, Ruslan Salakhutdinov, Sergey Levine, Chelsea Finn: Contrastive Example-Based Control. L4DC 2023: 155-169
- [c29] Tianwei Ni, Michel Ma, Benjamin Eysenbach, Pierre-Luc Bacon: When Do Transformers Shine in RL? Decoupling Memory from Credit Assignment. NeurIPS 2023
- [c28] Seohong Park, Dibya Ghosh, Benjamin Eysenbach, Sergey Levine: HIQL: Offline Goal-Conditioned RL with Latent States as Actions. NeurIPS 2023
- [i42] Amrith Setlur, Don Kurian Dennis, Benjamin Eysenbach, Aditi Raghunathan, Chelsea Finn, Virginia Smith, Sergey Levine: Bitrate-Constrained DRO: Beyond Worst Case Robustness To Unknown Group Shifts. CoRR abs/2302.02931 (2023)
- [i41] Chongyi Zheng, Benjamin Eysenbach, Homer Walke, Patrick Yin, Kuan Fang, Ruslan Salakhutdinov, Sergey Levine: Stabilizing Contrastive RL: Techniques for Offline Goal Reaching. CoRR abs/2306.03346 (2023)
- [i40] Tianwei Ni, Michel Ma, Benjamin Eysenbach, Pierre-Luc Bacon: When Do Transformers Shine in RL? Decoupling Memory from Credit Assignment. CoRR abs/2307.03864 (2023)
- [i39] Seohong Park, Dibya Ghosh, Benjamin Eysenbach, Sergey Levine: HIQL: Offline Goal-Conditioned RL with Latent States as Actions. CoRR abs/2307.11949 (2023)
- [i38] Benjamin Eysenbach, Matthieu Geist, Sergey Levine, Ruslan Salakhutdinov: A Connection between One-Step Regularization and Critic Regularization in Reinforcement Learning. CoRR abs/2307.12968 (2023)
- [i37] Kyle Hatch, Benjamin Eysenbach, Rafael Rafailov, Tianhe Yu, Ruslan Salakhutdinov, Sergey Levine, Chelsea Finn: Contrastive Example-Based Control. CoRR abs/2307.13101 (2023)
- [i36] Chongyi Zheng, Ruslan Salakhutdinov, Benjamin Eysenbach: Contrastive Difference Predictive Coding. CoRR abs/2310.20141 (2023)
- 2022
- [c27] Scott Emmons, Benjamin Eysenbach, Ilya Kostrikov, Sergey Levine: RvS: What is Essential for Offline RL via Supervised Learning? ICLR 2022
- [c26] Benjamin Eysenbach, Sergey Levine: Maximum Entropy RL (Provably) Solves Some Robust RL Problems. ICLR 2022
- [c25] Benjamin Eysenbach, Ruslan Salakhutdinov, Sergey Levine: The Information Geometry of Unsupervised Reinforcement Learning. ICLR 2022
- [c24] Tianjun Zhang, Benjamin Eysenbach, Ruslan Salakhutdinov, Sergey Levine, Joseph E. Gonzalez: C-Planning: An Automatic Curriculum for Learning Goal-Reaching Tasks. ICLR 2022
- [c23] Tianwei Ni, Benjamin Eysenbach, Ruslan Salakhutdinov: Recurrent Model-Free RL Can Be a Strong Baseline for Many POMDPs. ICML 2022: 16691-16723
- [c22] Benjamin Eysenbach, Alexander Khazatsky, Sergey Levine, Ruslan Salakhutdinov: Mismatched No More: Joint Model-Policy Optimization for Model-Based RL. NeurIPS 2022
- [c21] Benjamin Eysenbach, Soumith Udatha, Russ Salakhutdinov, Sergey Levine: Imitating Past Successes can be Very Suboptimal. NeurIPS 2022
- [c20] Benjamin Eysenbach, Tianjun Zhang, Sergey Levine, Ruslan Salakhutdinov: Contrastive Learning as Goal-Conditioned Reinforcement Learning. NeurIPS 2022
- [c19] Yiding Jiang, Evan Zheran Liu, Benjamin Eysenbach, J. Zico Kolter, Chelsea Finn: Learning Options via Compression. NeurIPS 2022
- [c18] Amrith Setlur, Benjamin Eysenbach, Virginia Smith, Sergey Levine: Adversarial Unlearning: Reducing Confidence Along Adversarial Directions. NeurIPS 2022
- [i35] Amrith Setlur, Benjamin Eysenbach, Virginia Smith, Sergey Levine: Adversarial Unlearning: Reducing Confidence Along Adversarial Directions. CoRR abs/2206.01367 (2022)
- [i34] Benjamin Eysenbach, Soumith Udatha, Sergey Levine, Ruslan Salakhutdinov: Imitating Past Successes can be Very Suboptimal. CoRR abs/2206.03378 (2022)
- [i33] Benjamin Eysenbach, Tianjun Zhang, Ruslan Salakhutdinov, Sergey Levine: Contrastive Learning as Goal-Conditioned Reinforcement Learning. CoRR abs/2206.07568 (2022)
- [i32] Raj Ghugare, Homanga Bharadhwaj, Benjamin Eysenbach, Sergey Levine, Ruslan Salakhutdinov: Simplifying Model-based RL: Learning Representations, Latent-space Models, and Policies with One Objective. CoRR abs/2209.08466 (2022)
- [i31] Bogdan Mazoure, Benjamin Eysenbach, Ofir Nachum, Jonathan Tompson: Contrastive Value Learning: Implicit Models for Simple Offline RL. CoRR abs/2211.02100 (2022)
- [i30] Yiding Jiang, Evan Zheran Liu, Benjamin Eysenbach, Zico Kolter, Chelsea Finn: Learning Options via Compression. CoRR abs/2212.04590 (2022)
- 2021
- [c17] Dhruv Shah, Benjamin Eysenbach, Nicholas Rhinehart, Sergey Levine: Rapid Exploration for Open-World Navigation with Latent Goal Models. CoRL 2021: 674-684
- [c16] Benjamin Eysenbach, Shreyas Chaudhari, Swapnil Asawa, Sergey Levine, Ruslan Salakhutdinov: Off-Dynamics Reinforcement Learning: Training for Transfer with Domain Classifiers. ICLR 2021
- [c15] Benjamin Eysenbach, Ruslan Salakhutdinov, Sergey Levine: C-Learning: Learning to Achieve Goals via Recursive Classification. ICLR 2021
- [c14] Dibya Ghosh, Abhishek Gupta, Ashwin Reddy, Justin Fu, Coline Manon Devin, Benjamin Eysenbach, Sergey Levine: Learning to Reach Goals via Iterated Supervised Learning. ICLR 2021
- [c13] Stephen Tian, Suraj Nair, Frederik Ebert, Sudeep Dasari, Benjamin Eysenbach, Chelsea Finn, Sergey Levine: Model-Based Visual Planning with Self-Supervised Functional Distances. ICLR 2021
- [c12] Yevgen Chebotar, Karol Hausman, Yao Lu, Ted Xiao, Dmitry Kalashnikov, Jacob Varley, Alex Irpan, Benjamin Eysenbach, Ryan Julian, Chelsea Finn, Sergey Levine: Actionable Models: Unsupervised Offline Reinforcement Learning of Robotic Skills. ICML 2021: 1518-1528
- [c11] Dhruv Shah, Benjamin Eysenbach, Gregory Kahn, Nicholas Rhinehart, Sergey Levine: ViNG: Learning Open-World Navigation with Visual Goals. ICRA 2021: 13215-13222
- [c10] Ben Eysenbach, Sergey Levine, Ruslan Salakhutdinov: Replacing Rewards with Examples: Example-Based Policy Search via Recursive Classification. NeurIPS 2021: 11541-11552
- [c9] Ben Eysenbach, Ruslan Salakhutdinov, Sergey Levine: Robust Predictable Control. NeurIPS 2021: 27813-27825
- [i29] Benjamin Eysenbach, Sergey Levine: Maximum Entropy RL (Provably) Solves Some Robust RL Problems. CoRR abs/2103.06257 (2021)
- [i28] Benjamin Eysenbach, Sergey Levine, Ruslan Salakhutdinov: Replacing Rewards with Examples: Example-Based Policy Search via Recursive Classification. CoRR abs/2103.12656 (2021)
- [i27] Dhruv Shah, Benjamin Eysenbach, Nicholas Rhinehart, Sergey Levine: RECON: Rapid Exploration for Open-World Navigation with Latent Goal Models. CoRR abs/2104.05859 (2021)
- [i26] Yevgen Chebotar, Karol Hausman, Yao Lu, Ted Xiao, Dmitry Kalashnikov, Jake Varley, Alex Irpan, Benjamin Eysenbach, Ryan Julian, Chelsea Finn, Sergey Levine: Actionable Models: Unsupervised Offline Reinforcement Learning of Robotic Skills. CoRR abs/2104.07749 (2021)
- [i25] Benjamin Eysenbach, Ruslan Salakhutdinov, Sergey Levine: Robust Predictable Control. CoRR abs/2109.03214 (2021)
- [i24] Benjamin Eysenbach, Ruslan Salakhutdinov, Sergey Levine: The Information Geometry of Unsupervised Reinforcement Learning. CoRR abs/2110.02719 (2021)
- [i23] Benjamin Eysenbach, Alexander Khazatsky, Sergey Levine, Ruslan Salakhutdinov: Mismatched No More: Joint Model-Policy Optimization for Model-Based RL. CoRR abs/2110.02758 (2021)
- [i22] Tianwei Ni, Benjamin Eysenbach, Ruslan Salakhutdinov: Recurrent Model-Free RL is a Strong Baseline for Many POMDPs. CoRR abs/2110.05038 (2021)
- [i21] Tianjun Zhang, Benjamin Eysenbach, Ruslan Salakhutdinov, Sergey Levine, Joseph E. Gonzalez: C-Planning: An Automatic Curriculum for Learning Goal-Reaching Tasks. CoRR abs/2110.12080 (2021)
- [i20] Scott Emmons, Benjamin Eysenbach, Ilya Kostrikov, Sergey Levine: RvS: What is Essential for Offline RL via Supervised Learning? CoRR abs/2112.10751 (2021)
- 2020
- [c8] Tianwei Ni, Harshit S. Sikchi, Yufei Wang, Tejus Gupta, Lisa Lee, Ben Eysenbach: f-IRL: Inverse Reinforcement Learning via State Marginal Matching. CoRL 2020: 529-551
- [c7] Ben Eysenbach, Xinyang Geng, Sergey Levine, Ruslan Salakhutdinov: Rewriting History with Inverse RL: Hindsight Inference for Policy Improvement. NeurIPS 2020
- [c6] Lisa Lee, Ben Eysenbach, Ruslan Salakhutdinov, Shixiang Shane Gu, Chelsea Finn: Weakly-Supervised Reinforcement Learning for Controllable Behavior. NeurIPS 2020
- [i19] Benjamin Eysenbach, Xinyang Geng, Sergey Levine, Ruslan Salakhutdinov: Rewriting History with Inverse RL: Hindsight Inference for Policy Improvement. CoRR abs/2002.11089 (2020)
- [i18] Lisa Lee, Benjamin Eysenbach, Ruslan Salakhutdinov, Shixiang Gu, Chelsea Finn: Weakly-Supervised Reinforcement Learning for Controllable Behavior. CoRR abs/2004.02860 (2020)
- [i17] Benjamin Eysenbach, Swapnil Asawa, Shreyas Chaudhari, Ruslan Salakhutdinov, Sergey Levine: Off-Dynamics Reinforcement Learning: Training for Transfer with Domain Classifiers. CoRR abs/2006.13916 (2020)
- [i16] Shuby Deshpande, Benjamin Eysenbach, Jeff Schneider: Interactive Visualization for Debugging RL. CoRR abs/2008.07331 (2020)
- [i15] Krishnan Srinivasan, Benjamin Eysenbach, Sehoon Ha, Jie Tan, Chelsea Finn: Learning to be Safe: Deep RL with a Safety Critic. CoRR abs/2010.14603 (2020)
- [i14] Tianwei Ni, Harshit S. Sikchi, Yufei Wang, Tejus Gupta, Lisa Lee, Benjamin Eysenbach: f-IRL: Inverse Reinforcement Learning via State Marginal Matching. CoRR abs/2011.04709 (2020)
- [i13] Benjamin Eysenbach, Ruslan Salakhutdinov, Sergey Levine: C-Learning: Learning to Achieve Goals via Recursive Classification. CoRR abs/2011.08909 (2020)
- [i12] Dhruv Shah, Benjamin Eysenbach, Gregory Kahn, Nicholas Rhinehart, Sergey Levine: ViNG: Learning Open-World Navigation with Visual Goals. CoRR abs/2012.09812 (2020)
- [i11] Stephen Tian, Suraj Nair, Frederik Ebert, Sudeep Dasari, Benjamin Eysenbach, Chelsea Finn, Sergey Levine: Model-Based Visual Planning with Self-Supervised Functional Distances. CoRR abs/2012.15373 (2020)
2010 – 2019
- 2019
- [c5] Benjamin Eysenbach, Abhishek Gupta, Julian Ibarz, Sergey Levine: Diversity is All You Need: Learning Skills without a Reward Function. ICLR (Poster) 2019
- [c4] Allan Jabri, Kyle Hsu, Abhishek Gupta, Ben Eysenbach, Sergey Levine, Chelsea Finn: Unsupervised Curricula for Visual Meta-Reinforcement Learning. NeurIPS 2019: 10519-10530
- [c3] Ben Eysenbach, Ruslan Salakhutdinov, Sergey Levine: Search on the Replay Buffer: Bridging Planning and Reinforcement Learning. NeurIPS 2019: 15220-15231
- [i10] Benjamin Eysenbach, Ruslan Salakhutdinov, Sergey Levine: Search on the Replay Buffer: Bridging Planning and Reinforcement Learning. CoRR abs/1906.05253 (2019)
- [i9] Lisa Lee, Benjamin Eysenbach, Emilio Parisotto, Eric P. Xing, Sergey Levine, Ruslan Salakhutdinov: Efficient Exploration via State Marginal Matching. CoRR abs/1906.05274 (2019)
- [i8] Benjamin Eysenbach, Sergey Levine: If MaxEnt RL is the Answer, What is the Question? CoRR abs/1910.01913 (2019)
- [i7] Allan Jabri, Kyle Hsu, Ben Eysenbach, Abhishek Gupta, Sergey Levine, Chelsea Finn: Unsupervised Curricula for Visual Meta-Reinforcement Learning. CoRR abs/1912.04226 (2019)
- [i6] Dibya Ghosh, Abhishek Gupta, Justin Fu, Ashwin Reddy, Coline Devin, Benjamin Eysenbach, Sergey Levine: Learning To Reach Goals Without Reinforcement Learning. CoRR abs/1912.06088 (2019)
- 2018
- [j1] Bum Chul Kwon, Ben Eysenbach, Janu Verma, Kenney Ng, Christopher deFilippi, Walter F. Stewart, Adam Perer: Clustervision: Visual Supervision of Unsupervised Clustering. IEEE Trans. Vis. Comput. Graph. 24(1): 142-151 (2018)
- [c2] Benjamin Eysenbach, Shixiang Gu, Julian Ibarz, Sergey Levine: Leave no Trace: Learning to Reset for Safe and Autonomous Reinforcement Learning. ICLR (Poster) 2018
- [c1] John D. Co-Reyes, Yuxuan Liu, Abhishek Gupta, Benjamin Eysenbach, Pieter Abbeel, Sergey Levine: Self-Consistent Trajectory Autoencoder: Hierarchical Reinforcement Learning with Trajectory Embeddings. ICML 2018: 1008-1017
- [i5] Benjamin Eysenbach, Abhishek Gupta, Julian Ibarz, Sergey Levine: Diversity is All You Need: Learning Skills without a Reward Function. CoRR abs/1802.06070 (2018)
- [i4] John D. Co-Reyes, Yuxuan Liu, Abhishek Gupta, Benjamin Eysenbach, Pieter Abbeel, Sergey Levine: Self-Consistent Trajectory Autoencoder: Hierarchical Reinforcement Learning with Trajectory Embeddings. CoRR abs/1806.02813 (2018)
- [i3] Abhishek Gupta, Benjamin Eysenbach, Chelsea Finn, Sergey Levine: Unsupervised Meta-Learning for Reinforcement Learning. CoRR abs/1806.04640 (2018)
- 2017
- [i2] Benjamin Eysenbach, Shixiang Gu, Julian Ibarz, Sergey Levine: Leave no Trace: Learning to Reset for Safe and Autonomous Reinforcement Learning. CoRR abs/1711.06782 (2017)
- 2016
- [i1] Benjamin Eysenbach, Carl Vondrick, Antonio Torralba: Who is Mistaken? CoRR abs/1612.01175 (2016)