APReL: A Library for Active Preference-based Reward Learning Algorithms
Abstract
Supplemental Material (34.61 MB)
Recommendations
Batch Active Learning of Reward Functions from Human Preferences
Data generation and labeling are often expensive in robot learning. Preference-based learning is a concept that enables reliable labeling by querying users with preference questions. Active querying methods are commonly employed in preference-based ...
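The active-querying idea described above can be sketched generically: keep a sample-based belief over the unknown reward weights, score candidate preference queries by how uncertain the predicted answer is, ask the most uncertain one, and reweight the belief with the answer's likelihood. The sketch below is illustrative only, assuming a Bradley-Terry answer model and a simple answer-entropy heuristic; none of the names reflect APReL's actual API.

```python
import numpy as np

rng = np.random.default_rng(0)

def preference_likelihood(w, phi_a, phi_b):
    """Bradley-Terry model: P(user prefers a over b | reward weights w)."""
    return 1.0 / (1.0 + np.exp(-(w @ (phi_a - phi_b))))

# Hypothetical trajectory feature vectors (2-D for illustration)
trajectories = rng.normal(size=(12, 2))

# Sample-based belief over the unknown reward weights
belief = rng.normal(size=(400, 2))

def answer_entropy(phi_a, phi_b, belief):
    """Uncertainty of the predicted answer under the current belief;
    high entropy means the query is informative to ask."""
    p = np.mean([preference_likelihood(w, phi_a, phi_b) for w in belief])
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

# Active querying: ask about the pair whose answer we are least sure of
pairs = [(i, j) for i in range(len(trajectories))
         for j in range(i + 1, len(trajectories))]
i, j = max(pairs, key=lambda ij: answer_entropy(
    trajectories[ij[0]], trajectories[ij[1]], belief))

# A simulated user with hidden "true" weights answers the query; the
# belief is then reweighted by the answer's likelihood (Bayes update)
true_w = np.array([1.0, -0.5])
prefers_a = rng.random() < preference_likelihood(
    true_w, trajectories[i], trajectories[j])
lik = np.array([preference_likelihood(w, trajectories[i], trajectories[j])
                for w in belief])
weights = lik if prefers_a else 1.0 - lik
weights /= weights.sum()
posterior_mean = weights @ belief  # belief summary after one query
```

Batching, as in the paper above, would select several high-entropy queries at once (subject to a diversity criterion) instead of a single pair per belief update.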
Active preference-based Gaussian process regression for reward learning and optimization
Designing reward functions is a difficult task in AI and robotics: directly specifying all the desirable behaviors a robot should optimize is often challenging for humans. A popular solution is to learn reward functions using ...
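Gaussian process regression in this setting places a nonparametric prior over the reward as a function of trajectory features and conditions it on observed reward evidence. The snippet below is a minimal textbook GP-posterior sketch under an RBF kernel with made-up data; it is not the paper's method or APReL's implementation.

```python
import numpy as np

def rbf_kernel(X1, X2, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel between two sets of feature points."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

# Hypothetical: noisy reward estimates at a few 1-D trajectory features
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0.0, 0.8, 0.9, 0.1])
noise = 1e-2

# Standard GP posterior via a Cholesky factorization of the train kernel
K = rbf_kernel(X, X) + noise * np.eye(len(X))
Xs = np.linspace(-1.0, 4.0, 50)[:, None]     # query points
Ks = rbf_kernel(Xs, X)
Kss = rbf_kernel(Xs, Xs)

L = np.linalg.cholesky(K)
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
mean = Ks @ alpha                            # posterior mean reward
v = np.linalg.solve(L, Ks.T)
cov = Kss - v.T @ v                          # posterior covariance
std = np.sqrt(np.clip(np.diag(cov), 0.0, None))
```

The posterior standard deviation is what makes active querying and safe reward optimization possible: queries can target feature regions where `std` is large.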
Learning reward functions from diverse sources of human feedback: Optimally integrating demonstrations and preferences
Reward functions are a common way to specify the objective of a robot. As designing reward functions can be extremely challenging, a more promising approach is to directly learn reward functions from human teachers. Importantly, data from human teachers ...
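A common way to integrate heterogeneous feedback, as the work above advocates, is to give each source its own observation model and combine them in one posterior over reward weights: a Boltzmann-rational model for demonstrations and a Bradley-Terry model for preferences, with log-likelihoods simply added. The sketch below is a hedged illustration with fabricated data, not the paper's optimal-integration procedure.

```python
import numpy as np

rng = np.random.default_rng(1)

def demo_loglik(w, phi_demo, phi_alts, beta=1.0):
    """Boltzmann-rational demonstration model:
    P(demo | w) proportional to exp(beta * w . phi) over the alternatives."""
    scores = beta * (phi_alts @ w)
    return beta * (w @ phi_demo) - np.log(np.exp(scores).sum())

def pref_loglik(w, phi_a, phi_b):
    """Bradley-Terry preference model: log P(a preferred over b | w)."""
    return -np.log1p(np.exp(-(w @ (phi_a - phi_b))))

# Hypothetical data: one demonstration among alternatives, one preference
phi_alts = rng.normal(size=(10, 2))
phi_demo = phi_alts[0]                # the demonstrated trajectory
phi_a, phi_b = rng.normal(size=(2, 2))

# Sample-based posterior: log-likelihoods from both sources are added,
# so each feedback type contributes evidence about the same weights
samples = rng.normal(size=(300, 2))
logpost = np.array([
    demo_loglik(w, phi_demo, phi_alts) + pref_loglik(w, phi_a, phi_b)
    for w in samples
])
weights = np.exp(logpost - logpost.max())   # stabilized normalization
weights /= weights.sum()
posterior_mean = weights @ samples
```

Because both models share the same weight vector, cheap-but-weak preference data can refine a posterior seeded by a few expensive demonstrations.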
Published In
- General Chairs: Daisuke Sakamoto, Astrid Weiss
- Program Chairs: Laura M. Hiatt, Masahiro Shiomi
Publisher
IEEE Press
Qualifiers
- Research-article
Funding Sources
- FLI
- NSF
- DARPA