Falcon: Fair Active Learning Using Multi-Armed Bandits
Recommendations
Thompson sampling for budgeted multi-armed bandits
IJCAI'15: Proceedings of the 24th International Conference on Artificial Intelligence
Thompson sampling is one of the earliest randomized algorithms for multi-armed bandits (MAB). In this paper, we extend Thompson sampling to budgeted MAB, where pulling an arm incurs a random cost and the total cost is constrained by a budget. We ...
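The budgeted setting described above can be sketched in a few lines. This is an illustrative sketch only, not the paper's algorithm: it assumes Bernoulli rewards and Bernoulli costs, keeps a Beta posterior for each arm's reward and cost rate, and pulls the arm with the best sampled reward-to-cost ratio until the budget runs out. The class and function names here are hypothetical.

```python
import random


class BudgetedThompsonSampling:
    """Illustrative budgeted Thompson sampling sketch (assumption: Bernoulli
    rewards and costs; pick the arm with the highest sampled reward/cost)."""

    def __init__(self, n_arms, rng=None):
        self.rng = rng or random.Random(0)
        # Beta(1, 1) priors over each arm's reward rate and cost rate.
        self.reward_ab = [[1, 1] for _ in range(n_arms)]
        self.cost_ab = [[1, 1] for _ in range(n_arms)]

    def select_arm(self):
        # Sample a plausible reward rate and cost rate for each arm,
        # then choose the arm with the best sampled ratio.
        def sampled_ratio(i):
            r = self.rng.betavariate(*self.reward_ab[i])
            c = self.rng.betavariate(*self.cost_ab[i])
            return r / max(c, 1e-9)

        return max(range(len(self.reward_ab)), key=sampled_ratio)

    def update(self, arm, reward, cost):
        # Bayesian update: success bumps alpha, failure bumps beta.
        self.reward_ab[arm][0 if reward else 1] += 1
        self.cost_ab[arm][0 if cost else 1] += 1


def run(true_rewards, true_costs, budget, seed=0):
    """Simulate pulls until the cumulative cost reaches the budget."""
    rng = random.Random(seed)
    agent = BudgetedThompsonSampling(len(true_rewards), rng)
    total_reward, spent = 0, 0
    while spent < budget:
        arm = agent.select_arm()
        reward = int(rng.random() < true_rewards[arm])
        cost = int(rng.random() < true_costs[arm])
        agent.update(arm, reward, cost)
        total_reward += reward
        spent += cost
    return total_reward
```

Because the budget is spent only when a pull actually incurs cost, a cheap high-reward arm allows many more pulls than an expensive one, which is why the sampled reward-to-cost ratio, rather than the reward alone, drives arm selection.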
Budgeted Combinatorial Multi-Armed Bandits
AAMAS '22: Proceedings of the 21st International Conference on Autonomous Agents and Multiagent Systems
We consider a budgeted combinatorial multi-armed bandit setting where, in every round, the algorithm selects a super-arm consisting of one or more arms. The goal is to minimize the total expected regret after all rounds within a limited budget. Existing ...
Fair active learning
Machine learning (ML) is increasingly being used in high-stakes applications impacting society. Therefore, it is of critical importance that ML models do not propagate discrimination. Collecting accurate labeled data in societal ...
Highlights
- We introduce fair active learning to mitigate bias in limited labeled data problems.
Publisher: VLDB Endowment