- discussion, October 2024
When Should Algorithms Resign? A Proposal for AI Governance
Algorithmic resignation is a strategic approach for managing the use of artificial intelligence (AI) by embedding governance directly into AI systems. Organizations can thus balance the benefits of automation with the need for human oversight.
- research article, October 2023
FeedbackLogs: Recording and Incorporating Stakeholder Feedback into Machine Learning Pipelines
- Matthew Barker,
- Emma Kallina,
- Dhananjay Ashok,
- Katherine Collins,
- Ashley Casovan,
- Adrian Weller,
- Ameet Talwalkar,
- Valerie Chen,
- Umang Bhatt
EAAMO '23: Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, Article No.: 19, Pages 1–15. https://doi.org/10.1145/3617694.3623239
As machine learning (ML) pipelines affect an increasing array of stakeholders, there is a growing need for documenting how input from stakeholders is recorded and incorporated. We propose FeedbackLogs, addenda to existing documentation of ML pipelines, ...
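For intuition, a FeedbackLog can be pictured as a structured, append-only record attached to a pipeline's existing documentation. The sketch below is purely illustrative, assuming hypothetical field names (`stakeholder`, `pipeline_stage`, `response`) rather than the schema the paper actually proposes.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class FeedbackEntry:
    """One hypothetical log record; field names are assumptions, not the paper's schema."""
    stakeholder: str      # who gave the feedback (e.g., "clinician")
    content: str          # the feedback itself
    pipeline_stage: str   # where in the ML pipeline it applies
    response: str = ""    # how the team incorporated (or declined) it
    logged_on: date = field(default_factory=date.today)

log: list[FeedbackEntry] = []
log.append(FeedbackEntry(
    stakeholder="domain expert",
    content="Model ignores seasonality in admissions data.",
    pipeline_stage="data preprocessing",
    response="Added month-of-year feature; re-ran validation.",
))
for entry in log:
    print(f"[{entry.logged_on}] {entry.stakeholder} -> {entry.pipeline_stage}: {entry.content}")
```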
- research article, August 2023
Human Uncertainty in Concept-Based AI Systems
- Katherine Maeve Collins,
- Matthew Barker,
- Mateo Espinosa Zarlenga,
- Naveen Raman,
- Umang Bhatt,
- Mateja Jamnik,
- Ilia Sucholutsky,
- Adrian Weller,
- Krishnamurthy Dvijotham
AIES '23: Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society, Pages 869–889. https://doi.org/10.1145/3600211.3604692
Placing a human in the loop may help abate the risks of deploying AI systems in safety-critical settings (e.g., a clinician working with a medical AI system). However, mitigating risks arising from human error and uncertainty within such human-AI ...
- research article, July 2023
On the informativeness of supervision signals
- Ilia Sucholutsky,
- Ruairidh M. Battleday,
- Katherine M. Collins,
- Raja Marjieh,
- Joshua C. Peterson,
- Pulkit Singh,
- Umang Bhatt,
- Nori Jacoby,
- Adrian Weller,
- Thomas L. Griffiths
UAI '23: Proceedings of the Thirty-Ninth Conference on Uncertainty in Artificial Intelligence, Article No.: 191, Pages 2036–2046
Supervised learning typically focuses on learning transferable representations from training examples annotated by humans. While rich annotations (like soft labels) carry more information than sparse annotations (like hard labels), they are also more ...
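The contrast between sparse and rich annotations is easy to make concrete. The toy sketch below (an illustration, not the paper's analysis) shows that a hard label keeps only the annotator's top choice, while a soft label preserves the full graded belief a learner can train against.

```python
import numpy as np

# An annotator who thinks an image is probably a cat, possibly a dog,
# can report either annotation form:
soft_label = np.array([0.6, 0.3, 0.1])   # full graded belief over 3 classes
hard_label = np.array([1.0, 0.0, 0.0])   # argmax only; the rest is discarded

def cross_entropy(target, pred):
    # Loss a learner would minimize against this annotation.
    return -np.sum(target * np.log(pred))

model_pred = np.array([0.5, 0.4, 0.1])
print("loss vs soft label:", cross_entropy(soft_label, model_pred))
print("loss vs hard label:", cross_entropy(hard_label, model_pred))

# A one-hot label always has zero entropy; a soft label retains the
# annotator's graded uncertainty, which hard-labelling throws away.
print("entropy of soft label:", -np.sum(soft_label * np.log(soft_label)))
```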
- research article, July 2023
Human-in-the-loop mixup
- Katherine M. Collins,
- Umang Bhatt,
- Weiyang Liu,
- Vihari Piratla,
- Ilia Sucholutsky,
- Bradley Love,
- Adrian Weller
UAI '23: Proceedings of the Thirty-Ninth Conference on Uncertainty in Artificial Intelligence, Article No.: 43, Pages 454–464
Aligning model representations to humans has been found to improve robustness and generalization. However, such methods often focus on standard observational data. Synthetic data is proliferating and powering many advances in machine learning; yet, it is ...
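As background, standard mixup synthesizes training points by convexly combining pairs of examples and their one-hot labels. The sketch below shows only vanilla mixup (Zhang et al., 2018); the human elicitation over such synthetic points that the paper studies is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def mixup(x1, y1, x2, y2, alpha=0.4):
    # Draw a mixing weight from Beta(alpha, alpha) and take the same
    # convex combination of the inputs and of their one-hot labels.
    lam = rng.beta(alpha, alpha)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2

x1, y1 = np.array([1.0, 2.0]), np.array([1.0, 0.0])  # class A example
x2, y2 = np.array([3.0, 0.0]), np.array([0.0, 1.0])  # class B example
x_mix, y_mix = mixup(x1, y1, x2, y2)
print("mixed input:", x_mix)
print("mixed label:", y_mix)  # a soft label between the two classes
```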
- research article, June 2023
Harms from Increasingly Agentic Algorithmic Systems
- Alan Chan,
- Rebecca Salganik,
- Alva Markelius,
- Chris Pang,
- Nitarshan Rajkumar,
- Dmitrii Krasheninnikov,
- Lauro Langosco,
- Zhonghao He,
- Yawen Duan,
- Micah Carroll,
- Michelle Lin,
- Alex Mayhew,
- Katherine Collins,
- Maryam Molamohammadi,
- John Burden,
- Wanru Zhao,
- Shalaleh Rismani,
- Konstantinos Voudouris,
- Umang Bhatt,
- Adrian Weller,
- David Krueger,
- Tegan Maharaj
FAccT '23: Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, Pages 651–666. https://doi.org/10.1145/3593013.3594033
Research in Fairness, Accountability, Transparency, and Ethics (FATE) has established many sources and forms of algorithmic harm, in domains as diverse as health care, finance, policing, and recommendations. Much work remains to be done to mitigate the ...
- research article, February 2023
Approximating full conformal prediction at scale via influence functions
AAAI'23/IAAI'23/EAAI'23: Proceedings of the Thirty-Seventh AAAI Conference on Artificial Intelligence and Thirty-Fifth Conference on Innovative Applications of Artificial Intelligence and Thirteenth Symposium on Educational Advances in Artificial Intelligence, Article No.: 745, Pages 6631–6639. https://doi.org/10.1609/aaai.v37i6.25814
Conformal prediction (CP) is a wrapper around traditional machine learning models, giving coverage guarantees under the sole assumption of exchangeability; in classification problems, a CP guarantees that the error rate is at most a chosen significance ...
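For context, the sketch below shows the standard split (inductive) conformal baseline rather than the full-CP approximation the paper develops: calibrate a nonconformity threshold on held-out data, then return every class whose score clears it. The classifier probabilities here are simulated.

```python
import numpy as np

rng = np.random.default_rng(0)

def conformal_sets(cal_scores, test_probs, alpha=0.1):
    # Threshold = conservative empirical quantile of calibration scores,
    # so prediction sets cover the true class with probability >= 1 - alpha.
    n = len(cal_scores)
    level = np.ceil((n + 1) * (1 - alpha)) / n
    q = np.quantile(cal_scores, level, method="higher")
    # Keep every class whose nonconformity score is within the threshold.
    return [np.where(1 - p <= q)[0] for p in test_probs]

# Simulated calibration set: predicted probabilities and true labels.
cal_probs = rng.dirichlet(np.ones(3) * 5, size=200)
cal_labels = np.array([rng.choice(3, p=p) for p in cal_probs])
cal_scores = 1 - cal_probs[np.arange(200), cal_labels]  # 1 - p(true class)

test_probs = rng.dirichlet(np.ones(3) * 5, size=5)
for s in conformal_sets(cal_scores, test_probs):
    print("prediction set:", s)
```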
- research article, February 2023
Towards robust metrics for concept representation evaluation
- Mateo Espinosa Zarlenga,
- Pietro Barbiero,
- Zohreh Shams,
- Dmitry Kazhdan,
- Umang Bhatt,
- Adrian Weller,
- Mateja Jamnik
AAAI'23/IAAI'23/EAAI'23: Proceedings of the Thirty-Seventh AAAI Conference on Artificial Intelligence and Thirty-Fifth Conference on Innovative Applications of Artificial Intelligence and Thirteenth Symposium on Educational Advances in Artificial Intelligence, Article No.: 1323, Pages 11791–11799. https://doi.org/10.1609/aaai.v37i10.26392
Recent work on interpretability has focused on concept-based explanations, where deep learning models are explained in terms of high-level units of information, referred to as concepts. Concept learning models, however, have been shown to be prone to ...
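As background, a concept-based model routes its prediction through an interpretable bottleneck: it first predicts named, human-understandable concepts, then predicts the label from those concepts alone. The minimal forward pass below uses random placeholder weights purely for illustration; it does not implement the evaluation metrics the paper proposes.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

x = rng.normal(size=8)               # raw input features
W_concept = rng.normal(size=(3, 8))  # input -> 3 concepts (e.g., "striped")
W_label = rng.normal(size=(2, 3))    # concepts -> 2 class logits

concepts = sigmoid(W_concept @ x)    # interpretable intermediate layer
logits = W_label @ concepts          # label depends on x only via concepts
print("predicted concepts:", concepts.round(2))
print("class logits:", logits.round(2))
```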
- research article, July 2021
Uncertainty as a Form of Transparency: Measuring, Communicating, and Using Uncertainty
- Umang Bhatt,
- Javier Antorán,
- Yunfeng Zhang,
- Q. Vera Liao,
- Prasanna Sattigeri,
- Riccardo Fogliato,
- Gabrielle Melançon,
- Ranganath Krishnan,
- Jason Stanley,
- Omesh Tickoo,
- Lama Nachman,
- Rumi Chunara,
- Madhulika Srikumar,
- Adrian Weller,
- Alice Xiang
AIES '21: Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society, Pages 401–413. https://doi.org/10.1145/3461702.3462571
Algorithmic transparency entails exposing system properties to various stakeholders for purposes that include understanding, improving, and contesting predictions. Until now, most research into algorithmic transparency has predominantly focused on ...
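One concrete instance of measuring uncertainty, offered purely as an illustration (the paper surveys many estimation and communication methods beyond this one), is the entropy of a classifier's predictive distribution: near zero for a confident prediction, maximal for a uniform one.

```python
import numpy as np

def predictive_entropy(probs):
    # Shannon entropy (in nats) of a predictive distribution.
    probs = np.clip(probs, 1e-12, 1.0)
    return -np.sum(probs * np.log(probs), axis=-1)

confident = np.array([0.98, 0.01, 0.01])
uncertain = np.array([0.34, 0.33, 0.33])
print("entropy (confident):", predictive_entropy(confident))     # ~0.11
print("entropy (near-uniform):", predictive_entropy(uncertain))  # ~1.10
```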
- research article, January 2021
Evaluating and aggregating feature-based model explanations
IJCAI'20: Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, Article No.: 417, Pages 3016–3022
A feature-based model explanation denotes how much each input feature contributes to a model's output for a given data point. As the number of proposed explanation functions grows, we lack quantitative evaluation criteria to help practitioners know when ...
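To make the definition above concrete, the sketch below computes a simple occlusion-style attribution, one of many possible explanation functions and not necessarily one evaluated in the paper: each feature's contribution is the change in model output when that feature is replaced by a baseline value.

```python
import numpy as np

def occlusion_attributions(model, x, baseline):
    # Attribution for feature i = f(x) - f(x with feature i set to baseline).
    base_out = model(x)
    attributions = np.zeros_like(x)
    for i in range(len(x)):
        x_masked = x.copy()
        x_masked[i] = baseline[i]
        attributions[i] = base_out - model(x_masked)
    return attributions

# On a linear model the attributions recover w_i * (x_i - baseline_i).
w = np.array([2.0, -1.0, 0.5])
model = lambda v: float(w @ v)

x = np.array([1.0, 3.0, -2.0])
print("attributions:", occlusion_attributions(model, x, np.zeros(3)))
```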
- research article, January 2020
Explainable machine learning in deployment
- Umang Bhatt,
- Alice Xiang,
- Shubham Sharma,
- Adrian Weller,
- Ankur Taly,
- Yunhan Jia,
- Joydeep Ghosh,
- Ruchir Puri,
- José M. F. Moura,
- Peter Eckersley
FAT* '20: Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, Pages 648–657. https://doi.org/10.1145/3351095.3375624
Explainable machine learning offers the potential to provide stakeholders with insights into model behavior by using various methods such as feature importance scores, counterfactual explanations, or influential training data. Yet there is little ...
- research article, October 2019
A Robot’s Expressive Language Affects Human Strategy and Perceptions in a Competitive Game
- Aaron M. Roth,
- Samantha Reig,
- Umang Bhatt,
- Jonathan Shulgach,
- Tamara Amin,
- Afsaneh Doryab,
- Fei Fang,
- Manuela Veloso
2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), Pages 1–8. https://doi.org/10.1109/RO-MAN46459.2019.8956412
As robots are increasingly endowed with social and communicative capabilities, they will interact with humans in more settings, both collaborative and competitive. We explore human-robot relationships in the context of a competitive Stackelberg Security ...
- research article, January 2019
Building human-machine trust via interpretability
AAAI'19/IAAI'19/EAAI'19: Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence and Thirty-First Innovative Applications of Artificial Intelligence Conference and Ninth AAAI Symposium on Educational Advances in Artificial Intelligence, Article No.: 1258, Pages 9919–9920. https://doi.org/10.1609/aaai.v33i01.33019919
Developing human-machine trust is a prerequisite for the adoption of machine learning systems in decision-critical settings (e.g., healthcare and governance). Users develop appropriate trust in these systems when they understand how the systems make their ...