Qingzi Vera Liao
Person information
- affiliation: Microsoft Research, Canada
2020 – today
- 2024
- [j15] Q. Vera Liao, Mihaela Vorvoreanu, Hari Subramonyam, Lauren Wilcox: UX Matters: The Critical Role of UX in Responsible AI. Interactions 31(4): 22-27 (2024)
- [j14] Upol Ehsan, Q. Vera Liao, Samir Passi, Mark O. Riedl, Hal Daumé III: Seamful XAI: Operationalizing Seamful Design in Explainable AI. Proc. ACM Hum. Comput. Interact. 8(CSCW1): 1-29 (2024)
- [c72] Yu Lu Liu, Su Lin Blodgett, Jackie C. K. Cheung, Vera Liao, Alexandra Olteanu, Ziang Xiao: ECBD: Evidence-Centered Benchmark Design for NLP. ACL (1) 2024: 16349-16365
- [c71] Upol Ehsan, Samir Passi, Q. Vera Liao, Larry Chan, I-Hsiang Lee, Michael J. Muller, Mark O. Riedl: The Who in XAI: How AI Background Shapes Perceptions of AI Explanations. CHI 2024: 316:1-316:32
- [c70] Ziang Xiao, Wesley Hanwen Deng, Michelle S. Lam, Motahhare Eslami, Juho Kim, Mina Lee, Q. Vera Liao: Human-Centered Evaluation and Auditing of Language Models. CHI Extended Abstracts 2024: 476:1-476:6
- [c69] Nikhil Sharma, Q. Vera Liao, Ziang Xiao: Generative Echo Chamber? Effect of LLM-Powered Search Systems on Diverse Information Seeking. CHI 2024: 1033:1-1033:17
- [c68] Sunnie S. Y. Kim, Q. Vera Liao, Mihaela Vorvoreanu, Stephanie Ballard, Jennifer Wortman Vaughan: "I'm Not Sure, But...": Examining the Impact of Large Language Models' Uncertainty Expression on User Reliance and Trust. FAccT 2024: 822-835
- [i48] K. J. Kevin Feng, Q. Vera Liao, Ziang Xiao, Jennifer Wortman Vaughan, Amy X. Zhang, David W. McDonald: Canvil: Designerly Adaptation for LLM-Powered User Experiences. CoRR abs/2401.09051 (2024)
- [i47] Nikhil Sharma, Q. Vera Liao, Ziang Xiao: Generative Echo Chamber? Effects of LLM-Powered Search Systems on Diverse Information Seeking. CoRR abs/2402.05880 (2024)
- [i46] Sunnie S. Y. Kim, Q. Vera Liao, Mihaela Vorvoreanu, Stephanie Ballard, Jennifer Wortman Vaughan: "I'm Not Sure, But...": Examining the Impact of Large Language Models' Uncertainty Expression on User Reliance and Trust. CoRR abs/2405.00623 (2024)
- [i45] Yu Lu Liu, Su Lin Blodgett, Jackie Chi Kit Cheung, Q. Vera Liao, Alexandra Olteanu, Ziang Xiao: ECBD: Evidence-Centered Benchmark Design for NLP. CoRR abs/2406.08723 (2024)
- 2023
- [j13] Valerie Chen, Q. Vera Liao, Jennifer Wortman Vaughan, Gagan Bansal: Understanding the Role of Human Intuition on Reliance in Human-AI Decision-Making with Explanations. Proc. ACM Hum. Comput. Interact. 7(CSCW2): 1-32 (2023)
- [j12] Anna Kawakami, Shreya Chowdhary, Shamsi T. Iqbal, Q. Vera Liao, Alexandra Olteanu, Jina Suh, Koustuv Saha: Sensing Wellbeing in the Workplace, Why and For Whom? Envisioning Impacts with Organizational Stakeholders. Proc. ACM Hum. Comput. Interact. 7(CSCW2): 1-33 (2023)
- [j11] Vivian Lai, Yiming Zhang, Chacha Chen, Q. Vera Liao, Chenhao Tan: Selective Explanations: Leveraging Human Input to Align Explainable AI. Proc. ACM Hum. Comput. Interact. 7(CSCW2): 1-35 (2023)
- [c67] Q. Vera Liao, Hariharan Subramonyam, Jennifer Wang, Jennifer Wortman Vaughan: Designerly Understanding: Information Needs for Model Transparency to Support Design Ideation for AI-Powered User Experience. CHI 2023: 9:1-9:21
- [c66] Steven Moore, Q. Vera Liao, Hariharan Subramonyam: fAIlureNotes: Supporting Designers in Understanding the Limits of AI Models for Computer Vision Tasks. CHI 2023: 10:1-10:19
- [c65] Michael J. Muller, Lydia B. Chilton, Anna Kantosalo, Q. Vera Liao, Mary Lou Maher, Charles Patrick Martin, Greg Walsh: GenAICHI 2023: Generative AI and HCI at CHI 2023. CHI Extended Abstracts 2023: 350:1-350:7
- [c64] Mohammad Tahaei, Marios Constantinides, Daniele Quercia, Sean Kennedy, Michael J. Muller, Simone Stumpf, Q. Vera Liao, Ricardo Baeza-Yates, Lora Aroyo, Jess Holbrook, Ewa Luger, Michael Madaio, Ilana Golbin Blumenfeld, Maria De-Arteaga, Jessica Vitak, Alexandra Olteanu: Human-Centered Responsible Artificial Intelligence: Current & Future Trends. CHI Extended Abstracts 2023: 515:1-515:4
- [c63] Daniel M. Russell, Q. Vera Liao, Chinmay Kulkarni, Elena L. Glassman, Nikolas Martelaro: Human-Computer Interaction and AI: What practitioners need to know to design and build effective AI system from a human perspective. CHI Extended Abstracts 2023: 545:1-545:3
- [c62] Chinmay Kulkarni, Tongshuang Wu, Kenneth Holstein, Q. Vera Liao, Min Kyung Lee, Mina Lee, Hariharan Subramonyam: LLMs and the Infrastructure of CSCW. CSCW Companion 2023: 408-410
- [c61] Ziang Xiao, Susu Zhang, Vivian Lai, Q. Vera Liao: Evaluating Evaluation Metrics: A Framework for Analyzing NLG Evaluation Metrics using Measurement Theory. EMNLP 2023: 10967-10982
- [c60] Vivian Lai, Chacha Chen, Alison Smith-Renner, Q. Vera Liao, Chenhao Tan: Towards a Science of Human-AI Decision Making: An Overview of Design Space in Empirical Human-Subject Studies. FAccT 2023: 1369-1385
- [c59] Ziang Xiao, Q. Vera Liao, Michelle X. Zhou, Tyrone Grandison, Yunyao Li: Powering an AI Chatbot with Expert Sourcing to Support Credible Health Information Access. IUI 2023: 2-18
- [c58] Ziang Xiao, Xingdi Yuan, Q. Vera Liao, Rania Abdelghani, Pierre-Yves Oudeyer: Supporting Qualitative Analysis with Large Language Models: Combining Codebook with GPT-3 for Deductive Coding. IUI Companion 2023: 75-78
- [c57] Snehal Prabhudesai, Leyao Yang, Sumit Asthana, Xun Huan, Q. Vera Liao, Nikola Banovic: Understanding Uncertainty: How Lay Decision-makers Perceive and Interpret Uncertainty in Human-AI Decision Making. IUI 2023: 379-396
- [e2] Casey Fiesler, Loren G. Terveen, Morgan Ames, Susan R. Fussell, Eric Gilbert, Vera Liao, Xiaojuan Ma, Xinru Page, Mark Rouncefield, Vivek Singh, Pamela J. Wisniewski: Computer Supported Cooperative Work and Social Computing, CSCW 2023, Minneapolis, MN, USA, October 14-18, 2023. ACM 2023 [contents]
- [i44] Valerie Chen, Q. Vera Liao, Jennifer Wortman Vaughan, Gagan Bansal: Understanding the Role of Human Intuition on Reliance in Human-AI Decision-Making with Explanations. CoRR abs/2301.07255 (2023)
- [i43] Vivian Lai, Yiming Zhang, Chacha Chen, Q. Vera Liao, Chenhao Tan: Selective Explanations: Leveraging Human Input to Align Explainable AI. CoRR abs/2301.09656 (2023)
- [i42] Ziang Xiao, Q. Vera Liao, Michelle X. Zhou, Tyrone Grandison, Yunyao Li: Powering an AI Chatbot with Expert Sourcing to Support Credible Health Information Access. CoRR abs/2301.10710 (2023)
- [i41] Helena Vasconcelos, Gagan Bansal, Adam Fourney, Q. Vera Liao, Jennifer Wortman Vaughan: Generation Probabilities Are Not Enough: Exploring the Effectiveness of Uncertainty Highlighting in AI-Powered Code Completions. CoRR abs/2302.07248 (2023)
- [i40] Mohammad Tahaei, Marios Constantinides, Daniele Quercia, Sean Kennedy, Michael J. Muller, Simone Stumpf, Q. Vera Liao, Ricardo Baeza-Yates, Lora Aroyo, Jess Holbrook, Ewa Luger, Michael Madaio, Ilana Golbin Blumenfeld, Maria De-Arteaga, Jessica Vitak, Alexandra Olteanu: Human-Centered Responsible Artificial Intelligence: Current & Future Trends. CoRR abs/2302.08157 (2023)
- [i39] Q. Vera Liao, Hariharan Subramonyam, Jennifer Wang, Jennifer Wortman Vaughan: Designerly Understanding: Information Needs for Model Transparency to Support Design Ideation for AI-Powered User Experience. CoRR abs/2302.10395 (2023)
- [i38] Steven Moore, Q. Vera Liao, Hariharan Subramonyam: fAIlureNotes: Supporting Designers in Understanding the Limits of AI Models for Computer Vision Tasks. CoRR abs/2302.11703 (2023)
- [i37] Anna Kawakami, Shreya Chowdhary, Shamsi T. Iqbal, Q. Vera Liao, Alexandra Olteanu, Jina Suh, Koustuv Saha: Sensing Wellbeing in the Workplace, Why and For Whom? Envisioning Impacts with Organizational Stakeholders. CoRR abs/2303.06794 (2023)
- [i36] Haotian Li, Yun Wang, Q. Vera Liao, Huamin Qu: Why is AI not a Panacea for Data Workers? An Interview Study on Human-AI Collaboration in Data Storytelling. CoRR abs/2304.08366 (2023)
- [i35] Ziang Xiao, Xingdi Yuan, Q. Vera Liao, Rania Abdelghani, Pierre-Yves Oudeyer: Supporting Qualitative Analysis with Large Language Models: Combining Codebook with GPT-3 for Deductive Coding. CoRR abs/2304.10548 (2023)
- [i34] Ziang Xiao, Susu Zhang, Vivian Lai, Q. Vera Liao: Evaluating NLG Evaluation Metrics: A Measurement Theory Perspective. CoRR abs/2305.14889 (2023)
- [i33] Q. Vera Liao, Jennifer Wortman Vaughan: AI Transparency in the Age of LLMs: A Human-Centered Research Roadmap. CoRR abs/2306.01941 (2023)
- [i32] Q. Vera Liao, Ziang Xiao: Rethinking Model Evaluation as Narrowing the Socio-Technical Gap. CoRR abs/2306.03100 (2023)
- 2022
- [j10] Mingming Fan, Xianyou Yang, Tsz Tung Yu, Vera Q. Liao, Jian Zhao: Human-AI Collaboration for UX Evaluation: Effects of Explanation and Synchronization. Proc. ACM Hum. Comput. Interact. 6(CSCW1): 96:1-96:32 (2022)
- [j9] Ahmet Baki Kocaballi, Liliana Laranjo, Leigh Clark, Rafal Kocielnik, Robert J. Moore, Q. Vera Liao, Timothy W. Bickmore: Special Issue on Conversational Agents for Healthcare and Wellbeing. ACM Trans. Interact. Intell. Syst. 12(2): 9:1-9:3 (2022)
- [c56] Vijay Arya, Rachel K. E. Bellamy, Pin-Yu Chen, Amit Dhurandhar, Michael Hind, Samuel C. Hoffman, Stephanie Houde, Q. Vera Liao, Ronny Luss, Aleksandra Mojsilovic, Sami Mourad, Pablo Pedemonte, Ramya Raghavendra, John T. Richards, Prasanna Sattigeri, Karthikeyan Shanmugam, Moninder Singh, Kush R. Varshney, Dennis Wei, Yunfeng Zhang: AI Explainability 360: Impact and Design. AAAI 2022: 12651-12657
- [c55] Vivian Lai, Samuel Carton, Rajat Bhatnagar, Q. Vera Liao, Yunfeng Zhang, Chenhao Tan: Human-AI Collaboration via Conditional Delegation: A Case Study of Content Moderation. CHI 2022: 54:1-54:18
- [c54] Upol Ehsan, Philipp Wintersberger, Q. Vera Liao, Elizabeth Anne Watkins, Carina Manger, Hal Daumé III, Andreas Riener, Mark O. Riedl: Human-Centered Explainable AI (HCXAI): Beyond Opening the Black-Box of AI. CHI Extended Abstracts 2022: 109:1-109:7
- [c53] Su Lin Blodgett, Q. Vera Liao, Alexandra Olteanu, Rada Mihalcea, Michael J. Muller, Morgan Klaus Scheuerman, Chenhao Tan, Qian Yang: Responsible Language Technologies: Foreseeing and Mitigating Harms. CHI Extended Abstracts 2022: 152:1-152:3
- [c52] Soumya Ghosh, Q. Vera Liao, Karthikeyan Natesan Ramamurthy, Jirí Navrátil, Prasanna Sattigeri, Kush R. Varshney, Yunfeng Zhang: Uncertainty Quantification 360: A Hands-on Tutorial. COMAD/CODS 2022: 333-335
- [c51] Q. Vera Liao, S. Shyam Sundar: Designing for Responsible Trust in AI Systems: A Communication Perspective. FAccT 2022: 1257-1268
- [c50] Q. Vera Liao, Yunfeng Zhang, Ronny Luss, Finale Doshi-Velez, Amit Dhurandhar: Connecting Algorithmic Research and Usage Contexts: A Perspective of Contextualized Evaluation for Explainable AI. HCOMP 2022: 147-159
- [c49] Jiao Sun, Q. Vera Liao, Michael J. Muller, Mayank Agarwal, Stephanie Houde, Kartik Talamadupula, Justin D. Weisz: Investigating Explainability of Generative AI for Code through Scenario-based Design. IUI 2022: 212-228
- [e1] Gary Hsieh, Anthony Tang, Morgan G. Ames, Sharon Ding, Susan R. Fussell, Vera Liao, Andrés Monroy-Hernández, Sean Munson, Irina Shklovski, John Tang: Companion Computer Supported Cooperative Work and Social Computing, CSCW 2022, Virtual Event, Taiwan, November 8-22, 2022. ACM 2022, ISBN 978-1-4503-9190-0 [contents]
- [i31] Jiao Sun, Q. Vera Liao, Michael J. Muller, Mayank Agarwal, Stephanie Houde, Kartik Talamadupula, Justin D. Weisz: Investigating Explainability of Generative AI for Code through Scenario-based Design. CoRR abs/2202.04903 (2022)
- [i30] Vivian Lai, Samuel Carton, Rajat Bhatnagar, Q. Vera Liao, Yunfeng Zhang, Chenhao Tan: Human-AI Collaboration via Conditional Delegation: A Case Study of Content Moderation. CoRR abs/2204.11788 (2022)
- [i29] Q. Vera Liao, S. Shyam Sundar: Designing for Responsible Trust in AI Systems: A Communication Perspective. CoRR abs/2204.13828 (2022)
- [i28] Q. Vera Liao, Yunfeng Zhang, Ronny Luss, Finale Doshi-Velez, Amit Dhurandhar: Connecting Algorithmic Research and Usage Contexts: A Perspective of Contextualized Evaluation for Explainable AI. CoRR abs/2206.10847 (2022)
- [i27] Ana Lucic, Sheeraz Ahmad, Amanda Furtado Brinhosa, Vera Liao, Himani Agrawal, Umang Bhatt, Krishnaram Kenthapadi, Alice Xiang, Maarten de Rijke, Nicholas Drabowski: Towards the Use of Saliency Maps for Explaining Low-Quality Electrocardiograms to End Users. CoRR abs/2207.02726 (2022)
- [i26] Upol Ehsan, Q. Vera Liao, Samir Passi, Mark O. Riedl, Hal Daumé III: Seamful XAI: Operationalizing Seamful Design in Explainable AI. CoRR abs/2211.06753 (2022)
- 2021
- [j8] Daricia Wilkinson, Öznur Alkan, Q. Vera Liao, Massimiliano Mattetti, Inge Vejsbjerg, Bart P. Knijnenburg, Elizabeth Daly: Why or Why Not? The Effect of Justification Styles on Chatbot Recommendations. ACM Trans. Inf. Syst. 39(4): 42:1-42:21 (2021)
- [c48] Kshitij P. Fadnis, Pankaj Dhoolia, Li Zhu, Q. Vera Liao, Steven Ross, Nathaniel Mills, Sachindra Joshi, Luis A. Lastras: Doc2Bot: Document grounded Bot Framework. AAAI 2021: 16026-16028
- [c47] Umang Bhatt, Javier Antorán, Yunfeng Zhang, Q. Vera Liao, Prasanna Sattigeri, Riccardo Fogliato, Gabrielle Gauthier Melançon, Ranganath Krishnan, Jason Stanley, Omesh Tickoo, Lama Nachman, Rumi Chunara, Madhulika Srikumar, Adrian Weller, Alice Xiang: Uncertainty as a Form of Transparency: Measuring, Communicating, and Using Uncertainty. AIES 2021: 401-413
- [c46] Upol Ehsan, Q. Vera Liao, Michael J. Muller, Mark O. Riedl, Justin D. Weisz: Expanding Explainability: Towards Social Transparency in AI systems. CHI 2021: 82:1-82:19
- [c45] Upol Ehsan, Philipp Wintersberger, Q. Vera Liao, Martina Mara, Marc Streit, Sandra Wachter, Andreas Riener, Mark O. Riedl: Operationalizing Human-Centered Perspectives in Explainable AI. CHI Extended Abstracts 2021: 94:1-94:6
- [c44] Q. Vera Liao, Moninder Singh, Yunfeng Zhang, Rachel K. E. Bellamy: Introduction to Explainable AI. CHI Extended Abstracts 2021: 127:1-127:3
- [c43] Yunfeng Zhang, Rachel K. E. Bellamy, Q. Vera Liao, Moninder Singh: Introduction to AI Fairness. CHI Extended Abstracts 2021: 128:1-128:3
- [c42] Vijay Arya, Rachel K. E. Bellamy, Pin-Yu Chen, Amit Dhurandhar, Michael Hind, Samuel C. Hoffman, Stephanie Houde, Q. Vera Liao, Ronny Luss, Aleksandra Mojsilovic, Sami Mourad, Pablo Pedemonte, Ramya Raghavendra, John T. Richards, Prasanna Sattigeri, Karthikeyan Shanmugam, Moninder Singh, Kush R. Varshney, Dennis Wei, Yunfeng Zhang: AI Explainability 360 Toolkit. COMAD/CODS 2021: 376-379
- [c41] Shweta Narkar, Yunfeng Zhang, Q. Vera Liao, Dakuo Wang, Justin D. Weisz: Model LineUpper: Supporting Interactive Model Comparison at Multiple Levels for AutoML. IUI 2021: 170-174
- [c40] Soya Park, April Yi Wang, Ban Kawas, Q. Vera Liao, David Piorkowski, Marina Danilevsky: Facilitating Knowledge Sharing from Domain Experts to Data Scientists for Building NLP Models. IUI 2021: 585-596
- [c39] Q. Vera Liao: Question-Driven eXplainable AI: Re-framing the Technical and Design Spaces of XAI. DaSH@KDD 2021
- [i25] Dakuo Wang, Q. Vera Liao, Yunfeng Zhang, Udayan Khurana, Horst Samulowitz, Soya Park, Michael J. Muller, Lisa Amini: How Much Automation Does a Data Scientist Want? CoRR abs/2101.03970 (2021)
- [i24] Upol Ehsan, Q. Vera Liao, Michael J. Muller, Mark O. Riedl, Justin D. Weisz: Expanding Explainability: Towards Social Transparency in AI systems. CoRR abs/2101.04719 (2021)
- [i23] Soya Park, April Yi Wang, Ban Kawas, Q. Vera Liao, David Piorkowski, Marina Danilevsky: Facilitating Knowledge Sharing from Domain Experts to Data Scientists for Building NLP Models. CoRR abs/2102.00036 (2021)
- [i22] Ana Lucic, Madhulika Srikumar, Umang Bhatt, Alice Xiang, Ankur Taly, Q. Vera Liao, Maarten de Rijke: A Multistakeholder Approach Towards Evaluating AI Transparency Mechanisms. CoRR abs/2103.14976 (2021)
- [i21] Q. Vera Liao, Milena Pribic, Jaesik Han, Sarah Miller, Daby Sow: Question-Driven Design Process for Explainable AI User Experiences. CoRR abs/2104.03483 (2021)
- [i20] Shweta Narkar, Yunfeng Zhang, Q. Vera Liao, Dakuo Wang, Justin D. Weisz: Model LineUpper: Supporting Interactive Model Comparison at Multiple Levels for AutoML. CoRR abs/2104.04375 (2021)
- [i19] Soumya Ghosh, Q. Vera Liao, Karthikeyan Natesan Ramamurthy, Jirí Navrátil, Prasanna Sattigeri, Kush R. Varshney, Yunfeng Zhang: Uncertainty Quantification 360: A Holistic Toolkit for Quantifying and Communicating the Uncertainty of AI. CoRR abs/2106.01410 (2021)
- [i18] Upol Ehsan, Samir Passi, Q. Vera Liao, Larry Chan, I-Hsiang Lee, Michael J. Muller, Mark O. Riedl: The Who in Explainable AI: How AI Background Shapes Perceptions of AI Explanations. CoRR abs/2107.13509 (2021)
- [i17] Vijay Arya, Rachel K. E. Bellamy, Pin-Yu Chen, Amit Dhurandhar, Michael Hind, Samuel C. Hoffman, Stephanie Houde, Q. Vera Liao, Ronny Luss, Aleksandra Mojsilovic, Sami Mourad, Pablo Pedemonte, Ramya Raghavendra, John T. Richards, Prasanna Sattigeri, Karthikeyan Shanmugam, Moninder Singh, Kush R. Varshney, Dennis Wei, Yunfeng Zhang: AI Explainability 360: Impact and Design. CoRR abs/2109.12151 (2021)
- [i16] Q. Vera Liao, Kush R. Varshney: Human-Centered Explainable AI (XAI): From Algorithms to User Experiences. CoRR abs/2110.10790 (2021)
- [i15] Vivian Lai, Chacha Chen, Q. Vera Liao, Alison Smith-Renner, Chenhao Tan: Towards a Science of Human-AI Decision Making: A Survey of Empirical Studies. CoRR abs/2112.11471 (2021)
- [i14] Mingming Fan, Xianyou Yang, Tsz Tung Yu, Vera Q. Liao, Jian Zhao: Human-AI Collaboration for UX Evaluation: Effects of Explanation and Synchronization. CoRR abs/2112.12387 (2021)
- 2020
- [j7] Vijay Arya, Rachel K. E. Bellamy, Pin-Yu Chen, Amit Dhurandhar, Michael Hind, Samuel C. Hoffman, Stephanie Houde, Q. Vera Liao, Ronny Luss, Aleksandra Mojsilovic, Sami Mourad, Pablo Pedemonte, Ramya Raghavendra, John T. Richards, Prasanna Sattigeri, Karthikeyan Shanmugam, Moninder Singh, Kush R. Varshney, Dennis Wei, Yunfeng Zhang: AI Explainability 360: An Extensible Toolkit for Understanding Data and Machine Learning Models. J. Mach. Learn. Res. 21: 130:1-130:6 (2020)
- [j6] Zahra Ashktorab, Q. Vera Liao, Casey Dugan, James Johnson, Qian Pan, Wei Zhang, Sadhana Kumaravel, Murray Campbell: Human-AI Collaboration in a Cooperative Game Setting: Measuring Social Perception and Outcomes. Proc. ACM Hum. Comput. Interact. 4(CSCW2): 96:1-96:20 (2020)
- [j5] Bhavya Ghai, Q. Vera Liao, Yunfeng Zhang, Rachel K. E. Bellamy, Klaus Mueller: Explainable Active Learning (XAL): Toward AI Explanations as Interfaces for Machine Teachers. Proc. ACM Hum. Comput. Interact. 4(CSCW3): 1-28 (2020)
- [j4] Ziang Xiao, Michelle X. Zhou, Q. Vera Liao, Gloria Mark, Chang Yan Chi, Wenxi Chen, Huahai Yang: Tell Me About Yourself: Using an AI-Powered Chatbot to Conduct Conversational Surveys with Open-ended Questions. ACM Trans. Comput. Hum. Interact. 27(3): 15:1-15:37 (2020)
- [c38] Song Feng, Kshitij P. Fadnis, Q. Vera Liao, Luis A. Lastras: Doc2Dial: A Framework for Dialogue Composition Grounded in Documents. AAAI 2020: 13604-13605
- [c37] Ahmet Baki Kocaballi, Juan C. Quiroz, Liliana Laranjo, Dana Rezazadegan, Rafal Kocielnik, Leigh Clark, Q. Vera Liao, Sun Young Park, Robert J. Moore, Adam S. Miner: Conversational Agents for Health and Wellbeing. CHI Extended Abstracts 2020: 1-8
- [c36] Q. Vera Liao, Daniel M. Gruen, Sarah Miller: Questioning the AI: Informing Design Practices for Explainable AI User Experiences. CHI 2020: 1-15
- [c35] Q. Vera Liao, Moninder Singh, Yunfeng Zhang, Rachel K. E. Bellamy: Introduction to Explainable AI. CHI Extended Abstracts 2020: 1-4
- [c34] Yunfeng Zhang, Rachel K. E. Bellamy, Moninder Singh, Q. Vera Liao: Introduction to AI Fairness. CHI Extended Abstracts 2020: 1-4
- [c33] Kshitij Fadnis, Nathaniel Mills, Jatin Ganhotra, Haggai Roitman, Gaurav Pandey, Doron Cohen, Yosi Mass, Shai Erera, R. Chulaka Gunasekara, Danish Contractor, Siva Sankalp Patel, Q. Vera Liao, Sachindra Joshi, Luis A. Lastras, David Konopnicki: Agent Assist through Conversation Analysis. EMNLP (Demos) 2020: 151-157
- [c32] Yunfeng Zhang, Q. Vera Liao, Rachel K. E. Bellamy: Effect of confidence and explanation on accuracy and trust calibration in AI-assisted decision making. FAT* 2020: 295-305
- [c31] Vijay Arya, Rachel K. E. Bellamy, Pin-Yu Chen, Amit Dhurandhar, Michael Hind, Samuel C. Hoffman, Stephanie Houde, Q. Vera Liao, Ronny Luss, Aleksandra Mojsilovic, Sami Mourad, Pablo Pedemonte, Ramya Raghavendra, John T. Richards, Prasanna Sattigeri, Karthikeyan Shanmugam, Moninder Singh, Kush R. Varshney, Dennis Wei, Yunfeng Zhang: AI explainability 360: hands-on tutorial. FAT* 2020: 696
- [c30] Stephanie Houde, Vera Liao, Jacquelyn Martino, Michael J. Muller, David Piorkowski, John T. Richards, Justin D. Weisz, Yunfeng Zhang: Business (mis)Use Cases of Generative AI. HAI-GEN+user2agent@IUI 2020
- [c29] Michal Shmueli-Scheuer, Ron Artstein, Yasaman Khazaeni, Hao Fang, Q. Vera Liao: user2agent: 2nd Workshop on User-Aware Conversational Agents. IUI Companion 2020: 9-10
- [c28] Bhavya Ghai, Q. Vera Liao, Yunfeng Zhang, Klaus Mueller: Active Learning++: Incorporating Annotator's Rationale using Local Model Explanation. DaSH@KDD 2020
- [i13] Yunfeng Zhang, Q. Vera Liao, Rachel K. E. Bellamy: Effect of Confidence and Explanation on Accuracy and Trust Calibration in AI-Assisted Decision Making. CoRR abs/2001.02114 (2020)
- [i12] Q. Vera Liao, Daniel M. Gruen, Sarah Miller: Questioning the AI: Informing Design Practices for Explainable AI User Experiences. CoRR abs/2001.02478 (2020)
- [i11] Bhavya Ghai, Q. Vera Liao, Yunfeng Zhang, Rachel K. E. Bellamy, Klaus Mueller: Explainable Active Learning (XAL): An Empirical Study of How Local Explanations Impact Annotator Experience. CoRR abs/2001.09219 (2020)
- [i10] Stephanie Houde, Vera Liao, Jacquelyn Martino, Michael J. Muller, David Piorkowski, John T. Richards, Justin D. Weisz, Yunfeng Zhang: Business (mis)Use Cases of Generative AI. CoRR abs/2003.07679 (2020)
- [i9] Bhavya Ghai, Q. Vera Liao, Yunfeng Zhang, Klaus Mueller: Measuring Social Biases of Crowd Workers using Counterfactual Queries. CoRR abs/2004.02028 (2020)
- [i8] Bhavya Ghai, Q. Vera Liao, Yunfeng Zhang, Klaus Mueller: Active Learning++: Incorporating Annotator's Rationale using Local Model Explanation. CoRR abs/2009.04568 (2020)
- [i7] Umang Bhatt, Yunfeng Zhang, Javier Antorán, Q. Vera Liao, Prasanna Sattigeri, Riccardo Fogliato, Gabrielle Gauthier Melançon, Ranganath Krishnan, Jason Stanley, Omesh Tickoo, Lama Nachman, Rumi Chunara, Adrian Weller, Alice Xiang: Uncertainty as a Form of Transparency: Measuring, Communicating, and Using Uncertainty. CoRR abs/2011.07586 (2020)
2010 – 2019
- 2019
- [c27] Neil Mallinar, Abhishek Shah, Rajendra Ugrani, Ayush Gupta, Manikandan Gurusankar, Tin Kam Ho, Q. Vera Liao, Yunfeng Zhang, Rachel K. E. Bellamy, Robert Yates, Chris Desmarais, Blake McGregor: Bootstrapping Conversational Agents with Weak Supervision. AAAI 2019: 9528-9533
- [c26] Michael J. Muller, Ingrid Lange, Dakuo Wang, David Piorkowski, Jason Tsay, Q. Vera Liao, Casey Dugan, Thomas Erickson: How Data Science Workers Work with Data: Discovery, Capture, Curation, Design, Creation. CHI 2019: 126
- [c25] Zahra Ashktorab, Mohit Jain, Q. Vera Liao, Justin D. Weisz: Resilient Chatbots: Repair Strategy Preferences for Conversational Breakdowns. CHI 2019: 254
- [c24] Q. Vera Liao, Yi-Chia Wang, Timothy W. Bickmore, Pascale Fung, Jonathan Grudin, Zhou Yu, Michelle X. Zhou: Human-Agent Communication: Connecting Research and Development in HCI and AI. CSCW Companion 2019: 122-126
- [c23] Q. Vera Liao, Michal Shmueli-Scheuer, Tsung-Hsien (Shawn) Wen, Zhou Yu: User-aware conversational agents. IUI Companion 2019: 133-134
- [c22] Jonathan Dodge, Q. Vera Liao, Yunfeng Zhang, Rachel K. E. Bellamy, Casey Dugan: Explaining models: an empirical study of how explanations impact fairness judgment. IUI 2019: 275-285
- [i6] Jonathan Dodge, Q. Vera Liao, Yunfeng Zhang, Rachel K. E. Bellamy, Casey Dugan: Explaining Models: An Empirical Study of How Explanations Impact Fairness Judgment. CoRR abs/1901.07694 (2019)
- [i5] Ziang Xiao, Michelle X. Zhou, Q. Vera Liao, Gloria Mark, Chang Yan Chi, Wenxi Chen, Huahai Yang: Tell Me About Yourself: Using an AI-Powered Chatbot to Conduct Conversational Surveys. CoRR abs/1905.10700 (2019)
- [i4] Vijay Arya, Rachel K. E. Bellamy, Pin-Yu Chen, Amit Dhurandhar, Michael Hind, Samuel C. Hoffman, Stephanie Houde, Q. Vera Liao, Ronny Luss, Aleksandra Mojsilovic, Sami Mourad, Pablo Pedemonte, Ramya Raghavendra, John T. Richards, Prasanna Sattigeri, Karthikeyan Shanmugam, Moninder Singh, Kush R. Varshney, Dennis Wei, Yunfeng Zhang: One Explanation Does Not Fit All: A Toolkit and Taxonomy of AI Explainability Techniques. CoRR abs/1909.03012 (2019)
- [i3] Q. Vera Liao, Michael J. Muller: Enabling Value Sensitive AI Systems through Participatory Design Fictions. CoRR abs/1912.07381 (2019)
- 2018
- [j3] Rajan Vaish, Q. Vera Liao, Victoria Bellotti: What's in it for me? Self-serving versus other-oriented framing in messages advocating use of prosocial peer-to-peer services. Int. J. Hum. Comput. Stud. 109: 1-12 (2018)
- [j2] Mohit Jain, Pratyush Kumar, Ishita Bhansali, Q. Vera Liao, Khai N. Truong, Shwetak N. Patel: FarmChat: A Conversational Agent to Answer Farmer Queries. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 2(4): 170:1-170:22 (2018)
- [c21] Q. Vera Liao, Muhammed Mas-ud Hussain, Praveen Chandar, Matthew Davis, Yasaman Khazaeni, Marco Patricio Crasso, Dakuo Wang, Michael J. Muller, N. Sadat Shami, Werner Geyer: All Work and No Play? CHI 2018: 3
- [c20] Ameneh Shamekhi, Q. Vera Liao, Dakuo Wang, Rachel K. E. Bellamy, Thomas Erickson: Face Value? CHI 2018: 391
- [c19] Yunfeng Zhang, Q. Vera Liao, Biplav Srivastava: Towards an Optimal Dialog Strategy for Information Retrieval Using Both Open- and Close-ended Questions. IUI 2018: 365-369
- [i2] Neil Mallinar, Abhishek Shah, Rajendra Ugrani, Ayush Gupta, Manikandan Gurusankar, Tin Kam Ho, Q. Vera Liao, Yunfeng Zhang, Rachel K. E. Bellamy, Robert Yates, Chris Desmarais, Blake McGregor: Bootstrapping Conversational Agents With Weak Supervision. CoRR abs/1812.06176 (2018)
- 2017
- [c18] Praveen Chandar, Yasaman Khazaeni, Matthew Davis, Michael J. Muller, Marco Crasso, Q. Vera Liao, N. Sadat Shami, Werner Geyer: Leveraging Conversational Systems to Assists New Hires During Onboarding. INTERACT (2) 2017: 381-391
- [i1] Q. Vera Liao, Biplav Srivastava, Pavan Kapanipathi: A Measure for Dialog Complexity and its Application in Streamlining Service Operations. CoRR abs/1708.04134 (2017)
- 2016
- [c17] Q. Vera Liao, Matthew Davis, Werner Geyer, Michael J. Muller, N. Sadat Shami: What Can You Do?: Studying Social-Agent Orientation and Agent Proactive Interactions with an Agent for Employees. Conference on Designing Interactive Systems 2016: 264-275
- [c16] Qingzi Vera Liao, Wai-Tat Fu, Markus Strohmaier: #Snowden: Understanding Biases Introduced by Behavioral Differences of Opinion Groups on Social Media. CHI 2016: 3352-3363
- [c15] Q. Vera Liao, Victoria Bellotti, Michael Youngblood: Improvising Harmony: Opportunities for Technologies to Support Crowd Orchestration. GROUP 2016: 159-169
- 2015
- [c14] Qingzi Vera Liao, Wai-Tat Fu, Sri Shilpa Mamidi: It Is All About Perspective: An Exploration of Mitigating Selective Exposure with Aspect Indicators. CHI 2015: 1439-1448
- 2014
- [j1] Qingzi Vera Liao, Wai-Tat Fu: Age differences in credibility judgments of online health information. ACM Trans. Comput. Hum. Interact. 21(1): 2:1-2:23 (2014)
- [c13] Qingzi Vera Liao, Wai-Tat Fu: Expert voices in echo chambers: effects of source expertise indicators on exposure to diverse opinions. CHI 2014: 2745-2754
- [c12] Qingzi Vera Liao, Wai-Tat Fu: Can you hear me now?: mitigating the echo chamber effect by source position indicators. CSCW 2014: 184-196
- 2013
- [c11] Jason H. D. Cho, Q. Vera Liao, Yunliang Jiang, Bruce R. Schatz: Aggregating Personal Health Messages for Scalable Comparative Effectiveness Research. BCB 2013: 907
- [c10] Qingzi Vera Liao, Wai-Tat Fu: Beyond the filter bubble: interactive effects of perceived threat and topic involvement on selective exposure to information. CHI 2013: 2359-2368
- 2012
- [c9] Qingzi Vera Liao, Claudia Wagner, Peter Pirolli, Wai-Tat Fu: Understanding experts' and novices' expertise judgment of twitter users. CHI 2012: 2461-2464
- [c8] Qingzi Vera Liao, Wai-Tat Fu: Age differences in credibility judgment of online health information. IHI 2012: 353-362
- [c7] Wai-Tat Fu, Qingzi Vera Liao: Information and Attitude Diffusion in Networks. SBP 2012: 205-213
- [c6] Claudia Wagner, Vera Liao, Peter Pirolli, Les Nelson, Markus Strohmaier: It's Not in Their Tweets: Modeling Topical Expertise of Twitter Users. SocialCom/PASSAT 2012: 91-100
- 2011
- [c5] Vera Liao: How user reviews influence older and younger adults' credibility judgments of online health information. CHI Extended Abstracts 2011: 893-898
- [c4] Qingzi Vera Liao, Wai-Tat Fu: Effects of Aging and Individual Differences on Credibility Judgment of Online Health Information. CogSci 2011
- [c3] Qingzi Vera Liao, Wai-Tat Fu: The Impact of User Reviews on Older and Younger Adults' Attitude towards Online Medication Information. CogSci 2011
- [c2] Wai-Tat Fu, Vera Liao: Crowdsourcing Quality Control of Online Information: A Quality-Based Cascade Model. SBP 2011: 147-154
- 2010
- [c1] Qingzi Vera Liao: Effects of cognitive aging on credibility assessment of online health information. CHI Extended Abstracts 2010: 4321-4326