- research-article, August 2024
AutoXPCR: Automated Multi-Objective Model Selection for Time Series Forecasting
KDD '24: Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, August 2024, Pages 806–815. https://doi.org/10.1145/3637528.3672057
Automated machine learning (AutoML) streamlines the creation of ML models, but few specialized methods have approached the challenging domain of time series forecasting. Deep neural networks (DNNs) often deliver state-of-the-art predictive performance ...
- research-article, August 2024
Explanatory Model Monitoring to Understand the Effects of Feature Shifts on Performance
KDD '24: Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, August 2024, Pages 550–561. https://doi.org/10.1145/3637528.3671959
Monitoring and maintaining machine learning models are among the most critical challenges in translating recent advances in the field into real-world applications. However, current monitoring methods lack the capability to provide actionable insights ...
- research-article, August 2024
CAFO: Feature-Centric Explanation on Time Series Classification
KDD '24: Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, August 2024, Pages 1372–1382. https://doi.org/10.1145/3637528.3671724
In multivariate time series (MTS) classification, finding the important features (e.g., sensors) for model performance is crucial yet challenging due to the complex, high-dimensional nature of MTS data, intricate temporal dynamics, and the necessity for ...
- research-article, August 2024
Explainable and Interpretable Forecasts on Non-Smooth Multivariate Time Series for Responsible Gameplay
KDD '24: Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, August 2024, Pages 5126–5137. https://doi.org/10.1145/3637528.3671657
Multi-variate Time Series (MTS) forecasting has made large strides (with very negligible errors) through recent advancements in neural networks, e.g., Transformers. However, in critical situations like predicting gaming overindulgence that affects one's ...
- research-article, August 2024
XRL-Bench: A Benchmark for Evaluating and Comparing Explainable Reinforcement Learning Techniques
- Yu Xiong,
- Zhipeng Hu,
- Ye Huang,
- Runze Wu,
- Kai Guan,
- XingChen Fang,
- Ji Jiang,
- Tianze Zhou,
- YuJing Hu,
- Haoyu Liu,
- Tangjie Lyu,
- Changjie Fan
KDD '24: Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, August 2024, Pages 6073–6082. https://doi.org/10.1145/3637528.3671595
Reinforcement Learning (RL) has demonstrated substantial potential across diverse fields, yet understanding its decision-making process, especially in real-world scenarios where rationality and safety are paramount, is an ongoing challenge. This paper ...
- tutorial, August 2024
Explainable Artificial Intelligence on Biosignals for Clinical Decision Support
- Miriam Cindy Maurer,
- Jacqueline Michelle Metsch,
- Philip Hempel,
- Theresa Bender,
- Nicolai Spicher,
- Anne-Christin Hauschild
KDD '24: Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, August 2024, Pages 6597–6604. https://doi.org/10.1145/3637528.3671459
Deep learning has proven effective in several areas, including computer vision, natural language processing, and disease prediction, which can support clinicians in making decisions along the clinical pathway. However, in order to successfully integrate ...
- research-article, August 2024
Self-Explainable Temporal Graph Networks based on Graph Information Bottleneck
KDD '24: Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, August 2024, Pages 2572–2583. https://doi.org/10.1145/3637528.3671962
Temporal Graph Neural Networks (TGNN) have the ability to capture both the graph topology and dynamic dependencies of interactions within a graph over time. There has been a growing need to explain the predictions of TGNN models due to the difficulty in ...
- research-article, August 2024
Unraveling Block Maxima Forecasting Models with Counterfactual Explanation
KDD '24: Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, August 2024, Pages 562–573. https://doi.org/10.1145/3637528.3671923
Disease surveillance, traffic management, and weather forecasting are some of the key applications that could benefit from block maxima forecasting of a time series, as the extreme block maxima values often signify events of critical importance such as ...
- research-article, July 2024
Explainability for Transparent Conversational Information-Seeking
SIGIR '24: Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval, July 2024, Pages 1040–1050. https://doi.org/10.1145/3626772.3657768
The increasing reliance on digital information necessitates advancements in conversational search systems, particularly in terms of information transparency. While prior research in conversational information-seeking has concentrated on improving ...
- research-article, July 2024
Influence on Judgements of Learning Given Perceived AI Annotations
L@S '24: Proceedings of the Eleventh ACM Conference on Learning @ Scale, July 2024, Pages 221–231. https://doi.org/10.1145/3657604.3662044
In this study, we designed a tool to investigate the relationship between students' ability to render accurate judgements of learning (JOLs) with decision-making behavior when annotating their own work and comparing it with perceived AI-generated ...
- article, June 2024
Exploring Trade-offs in Explainable AI
ACM SIGAda Ada Letters (SIGADA), Volume 43, Issue 2, December 2023, Pages 36–42. https://doi.org/10.1145/3672359.3672363
Machine Learning (ML) models are increasingly used in systems that involve physical human interaction or decision-making systems that impact human health and safety. Ensuring that these systems are safe and reliable is an important topic of current AI ...
- research-article, May 2024, Just Accepted
Abusive Comment Detection in Tamil Code-Mixed Data by Adjusting Class Weights and Refining Features
ACM Transactions on Asian and Low-Resource Language Information Processing (TALLIP), Just Accepted. https://doi.org/10.1145/3664619
In recent years, a significant portion of the content on various platforms on the internet has been found to be offensive or abusive. Abusive comment detection can go a long way in preventing internet users from facing the adverse effects of coming in ...
- research-article, May 2024
Learning to Generate Explainable Stock Predictions using Self-Reflective Large Language Models
WWW '24: Proceedings of the ACM Web Conference 2024, May 2024, Pages 4304–4315. https://doi.org/10.1145/3589334.3645611
Explaining stock predictions is generally a difficult task for traditional non-generative deep learning models, where explanations are limited to visualizing the attention weights on important texts. Today, Large Language Models (LLMs) present a solution ...
- research-article, May 2024
A Counterfactual Framework for Learning and Evaluating Explanations for Recommender Systems
WWW '24: Proceedings of the ACM Web Conference 2024, May 2024, Pages 3723–3733. https://doi.org/10.1145/3589334.3645560
In the field of recommender systems, explainability remains a pivotal yet challenging aspect. To address this, we introduce the Learning to eXplain Recommendations (LXR) framework, a post-hoc, model-agnostic approach designed for providing counterfactual ...
- research-article, May 2024
Game-theoretic Counterfactual Explanation for Graph Neural Networks
WWW '24: Proceedings of the ACM Web Conference 2024, May 2024, Pages 503–514. https://doi.org/10.1145/3589334.3645419
Graph Neural Networks (GNNs) have been a powerful tool for node classification tasks in complex networks. However, their decision-making processes remain a black-box to users, making it challenging to understand the reasoning behind their predictions. ...
- research-article, May 2024
Building Trustworthy Human-Centric Autonomous Systems Via Explanations
AAMAS '24: Proceedings of the 23rd International Conference on Autonomous Agents and Multiagent Systems, May 2024, Pages 2752–2754
Autonomous systems suffer from people's mistrust, as these systems rely on highly accurate yet inscrutable black box methods that are not amenable to safety guarantees nor common sense understanding. As a result, we see the erosion of accountability, ...
- research-article, May 2024
Explainable Agents (XAg) by Design
AAMAS '24: Proceedings of the 23rd International Conference on Autonomous Agents and Multiagent Systems, May 2024, Pages 2712–2716
The likes of ChatGPT have propelled the use of AI techniques beyond our community's expectations. Along with this, the fear of AI has also risen, in particular around the ability, or lack thereof, of the AI system to explain its behaviours. Explainability ...
- research-article, May 2024
Causal Explanations for Sequential Decision-Making in Multi-Agent Systems
AAMAS '24: Proceedings of the 23rd International Conference on Autonomous Agents and Multiagent Systems, May 2024, Pages 771–779
We present CEMA: Causal Explanations in Multi-Agent systems, a framework for creating causal natural language explanations of an agent's decisions in dynamic sequential multi-agent systems to build more trustworthy autonomous agents. Unlike prior work ...
- research-article, May 2024
Approximating the Core via Iterative Coalition Sampling
- Ian Gemp,
- Marc Lanctot,
- Luke Marris,
- Yiran Mao,
- Edgar Duéñez-Guzmán,
- Sarah Perrin,
- Andras Gyorgy,
- Romuald Elie,
- Georgios Piliouras,
- Michael Kaisers,
- Daniel Hennes,
- Kalesha Bullard,
- Kate Larson,
- Yoram Bachrach
AAMAS '24: Proceedings of the 23rd International Conference on Autonomous Agents and Multiagent Systems, May 2024, Pages 669–678
The core is a central solution concept in cooperative game theory, defined as the set of feasible allocations or payments such that no subset of agents has incentive to break away and form their own subgroup or coalition. However, it has long been known ...
- extended-abstract, May 2024
Unlocking the Potential of Machine Ethics with Explainability
AAMAS '24: Proceedings of the 23rd International Conference on Autonomous Agents and Multiagent Systems, May 2024, Pages 2477–2479
Roughly speaking, the research field of machine ethics deals with devising behavioral constraints on computational systems to ensure restricted, morally acceptable behavior. The potential benefits of researching machine ethics are substantial, ...