- research-article, July 2024 (Just Accepted)
Fair-RGNN: Mitigating Relational Bias on Knowledge Graphs
ACM Transactions on Knowledge Discovery from Data (TKDD), Just Accepted. https://doi.org/10.1145/3681792
Knowledge graph data are prevalent in real-world applications, and knowledge graph neural networks (KGNNs) are essential techniques for knowledge graph representation learning. Although KGNN effectively models the structural information from knowledge ...
- research-article, July 2024
Mitigating social biases of pre-trained language models via contrastive self-debiasing with double data augmentation
Abstract: Pre-trained Language Models (PLMs) have been shown to inherit and even amplify the social biases contained in the training corpus, leading to undesired stereotypes in real-world applications. Existing techniques for mitigating the social biases of ...
- research-article, May 2024 (Just Accepted)
Boosting Fair Classifier Generalization through Adaptive Priority Reweighing
ACM Transactions on Knowledge Discovery from Data (TKDD), Just Accepted. https://doi.org/10.1145/3665895
With the increasing penetration of machine learning applications in critical decision-making areas, calls for algorithmic fairness are more prominent. Although there have been various modalities to improve algorithmic fairness through learning with ...
- introduction, May 2024
AI Driven Online Advertising: Market Design, Generative AI, and Ethics
WWW '24: Companion Proceedings of the ACM Web Conference 2024, Pages 1407–1409. https://doi.org/10.1145/3589335.3641295
Online advertising contributes a considerable part of the tech sector's revenue, and has been remarkably influencing the public agenda. With evolving developments, AI is playing an increasingly significant role in online advertising. We propose to create ...
- survey, February 2024
Explainability for Large Language Models: A Survey
- Haiyan Zhao,
- Hanjie Chen,
- Fan Yang,
- Ninghao Liu,
- Huiqi Deng,
- Hengyi Cai,
- Shuaiqiang Wang,
- Dawei Yin,
- Mengnan Du
ACM Transactions on Intelligent Systems and Technology (TIST), Volume 15, Issue 2, Article No.: 20, Pages 1–38. https://doi.org/10.1145/3639372
Large language models (LLMs) have demonstrated impressive capabilities in natural language processing. However, their internal mechanisms are still unclear, and this lack of transparency poses unwanted risks for downstream applications. Therefore, ...
- research-article, January 2024
Unifying Fourteen Post-Hoc Attribution Methods With Taylor Interactions
IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), Volume 46, Issue 7, Pages 4625–4640. https://doi.org/10.1109/TPAMI.2024.3358410
Various attribution methods have been developed to explain deep neural networks (DNNs) by inferring the attribution/importance/contribution score of each input variable to the final output. However, existing attribution methods are often built upon ...
- research-article, December 2023
Shortcut Learning of Large Language Models in Natural Language Understanding
Shortcuts often hinder the robustness of large language models.
- research-article, May 2024
Black-box backdoor defense via zero-shot image purification
NIPS '23: Proceedings of the 37th International Conference on Neural Information Processing Systems, Article No.: 2505, Pages 57336–57366.
Backdoor attacks inject poisoned samples into the training data, resulting in the misclassification of the poisoned input during a model's deployment. Defending against such attacks is challenging, especially for real-world black-box models where only ...
- research-article, May 2024
M4: a unified XAI benchmark for faithfulness evaluation of feature attribution methods across metrics, modalities and models
NIPS '23: Proceedings of the 37th International Conference on Neural Information Processing Systems, Article No.: 81, Pages 1630–1643.
While Explainable Artificial Intelligence (XAI) techniques have been widely studied to explain predictions made by deep neural networks, the way to evaluate the faithfulness of explanation results remains challenging, due to the heterogeneity of ...
- short-paper, October 2023
Exposing Model Theft: A Robust and Transferable Watermark for Thwarting Model Extraction Attacks
CIKM '23: Proceedings of the 32nd ACM International Conference on Information and Knowledge Management, Pages 4315–4319. https://doi.org/10.1145/3583780.3615211
The increasing prevalence of Deep Neural Networks (DNNs) in cloud-based services has led to their widespread use through various APIs. However, recent studies reveal the susceptibility of these public APIs to model extraction attacks, where adversaries ...
- research-article, October 2023
Attacking Neural Networks with Neural Networks: Towards Deep Synchronization for Backdoor Attacks
CIKM '23: Proceedings of the 32nd ACM International Conference on Information and Knowledge Management, Pages 608–618. https://doi.org/10.1145/3583780.3614784
Backdoor attacks inject poisoned samples into training data, where backdoor triggers are embedded into the model trained on the mixture of poisoned and clean samples. An interesting phenomenon can be observed in the training process: the loss of poisoned ...
- Article, September 2023
Deep Serial Number: Computational Watermark for DNN Intellectual Property Protection
Machine Learning and Knowledge Discovery in Databases: Applied Data Science and Demo Track, Pages 157–173. https://doi.org/10.1007/978-3-031-43427-3_10
Abstract: In this paper, we present DSN (Deep Serial Number), a simple yet effective watermarking algorithm designed specifically for deep neural networks (DNNs). Unlike traditional methods that incorporate identification signals into DNNs, our approach ...
- Article, September 2023
Mitigating Algorithmic Bias with Limited Annotations
Machine Learning and Knowledge Discovery in Databases: Research Track, Pages 241–258. https://doi.org/10.1007/978-3-031-43415-0_15
Abstract: Existing work on fairness modeling commonly assumes that sensitive attributes for all instances are fully available, which may not be true in many real-world applications due to the high cost of acquiring sensitive information. When sensitive ...
- research-article, August 2023
Fairness via group contribution matching
IJCAI '23: Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence, Article No.: 49, Pages 436–445. https://doi.org/10.24963/ijcai.2023/49
Fairness issues in Deep Learning models have recently received increasing attention due to their significant societal impact. Although methods for mitigating unfairness are constantly proposed, little research has been conducted to understand how ...
- research-article, July 2023
FAIRER: fairness as decision rationale alignment
ICML '23: Proceedings of the 40th International Conference on Machine Learning, Article No.: 805, Pages 19471–19489.
Deep neural networks (DNNs) have made significant progress, but often suffer from fairness issues, as deep models typically show distinct accuracy differences among certain subgroups (e.g., males and females). Existing research addresses this critical ...
- research-article, August 2022
Towards Learning Disentangled Representations for Time Series
KDD '22: Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Pages 3270–3278. https://doi.org/10.1145/3534678.3539140
Promising progress has been made toward learning efficient time series representations in recent years, but the learned representations often lack interpretability and do not encode semantic meanings by the complex interactions of many latent factors. ...
- letter, December 2021
Towards structured NLP interpretation via graph explainers
Abstract: Natural language processing (NLP) models have been increasingly deployed in real-world applications, and interpretation for textual data has also attracted dramatic attention recently. Most existing methods generate feature importance ...
In this paper, we propose two kinds of structured interpretation for pre-trained GNNs for NLP applications: edge-level interpretation and subgraph-level interpretation.
- research-article, June 2024
Fairness via representation neutralization
NIPS '21: Proceedings of the 35th International Conference on Neural Information Processing Systems, Article No.: 925, Pages 12091–12103.
Existing bias mitigation methods for DNN models primarily work on learning debiased encoders. This process not only requires a lot of instance-level annotations for sensitive attributes, but also does not guarantee that all fairness-sensitive information ...
- research-article, August 2021
Mutual Information Preserving Back-propagation: Learn to Invert for Faithful Attribution
KDD '21: Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, Pages 258–268. https://doi.org/10.1145/3447548.3467310
Back-propagation based visualizations have been proposed to interpret deep neural networks (DNNs), some of which produce interpretations with good visual quality. However, there exist doubts about whether these intuitive visualizations are related to ...
- research-article, July 2021
Fairness in Deep Learning: A Computational Perspective
IEEE Intelligent Systems, Volume 36, Issue 4, Pages 25–34. https://doi.org/10.1109/MIS.2020.3000681
Fairness in deep learning has attracted tremendous attention recently, as deep learning is increasingly being used in high-stakes decision-making applications that affect individual lives. We provide a review covering recent progress to tackle ...