Research Article
Open Access

Counterfactual Explanation at Will, with Zero Privacy Leakage

Published: 30 May 2024

Abstract

While counterfactuals have been extensively studied as an intuitive explanation of model predictions, they still see limited adoption in practice due to two obstacles: (a) they rely on extensive access to the model for explanation, which the model owner may not provide; and (b) counterfactuals carry information that adversarial users can exploit to launch model extraction attacks. To address these challenges, we propose CPC, a data-driven approach to counterfactual explanation. CPC works at the client side and gives full control and the right-to-explain to model users, even when model owners opt not to provide explanations. Moreover, CPC guarantees that adversarial users cannot exploit counterfactuals to extract models. We formulate the properties and fundamental problems underlying CPC, study their complexity, and develop effective algorithms. Using real-world datasets and a user study, we verify that CPC does prevent adversaries from exploiting counterfactuals for model extraction attacks, and is orders of magnitude faster than existing explainers, while maintaining comparable and often higher quality.
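For intuition, here is a minimal sketch of what a purely data-driven, client-side counterfactual can look like: using only a labelled reference dataset, and without issuing a single query to the model, a user takes the nearest instance with the desired outcome as a counterfactual. This illustrates the general nearest-instance idea behind data-driven explanation, not the paper's CPC algorithm; the function name, the L1 distance, and the toy loan data are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of a data-driven counterfactual computed at the client side:
# no queries are issued to the model, so the explanation cannot leak it.
def data_driven_counterfactual(x, data, labels, target_label):
    """Return the instance in `data` with `target_label` that is nearest to `x`."""
    candidates = data[labels == target_label]
    if candidates.size == 0:
        return None  # the reference data holds no instance with the desired outcome
    dists = np.abs(candidates - x).sum(axis=1)  # L1 distance over the features
    return candidates[np.argmin(dists)]

# Usage: a rejected loan applicant (label 0) asks for the closest accepted
# profile (label 1); features are (age, income).
data = np.array([[30, 40_000], [45, 85_000], [28, 52_000]], dtype=float)
labels = np.array([0, 1, 1])
x = np.array([29, 45_000], dtype=float)
print(data_driven_counterfactual(x, data, labels, target_label=1))
# nearest accepted profile: age 28, income 52000
```

Because the explanation is derived from data the user already holds rather than from model queries, an adversary observing such counterfactuals learns nothing about the model beyond what that data reveals, which is the intuition behind the zero-leakage guarantee.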

Supplemental Material

MP4 File
Presentation video


Published In

Proceedings of the ACM on Management of Data, Volume 2, Issue 3 (SIGMOD), June 2024. 1953 pages. EISSN: 2836-6573. DOI: 10.1145/3670010.
This work is licensed under a Creative Commons Attribution 4.0 International License.

      Publisher

      Association for Computing Machinery

      New York, NY, United States

      Publication History

      Published: 30 May 2024
      Published in PACMMOD Volume 2, Issue 3

      Author Tags

      1. counterfactual
      2. database for explainable machine learning
      3. in-database explanation
      4. model explainability
      5. model extraction
      6. privacy

