DOI: 10.1145/3319535.3354222 (CCS Conference Proceedings)
research-article

AdVersarial: Perceptual Ad Blocking meets Adversarial Machine Learning

Published: 06 November 2019

  • Abstract

    Perceptual ad-blocking is a novel approach that detects online advertisements based on their visual content. Compared to traditional filter lists, the use of perceptual signals is believed to be less prone to an arms race with web publishers and ad networks. We demonstrate that this may not be the case. We describe attacks on multiple perceptual ad-blocking techniques, and unveil a new arms race that likely disfavors ad-blockers. Unexpectedly, perceptual ad-blocking can also introduce new vulnerabilities that let an attacker bypass web security boundaries and mount DDoS attacks. We first analyze the design space of perceptual ad-blockers and present a unified architecture that incorporates prior academic and commercial work. We then explore a variety of attacks on the ad-blocker's detection pipeline that enable publishers or ad networks to evade or detect ad-blocking, and at times even abuse its high privilege level to bypass web security boundaries. On the one hand, we show that perceptual ad-blocking must visually classify rendered web content to escape an arms race centered on obfuscation of page markup. On the other, we present a concrete set of attacks on visual ad-blockers by constructing adversarial examples in a real web page context. For seven ad detectors, we create perturbed ads, ad-disclosure logos, and native web content that mislead perceptual ad-blocking with 100% success rates. In one of our attacks, we demonstrate how a malicious user can upload adversarial content, such as a perturbed image in a Facebook post, that fools the ad-blocker into removing another user's non-ad content. Moving beyond the Web and the visual domain, we also build adversarial examples for AdblockRadio, an open-source radio client that uses machine learning to detect ads in raw audio streams.

    Supplementary Material

    WEBM File (p2005-tramer.webm)




        Information

        Published In

        CCS '19: Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security
        November 2019
        2755 pages
        ISBN: 9781450367479
        DOI: 10.1145/3319535

        Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

        Publisher

        Association for Computing Machinery, New York, NY, United States


        Author Tags

        1. ad blocking
        2. adversarial example
        3. machine learning

        Qualifiers

        • Research-article

        Conference

        CCS '19
        Sponsor: SIGSAC

        Acceptance Rates

        CCS '19 Paper Acceptance Rate: 149 of 934 submissions, 16%
        Overall Acceptance Rate: 1,261 of 6,999 submissions, 18%



        Cited By

        • (2024) Pervasive User Data Collection from Cyberspace: Privacy Concerns and Countermeasures. Cryptography 8(1):5. DOI: 10.3390/cryptography8010005. Online publication date: 31-Jan-2024.
        • (2024) AdFlush: A Real-World Deployable Machine Learning Solution for Effective Advertisement and Web Tracker Prevention. Proceedings of the ACM on Web Conference 2024, 1902--1913. DOI: 10.1145/3589334.3645698. Online publication date: 13-May-2024.
        • (2024) LAFIT: Efficient and Reliable Evaluation of Adversarial Defenses With Latent Features. IEEE Transactions on Pattern Analysis and Machine Intelligence 46(1):354--369. DOI: 10.1109/TPAMI.2023.3323698. Online publication date: Jan-2024.
        • (2024) Evading Black-box Classifiers Without Breaking Eggs. 2024 IEEE Conference on Secure and Trustworthy Machine Learning (SaTML), 408--424. DOI: 10.1109/SaTML59370.2024.00027. Online publication date: 9-Apr-2024.
        • (2023) AutoFR. Proceedings of the 32nd USENIX Conference on Security Symposium, 7535--7552. DOI: 10.5555/3620237.3620659. Online publication date: 9-Aug-2023.
        • (2023) Preprocessors Matter! Realistic Decision-Based Attacks on Machine Learning Systems. Proceedings of the 40th International Conference on Machine Learning, 32008--32032. DOI: 10.5555/3618408.3619735. Online publication date: 23-Jul-2023.
        • (2023) "Real Attackers Don't Compute Gradients": Bridging the Gap Between Adversarial ML Research and Practice. 2023 IEEE Conference on Secure and Trustworthy Machine Learning (SaTML), 339--364. DOI: 10.1109/SaTML54575.2023.00031. Online publication date: Mar-2023.
        • (2023) AdvDiffuser: Natural Adversarial Example Synthesis with Diffusion Models. 2023 IEEE/CVF International Conference on Computer Vision (ICCV), 4539--4549. DOI: 10.1109/ICCV51070.2023.00421. Online publication date: 1-Oct-2023.
        • (2023) vWitness: Certifying Web Page Interactions with Computer Vision. 2023 53rd Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN), 431--444. DOI: 10.1109/DSN58367.2023.00048. Online publication date: Jun-2023.
        • (2023) Towards Defending Multiple ℓp-Norm Bounded Adversarial Perturbations via Gated Batch Normalization. International Journal of Computer Vision 132(6):1881--1898. DOI: 10.1007/s11263-023-01884-w. Online publication date: 4-Sep-2023.
