Milad Nasr
2020 – today
- 2024
- [c27] Nicholas Carlini, Daniel Paleka, Krishnamurthy Dj Dvijotham, Thomas Steinke, Jonathan Hayase, A. Feder Cooper, Katherine Lee, Matthew Jagielski, Milad Nasr, Arthur Conmy, Eric Wallace, David Rolnick, Florian Tramèr: Stealing part of a production language model. ICML 2024
- [c26] Karan Chadha, Matthew Jagielski, Nicolas Papernot, Christopher A. Choquette-Choo, Milad Nasr: Auditing Private Prediction. ICML 2024
- [c25] Aldo G. Carranza, Rezsa Farahani, Natalia Ponomareva, Alexey Kurakin, Matthew Jagielski, Milad Nasr: Synthetic Query Generation for Privacy-Preserving Deep Retrieval Systems using Differentially Private Language Models. NAACL-HLT 2024: 3920-3930
- [c24] Edoardo Debenedetti, Giorgio Severi, Milad Nasr, Christopher A. Choquette-Choo, Matthew Jagielski, Eric Wallace, Nicholas Carlini, Florian Tramèr: Privacy Side Channels in Machine Learning Systems. USENIX Security Symposium 2024
- [i34] Xinyu Tang, Ashwinee Panda, Milad Nasr, Saeed Mahloujifar, Prateek Mittal: Private Fine-tuning of Large Language Models with Zeroth-order Optimization. CoRR abs/2401.04343 (2024)
- [i33] Karan Chadha, Matthew Jagielski, Nicolas Papernot, Christopher A. Choquette-Choo, Milad Nasr: Auditing Private Prediction. CoRR abs/2402.09403 (2024)
- [i32] Jonathan Hayase, Ema Borevkovic, Nicholas Carlini, Florian Tramèr, Milad Nasr: Query-Based Adversarial Prompt Generation. CoRR abs/2402.12329 (2024)
- [i31] Nicholas Carlini, Daniel Paleka, Krishnamurthy (Dj) Dvijotham, Thomas Steinke, Jonathan Hayase, A. Feder Cooper, Katherine Lee, Matthew Jagielski, Milad Nasr, Arthur Conmy, Eric Wallace, David Rolnick, Florian Tramèr: Stealing Part of a Production Language Model. CoRR abs/2403.06634 (2024)
- [i30] Harsh Chaudhari, Giorgio Severi, John Abascal, Matthew Jagielski, Christopher A. Choquette-Choo, Milad Nasr, Cristina Nita-Rotaru, Alina Oprea: Phantom: General Trigger Attacks on Retrieval Augmented Language Generation. CoRR abs/2405.20485 (2024)
- [i29] Ali Zand, Milad Nasr: Avoiding Generative Model Writer's Block With Embedding Nudging. CoRR abs/2408.15450 (2024)
- [i28] Thomas Steinke, Milad Nasr, Arun Ganesh, Borja Balle, Christopher A. Choquette-Choo, Matthew Jagielski, Jamie Hayes, Abhradeep Guha Thakurta, Adam D. Smith, Andreas Terzis: The Last Iterate Advantage: Empirical Auditing and Principled Heuristic Analysis of Differentially Private SGD. CoRR abs/2410.06186 (2024)
- [i27] Yangsibo Huang, Daogao Liu, Lynn Chua, Badih Ghazi, Pritish Kamath, Ravi Kumar, Pasin Manurangsi, Milad Nasr, Amer Sinha, Chiyuan Zhang: Unlearn and Burn: Adversarial Machine Unlearning Requests Destroy Model Accuracy. CoRR abs/2410.09591 (2024)
- [i26] Nicholas Carlini, Milad Nasr: Remote Timing Attacks on Efficient Language Model Inference. CoRR abs/2410.17175 (2024)
- 2023
- [j2] Meisam Hejazinia, Dzmitry Huba, Ilias Leontiadis, Kiwan Maeng, Mani Malek, Luca Melis, Ilya Mironov, Milad Nasr, Kaikai Wang, Carole-Jean Wu: Federated Ensemble Learning: Increasing the Capacity of Label Private Recommendation Systems. IEEE Data Eng. Bull. 46(1): 145-157 (2023)
- [c23] Arun Ganesh, Mahdi Haghifam, Milad Nasr, Sewoong Oh, Thomas Steinke, Om Thakkar, Abhradeep Guha Thakurta, Lun Wang: Why Is Public Pretraining Necessary for Private Model Training? ICML 2023: 10611-10627
- [c22] Milad Nasr, Saeed Mahloujifar, Xinyu Tang, Prateek Mittal, Amir Houmansadr: Effectively Using Public Data in Privacy Preserving Machine Learning. ICML 2023: 25718-25732
- [c21] Daphne Ippolito, Florian Tramèr, Milad Nasr, Chiyuan Zhang, Matthew Jagielski, Katherine Lee, Christopher A. Choquette-Choo, Nicholas Carlini: Preventing Generation of Verbatim Memorization in Language Models Gives a False Sense of Privacy. INLG 2023: 28-53
- [c20] Daphne Ippolito, Nicholas Carlini, Katherine Lee, Milad Nasr, Yun William Yu: Reverse-Engineering Decoding Strategies Given Blackbox Access to a Language Generation System. INLG 2023: 396-406
- [c19] Thomas Steinke, Milad Nasr, Matthew Jagielski: Privacy Auditing with One (1) Training Run. NeurIPS 2023
- [c18] Nicholas Carlini, Milad Nasr, Christopher A. Choquette-Choo, Matthew Jagielski, Irena Gao, Pang Wei Koh, Daphne Ippolito, Florian Tramèr, Ludwig Schmidt: Are aligned neural networks adversarially aligned? NeurIPS 2023
- [c17] Matthew Jagielski, Milad Nasr, Katherine Lee, Christopher A. Choquette-Choo, Nicholas Carlini, Florian Tramèr: Students Parrot Their Teachers: Membership Inference on Model Distillation. NeurIPS 2023
- [c16] Milad Nasr, Jamie Hayes, Thomas Steinke, Borja Balle, Florian Tramèr, Matthew Jagielski, Nicholas Carlini, Andreas Terzis: Tight Auditing of Differentially Private Machine Learning. USENIX Security Symposium 2023: 1631-1648
- [c15] Nicholas Carlini, Jamie Hayes, Milad Nasr, Matthew Jagielski, Vikash Sehwag, Florian Tramèr, Borja Balle, Daphne Ippolito, Eric Wallace: Extracting Training Data from Diffusion Models. USENIX Security Symposium 2023: 5253-5270
- [i25] Nicholas Carlini, Jamie Hayes, Milad Nasr, Matthew Jagielski, Vikash Sehwag, Florian Tramèr, Borja Balle, Daphne Ippolito, Eric Wallace: Extracting Training Data from Diffusion Models. CoRR abs/2301.13188 (2023)
- [i24] Milad Nasr, Jamie Hayes, Thomas Steinke, Borja Balle, Florian Tramèr, Matthew Jagielski, Nicholas Carlini, Andreas Terzis: Tight Auditing of Differentially Private Machine Learning. CoRR abs/2302.07956 (2023)
- [i23] Arun Ganesh, Mahdi Haghifam, Milad Nasr, Sewoong Oh, Thomas Steinke, Om Thakkar, Abhradeep Thakurta, Lun Wang: Why Is Public Pretraining Necessary for Private Model Training? CoRR abs/2302.09483 (2023)
- [i22] Matthew Jagielski, Milad Nasr, Christopher A. Choquette-Choo, Katherine Lee, Nicholas Carlini: Students Parrot Their Teachers: Membership Inference on Model Distillation. CoRR abs/2303.03446 (2023)
- [i21] Aldo Gael Carranza, Rezsa Farahani, Natalia Ponomareva, Alex Kurakin, Matthew Jagielski, Milad Nasr: Privacy-Preserving Recommender Systems with Synthetic Query Generation using Differentially Private Large Language Models. CoRR abs/2305.05973 (2023)
- [i20] Thomas Steinke, Milad Nasr, Matthew Jagielski: Privacy Auditing with One (1) Training Run. CoRR abs/2305.08846 (2023)
- [i19] Nicholas Carlini, Milad Nasr, Christopher A. Choquette-Choo, Matthew Jagielski, Irena Gao, Anas Awadalla, Pang Wei Koh, Daphne Ippolito, Katherine Lee, Florian Tramèr, Ludwig Schmidt: Are aligned neural networks adversarially aligned? CoRR abs/2306.15447 (2023)
- [i18] Daphne Ippolito, Nicholas Carlini, Katherine Lee, Milad Nasr, Yun William Yu: Reverse-Engineering Decoding Strategies Given Blackbox Access to a Language Generation System. CoRR abs/2309.04858 (2023)
- [i17] Edoardo Debenedetti, Giorgio Severi, Nicholas Carlini, Christopher A. Choquette-Choo, Matthew Jagielski, Milad Nasr, Eric Wallace, Florian Tramèr: Privacy Side Channels in Machine Learning Systems. CoRR abs/2309.05610 (2023)
- [i16] A. Feder Cooper, Katherine Lee, James Grimmelmann, Daphne Ippolito, Christopher Callison-Burch, Christopher A. Choquette-Choo, Niloofar Mireshghallah, Miles Brundage, David Mimno, Madiha Zahrah Choksi, Jack M. Balkin, Nicholas Carlini, Christopher De Sa, Jonathan Frankle, Deep Ganguli, Bryant Gipson, Andres Guadamuz, Swee Leng Harris, Abigail Z. Jacobs, Elizabeth Joh, Gautam Kamath, Mark Lemley, Cass Matthews, Christine McLeavey, Corynne McSherry, Milad Nasr, Paul Ohm, Adam Roberts, Tom Rubin, Pamela Samuelson, Ludwig Schubert, Kristen Vaccaro, Luis Villa, Felix Wu, Elana Zeide: Report of the 1st Workshop on Generative AI and Law. CoRR abs/2311.06477 (2023)
- [i15] Milad Nasr, Nicholas Carlini, Jonathan Hayase, Matthew Jagielski, A. Feder Cooper, Daphne Ippolito, Christopher A. Choquette-Choo, Eric Wallace, Florian Tramèr, Katherine Lee: Scalable Extraction of Training Data from (Production) Language Models. CoRR abs/2311.17035 (2023)
- 2022
- [j1] Xinyu Tang, Milad Nasr, Saeed Mahloujifar, Virat Shejwalkar, Liwei Song, Amir Houmansadr, Prateek Mittal: Machine Learning with Differentially Private Labels: Mechanisms and Frameworks. Proc. Priv. Enhancing Technol. 2022(4): 332-350 (2022)
- [c14] Nicholas Carlini, Steve Chien, Milad Nasr, Shuang Song, Andreas Terzis, Florian Tramèr: Membership Inference Attacks From First Principles. SP 2022: 1897-1914
- [c13] Xinyu Tang, Saeed Mahloujifar, Liwei Song, Virat Shejwalkar, Milad Nasr, Amir Houmansadr, Prateek Mittal: Mitigating Membership Inference Attacks by Self-Distillation Through a Novel Ensemble Architecture. USENIX Security Symposium 2022: 1433-1450
- [i14] Meisam Hejazinia, Dzmitry Huba, Ilias Leontiadis, Kiwan Maeng, Mani Malek, Luca Melis, Ilya Mironov, Milad Nasr, Kaikai Wang, Carole-Jean Wu: FEL: High Capacity Learning for Recommendation and Ranking via Federated Ensemble Learning. CoRR abs/2206.03852 (2022)
- [i13] Nicholas Carlini, Vitaly Feldman, Milad Nasr: No Free Lunch in "Privacy for Free: How does Dataset Condensation Help Privacy". CoRR abs/2209.14987 (2022)
- [i12] Daphne Ippolito, Florian Tramèr, Milad Nasr, Chiyuan Zhang, Matthew Jagielski, Katherine Lee, Christopher A. Choquette-Choo, Nicholas Carlini: Preventing Verbatim Memorization in Language Models Gives a False Sense of Privacy. CoRR abs/2210.17546 (2022)
- 2021
- [c12] Alireza Bahramali, Milad Nasr, Amir Houmansadr, Dennis Goeckel, Don Towsley: Robust Adversarial Attacks Against DNN-Based Wireless Communication Systems. CCS 2021: 126-140
- [c11] Milad Nasr, Shuang Song, Abhradeep Thakurta, Nicolas Papernot, Nicholas Carlini: Adversary Instantiation: Lower Bounds for Differentially Private Machine Learning. SP 2021: 866-882
- [c10] Milad Nasr, Alireza Bahramali, Amir Houmansadr: Defeating DNN-Based Traffic Analysis Systems in Real-Time With Blind Adversarial Perturbations. USENIX Security Symposium 2021: 2705-2722
- [i11] Milad Nasr, Shuang Song, Abhradeep Thakurta, Nicolas Papernot, Nicholas Carlini: Adversary Instantiation: Lower Bounds for Differentially Private Machine Learning. CoRR abs/2101.04535 (2021)
- [i10] Alireza Bahramali, Milad Nasr, Amir Houmansadr, Dennis Goeckel, Don Towsley: Robust Adversarial Attacks Against DNN-Based Wireless Communication Systems. CoRR abs/2102.00918 (2021)
- [i9] Xinyu Tang, Saeed Mahloujifar, Liwei Song, Virat Shejwalkar, Milad Nasr, Amir Houmansadr, Prateek Mittal: Mitigating Membership Inference Attacks by Self-Distillation Through a Novel Ensemble Architecture. CoRR abs/2110.08324 (2021)
- [i8] Nicholas Carlini, Steve Chien, Milad Nasr, Shuang Song, Andreas Terzis, Florian Tramèr: Membership Inference Attacks From First Principles. CoRR abs/2112.03570 (2021)
- 2020
- [c9] Milad Nasr, Michael Carl Tschantz: Bidding strategies with gender nondiscrimination constraints for online ad auctions. FAT* 2020: 337-347
- [c8] Milad Nasr, Hadi Zolfaghari, Amir Houmansadr, Amirhossein Ghafari: MassBrowser: Unblocking the Censored Web for the Masses, by the Masses. NDSS 2020
- [i7] Milad Nasr, Alireza Bahramali, Amir Houmansadr: Blind Adversarial Network Perturbations. CoRR abs/2002.06495 (2020)
- [i6] Milad Nasr, Reza Shokri, Amir Houmansadr: Improving Deep Learning with Differential Privacy using Gradient Encoding and Denoising. CoRR abs/2007.11524 (2020)
2010 – 2019
- 2019
- [c7] Milad Nasr, Sadegh Farhang, Amir Houmansadr, Jens Grossklags: Enemy At the Gateways: Censorship-Resilient Proxy Distribution Using Game Theory. NDSS 2019
- [c6] Milad Nasr, Reza Shokri, Amir Houmansadr: Comprehensive Privacy Analysis of Deep Learning: Passive and Active White-box Inference Attacks against Centralized and Federated Learning. IEEE Symposium on Security and Privacy 2019: 739-753
- [i5] Milad Nasr, Michael Carl Tschantz: Bidding Strategies with Gender Nondiscrimination Constraints for Online Ad Auctions. CoRR abs/1909.02156 (2019)
- 2018
- [c5] Milad Nasr, Reza Shokri, Amir Houmansadr: Machine Learning with Membership Privacy using Adversarial Regularization. CCS 2018: 634-646
- [c4] Milad Nasr, Alireza Bahramali, Amir Houmansadr: DeepCorr: Strong Flow Correlation Attacks on Tor Using Deep Learning. CCS 2018: 1962-1976
- [i4] Milad Nasr, Reza Shokri, Amir Houmansadr: Machine Learning with Membership Privacy using Adversarial Regularization. CoRR abs/1807.05852 (2018)
- [i3] Milad Nasr, Alireza Bahramali, Amir Houmansadr: DeepCorr: Strong Flow Correlation Attacks on Tor Using Deep Learning. CoRR abs/1808.07285 (2018)
- [i2] Milad Nasr, Reza Shokri, Amir Houmansadr: Comprehensive Privacy Analysis of Deep Learning: Stand-alone and Federated Learning under Passive and Active White-box Inference Attacks. CoRR abs/1812.00910 (2018)
- 2017
- [c3] Milad Nasr, Hadi Zolfaghari, Amir Houmansadr: The Waterfall of Liberty: Decoy Routing Circumvention that Resists Routing Attacks. CCS 2017: 2037-2052
- [c2] Milad Nasr, Amir Houmansadr, Arya Mazumdar: Compressive Traffic Analysis: A New Paradigm for Scalable Traffic Analysis. CCS 2017: 2053-2069
- [i1] Milad Nasr, Sadegh Farhang, Amir Houmansadr, Jens Grossklags: Enemy At the Gateways: A Game Theoretic Approach to Proxy Distribution. CoRR abs/1709.04030 (2017)
- 2016
- [c1] Milad Nasr, Amir Houmansadr: GAME OF DECOYS: Optimal Decoy Routing Through Game Theory. CCS 2016: 1727-1738