DOI: 10.1145/3375627.3375815
The Offense-Defense Balance of Scientific Knowledge: Does Publishing AI Research Reduce Misuse?

Published: 07 February 2020

Abstract

There is growing concern over the potential misuse of artificial intelligence (AI) research. Publishing scientific research can facilitate misuse of the technology, but the research can also contribute to protections against misuse. This paper addresses the balance between these two effects. Our theoretical framework elucidates the factors governing whether the published research will be more useful for attackers or defenders, such as the possibility for adequate defensive measures, or the independent discovery of the knowledge outside of the scientific community. The balance will vary across scientific fields. However, we show that the existing conversation within AI has imported concepts and conclusions from prior debates within computer security over the disclosure of software vulnerabilities. While disclosure of software vulnerabilities often favours defence, this cannot be assumed for AI research. The AI research community should consider concepts and policies from a broad set of adjacent fields, and ultimately needs to craft policy well-suited to its particular challenges.



Published In

AIES '20: Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society
February 2020
439 pages
ISBN:9781450371100
DOI:10.1145/3375627

Publisher

Association for Computing Machinery, New York, NY, United States



Author Tags

  1. AI governance
  2. disclosure of research
  3. misuse of AI
  4. publication norms

Qualifiers

  • Research-article

Funding Sources

  • Open Philanthropy Project

Conference

AIES '20

Acceptance Rates

Overall Acceptance Rate: 61 of 162 submissions, 38%

