Opinion

Is Computing a Discipline in Crisis?

Communications of the ACM, Volume 67, Issue 7 (July 2024), DOI: 10.1145/3676630
Online First: 06 June 2024; Published: 02 July 2024
    Computing has changed significantly since the launch of ChatGPT in 2022. For decades, artificial intelligence (AI) was a subfield of computer science that overpromised and underdelivered. Language mastery has been the holy grail of AI from its early days. Suddenly, computers can communicate fluently in natural language—sometimes nonsense, but always in very polished language. Suddenly, the dream of artificial general intelligence, which always seemed beyond the horizon, does not seem so far off.
    In January 2024, a survey of thousands of AI authors on the future of AI was published[a] on arXiv. While the survey quickly came under some criticism,[b] its main message cannot be ignored: The AI community clearly has deep concerns about where the field is heading. Human-extinction risk often dominates discussions about AI risks, but the survey revealed deep concerns about other significant risks, such as the spread of false information, large-scale manipulation of public opinion, authoritarian control of populations, worsening economic inequality, and more.
    There was little agreement among the surveyed authors on what to do about these concerns. For example, there was disagreement about whether faster or slower AI progress would be better for the future of humanity. There was, however, broad agreement that research aimed at minimizing potential risks from AI ought to be prioritized more. So, I went to the call for papers of NeurIPS 2024, one of the largest AI conferences, and searched for the word “risk.” I did find it: submissions that do not meet some requirements “risk being removed without consideration.” In fairness, NeurIPS papers are required to adhere to the NeurIPS Code of Ethics[c] and may be subject to an ethics review, in which the issue of harmful consequences will be considered, but AI risks have yet to become a major topic at NeurIPS.
    The point here is not to single out NeurIPS. The concerns of the AI community about the direction of its field cannot be addressed by one researcher at a time or one conference at a time, just as concerns about climate change cannot be addressed by individual actions. These concerns require collective action. Is the AI community capable of agreeing on a collective action?
    ACM is the largest professional society in computing. It has a special interest group on artificial intelligence (SIGAI), but SIGAI does not sponsor major AI conferences or journals. ACM does co-sponsor the AAAI/ACM Conference on AI, Ethics, and Society, yet the general sense is that ACM allowed itself to “lose” AI many years ago. In fact, AI has its own professional association, the Association for the Advancement of Artificial Intelligence (AAAI), which does sponsor the AAAI Conference on Artificial Intelligence. But this North America-based conference has a “sister” conference, the International Joint Conference on Artificial Intelligence (IJCAI), which is an independent professional society. Past attempts to bring AAAI and IJCAI together have failed. Furthermore, the huge growth over the past decade has been in the area of machine learning. NeurIPS 2023, for example, was attended by more than 13,000 participants. NeurIPS is sponsored by the NeurIPS Foundation, yet another professional society. The AI community is badly fragmented, which makes it quite challenging to agree on collective actions. Hence the sense of crisis.
    A deeper chasm in the AI community is between academia and industry. Academic researchers are comfortable with the ACM Code of Ethics,[d] which requires computing professionals to consistently support the public good. But industrial researchers typically work at for-profit corporations, which often pay lip service[e] to corporate social responsibility but, in practice, typically focus on profit maximization. As Jeff Horwitz wrote in his 2023 book Broken Code,[f] about Facebook: “The chief executive, and his closest lieutenants have chosen to prioritize growth and engagement over any other objective.” With Big Tech consisting of six corporations, each with more than one trillion dollars in market capitalization, their research budgets dwarf governments’ research budgets in computing. Furthermore, industrial researchers have access to large-scale data and computing that academic researchers can only dream of.
    So where does the field go from here? I believe that we must find a way to pick up the baton of social responsibility[g] that was left on the ground when Computer Professionals for Social Responsibility was dissolved in 2013. Social responsibility is significantly more central to computing today than it was then. I would like to see one of our professional societies lead the AI community by acting as convenor and moderator for a community-wide conversation about the future of AI. Such a conversation is badly needed.
    Moshe Y. Vardi ([email protected]) is University Professor and the Karen Ostrum George Distinguished Service Professor in Computational Engineering at Rice University, Houston, TX, USA. He is the former Editor-in-Chief of Communications.

    Footnotes

    [f] Horwitz, J. Broken Code: Inside Facebook and the Fight to Expose Its Harmful Secrets. Doubleday, Nov. 14, 2023.
