Cyberlibertarianism
The Right-Wing Politics of Digital Technology
DAVID GOLUMBIA
UNIVERSITY OF MINNESOTA PRESS
MINNEAPOLIS • LONDON
Copyright 2024 by David Golumbia
At the time when David Golumbia was diagnosed with an aggressive can-
cer in summer 2023, his completed manuscript for Cyberlibertarianism had
been through peer review and approved for publication by the Faculty
Editorial Board of the University of Minnesota Press. David intended to
make minor revisions and to write a brief preface that would bring the
work up to date, but regrettably his treatment did not allow him to do so.
He died on September 14, 2023. The publisher gratefully acknowledges the
help of David’s friend and interlocutor of long standing George Justice,
provost of the University of Tulsa, for advising us on David’s final wishes
and reviewing the copyedited manuscript and proofs.
Foreword
George Justice
Barlow and Weir must have written this song before they jumped too deeply
into libertarian bullshit.) He probably wished, at some level, that he could
have attended and enjoyed Burning Man. David was able to maintain seem-
ingly contradictory elements in his thinking and his practice. He would
never have been so naive as to see a realm of culture dissociated from ideol-
ogy. But he was certainly fascinated by, and perhaps even “loved,” aspects of
culture, whether in music, writing, or film, in which he could identify dan-
gerous, undemocratic strains of thought and through which forces exer-
cised undemocratic control over human lives.
As a matter of fact, some of our first conversations in 1988 were about
the Grateful Dead. It would be fair to say that he was my teacher about the
Grateful Dead over decades of close friendship. I never heard him happier
than when, a few years ago, he greeted me on the phone with the simple
command “Listen” and played, with an aspirational fidelity, the rhythm
guitar part of “Scarlet Begonias,” a song we both loved. In later years he
pursued things like playing the guitar and getting tattoos, neither of which
he did when we met in graduate school. Indeed, in the weeks before he
died David was still directing me to both obscure and famous shows—
available through Archive.org, a robust repository that houses live record-
ings of Dead shows as well as hosting the Wayback Machine, which aims
to ensure that web pages never die. Archive.org was the kind of digital tool
on which David relied for information and entertainment. As skeptical as
Cyberlibertarianism shows him to be about “free” things on the internet
and tech’s self-interested assault on copyright, he was happy to enjoy boot-
legged recordings of the musicians he loved wherever he could find them.
I briefly met David Golumbia in August 1988 at the orientation for new
doctoral students in the Department of English at the University of Penn-
sylvania. Two days later, we ran into each other in a stairwell in what was
then called Bennett Hall, which was the department’s home. A quick hello
turned into one of the most compelling forty-five-minute conversations I
have ever had, ranging from books to music to politics to personal matters.
And pretty much every conversation I had with David from that day until
our last conversation in August 2023, thirty-five years later, covered the same
territory. I wouldn’t be the person I am today without those conversations,
which were endlessly interesting to me.
We both came to Penn planning to study eighteenth-century literature.
David was fascinated with Daniel Defoe, who possessed an intellect and set
drama alike, always finding what was funnier in the comedies than the shows
themselves understood and what was ironic or tragic beyond the ostensible
plots and situations of the dramas.
Although I believe I gained the most from them, our conversations weren’t
one-sided. What I lack in David’s encyclopedic knowledge and analytical
brilliance I might make up for in general competence at navigating the
world, particularly academia. It wasn’t my job to make David any different
from the quixotic polymath who was continually shocked that people
shared neither his intellectual abilities nor his moral and political commit-
ments. That would have involved betrayal of who he was at his core. But
I did take it as a responsibility (and a deep pleasure) to give him a bit of
insight into the world as many (most?) people experience it. Was I helpful?
I hope so. Was he helpful to me? Absolutely.
David knew a lot more than I did. When I got depressed because I
couldn’t understand a book bringing together Lacanian psychoanalysis
and eighteenth-century philosophy, he patiently talked me through it—for
hours. I grew dependent on his interpretations of the theory that we read
together, and I took for granted his explanations of analytic philosophy and
contemporary linguistics.
We talked quite a bit about computers back in the day when we both
came to Penn lugging our Macintosh Plus desktop machines. He explained
things to me along the way, from getting my first modem and using Gopher
to find information (and look at early dating sites), to deeper forms of
information as the World Wide Web got up and running. He had distinc-
tive handwriting that I am convinced achieved some of its shape and clar-
ity through an imitation of Apple’s Geneva font (I was more a New York
kind of guy). He was an early practitioner of computer art, which he pur-
sued seriously along with the L=A=N=G=U=A=G=E poetry he wrote (and
published) all while building his career in teaching and literary criticism.
David didn’t find academic employment after finishing his studies.
Instead, he moved to New York and worked on Wall Street as a software
developer while his partner (and then spouse) Suzanne Daly pursued her
PhD in the English department at Columbia University. His years work-
ing in software gave him additional knowledge about how computers had
become indispensable, in sometimes uncomfortable ways, to culture and
commerce. He began to develop a knowledgeable skepticism about what
computers could and should (as well as shouldn’t) do. He understood and
person. But in the end (and he did understand this), colleagues who were
committed to mainstream thinking about the wonders of technology and
the future of the digital humanities fundamentally and forcefully rejected his
analysis of computing and culture. Without their really knowing it, these
colleagues were enthralled—as so many political progressives are—with the
miracle of tech. Even if such people abhor Ayn Rand, they subscribe, im-
plicitly and sometimes even explicitly, to cyberlibertarianism, the approach
to technology that David explains, and rejects, in the pages that follow.
Over the course of his career David was equally frustrated with his fal-
lible colleagues and his tyrannical universities. At the same time, he enjoyed
being in academic communities and loved the full life of research, teach-
ing, and service at the heart of a faculty career. He was hired at Virginia
Commonwealth University, where he earned tenure and became a beloved
teacher of both graduate students and undergraduates. While he was never
an easy colleague, Catherine Ingrassia, his chair and then dean, gave him
enough space to read, write, and think, while keeping him engaged with
the department in meaningful ways. The symposium held in spring 2024
by colleagues and former students of David’s at VCU was a revelation to
me—not of David’s intellect and care for others, but of how he made a
profound difference in the lives of his students.
It was at VCU that David developed the big ideas that this book ex-
presses so powerfully. In 2016 he published with the University of Minne-
sota Press an installment in its Forerunners series, The Politics of Bitcoin:
Software as Right-Wing Extremism. Here David made a powerful inter-
vention in the world, combining his deep academic and practical knowl-
edge with a sharply pointed argument about not just the uses of but also
the nature of cryptocurrencies. This short book continues to be discussed
widely. Cheered by some, treated with contempt by others, the work allowed
David to develop the voice that carried Cyberlibertarianism: knowledge-
able, engaged, sometimes deeply polemical. The Politics of Bitcoin was not
a mere academic exercise, nor is Cyberlibertarianism, despite its deeply seri-
ous research and argumentation.
As cynical as David could be about colleagues and institutions of higher
learning, he was a true believer in the potential of academic culture and
the idea of the university. He valued friends outside of academic life, but
he found sustenance within the walls of academia, misunderstood as he
was. Despite loathing the world of the digital humanities, he continued to
editor Paul Bové, a prominent intellectual who finally got David—not be-
cause their work was identical or even similar, but because Bové appreciated
who David was and how David thought. David helped build b2 Online,
and he cherished his relationships with the entire editorial collective at
boundary 2, especially Hortense Spillers. He would also have wanted me
to mention his beloved friend of many years, Lisa Alspector, and his col-
laborator and great friend in more recent years, Chris Gilliard. These are
only a few of the people. David was known to so many, as reflected in the fact
that my post on X announcing his death has received, at this writing, more
than 284,000 views: by far the most well-read (and likely well-appreciated)
piece of writing I will ever produce.
I end with another mention of the Grateful Dead’s “Scarlet Begonias.”
David and I talked about this couplet when I was sick with stage 3 rectal
cancer from 2015 to 2016, and we revisited it in the weeks before his death:
For thirty-five years, David Golumbia showed me the light about so many
things about our world, its culture, and its politics. Here, in Cyberliber-
tarianism, his magnum opus, he shares the light, writing with complexity
and power about things he understood better than nearly anyone else: how
software works; how language and culture work; how human beings deceive
themselves, but more brutally deceive others, sometimes for profit and other
forms of exploitation, at other times out of zealous adherence to delusional
principles. There’s a bit of both of those things in the right-wing ideology
he associated with “computationalism” throughout his career and here ex-
plains in greater detail and deeper pockets of thought. He lived for his aca-
demic work, and I know he himself would have loved to contribute to the
conversation, and the controversy, this book is sure to spark.
Tulsa, Oklahoma
April 6, 2024
Preface
The Critique of Cyberlibertarianism
From free and open-source software to free culture and “internet freedom”;
from net neutrality to “censorship” to demands for unbreakable encryption
and absolute anonymity: these and many other terms serve as rallying cries
for digital activists from across the political spectrum. To raise questions
about what the terms might actually mean, let alone about the arguments
for and against the causes they seem to represent, is to stand in the
way of progress. To oppose the favored position is to be idiotic, a Luddite,
or morally inferior. Freedom and justice lie only on the side of the cause
that advocates promote.
These claims should give those of us outside the hothouse of digital ad-
vocacy pause. Given the urgency with which digital enthusiasts tell us that
projects like open-source software and internet freedom are vital political
causes, it is odd that we find such a lack of detailed discussion of positions
supporting and opposing them, or demonstration of the ways they fit into
the rest of our politics. It is difficult, and maybe even impossible, to find
complete descriptions of the various nuanced positions and their pros and
cons in digital advocacy, as there are for other political issues. Instead, one
finds work after work recommending just one position, or at best slight
variations of that one position. These works often seem as if they are copy-
pasted from each other, with very little interest in articulating the under-
lying and foundational questions that the topics seem to entail. Free soft-
ware pioneer Richard Stallman’s famous assertion that in “free software,”
“free” means “free as in free speech, not as in free beer” (2002, 43), is almost
always taken to be clear and sufficient. Yet even cursory reflection will show
that what the free software movement recommends has almost nothing in
common with what “free” means in “free speech.” Further, to the limited
degree that software is analogous to speech, it automatically has the same
protections as do other forms of speech. It is nearly impossible to find free
software advocates reflecting on this or providing anything like the detailed
analyses that would be required to connect free speech to free software.
While these topics are often discussed and promoted as political, their
relationship to non-digital political issues and orientations is at best opaque.
“Privacy,” “rights,” “free,” and “freedom” are frequently repeated in the names
given to these issues. However, it is often unclear how we should compare
them to other non- or pre-digital uses of these terms. When encryption
advocates demand that governments be unable to surveil or retrospectively
examine electronic communications even with a legal warrant because of
what they call “privacy,” it is difficult to find any digital enthusiast re-
flecting on the fact that “privacy” has to mean something different in that
context than it does when lawyers, judges, and legislators use the term.
This is because the term has rarely, if ever, been used in any prior demo-
cratic context to mean that law enforcement and legislators should never
be able to serve properly executed warrants. It is not a matter of whether
the encryption promoters were providing an accurate analysis, but rather
that the fundamental questions regarding privacy, which have been debated
in the United States and every other democracy for hundreds of years, are
being ignored in favor of a dogma that prevents such discourse. Despite the
fact that quite a few lawyers, judges, legal scholars, and democracies other
than the United States take a different approach to this complex question,
we hear repeatedly that everyone should be in favor of absolute freedom of
speech. We believe we understand the principle of “freedom of speech,”
but when we are told that everyone should be in favor of more or less abso-
lute “internet freedom” because it is “freedom of speech for the internet,”
or some similar formulation, these deep and abiding questions suddenly
fall away.
“Cyberlibertarianism” is a term (introduced in Winner 1997) that scholars
and journalists have developed to highlight and understand this phenom-
enon. Although useful, the term has potentially misleading connotations.
The incorporation of the word “libertarian” may suggest that it is meant
to point at the widespread presence of overt libertarian politics in digital
culture. It does mean that, to an extent, but also points at a wider phenom-
enon that may not be immediately obvious from the word itself.
CYBERLIBERTARIANISM IN
THEORY AND PRACTICE
CHAPTER 1
The Dogma of Cyberlibertarianism

Langdon Winner, one of the world’s leading philosophers of tech-
nology, coined the term “cyberlibertarianism” in his 1997 article
“Cyberlibertarian Myths and the Prospects for Community.”
The ideas he expressed there were also developing elsewhere around the same
time, including in a much-cited essay called “The Californian Ideology” by
media studies scholars Richard Barbrook and Andy Cameron (1995, 1996).
While scholars and other commentators use the label “Californian Ideol-
ogy” as a near synonym for cyberlibertarianism, the first term can suggest
that the system of thought only operates in California (or even only in the
Bay Area), and thus risks confusion. A third work, Cyberselfish (2000) by
Paulina Borsook, is a semi-ethnographic account by a journalist who was
once a somewhat enthusiastic supporter of the digital revolution as a writer
for the central digital utopian publication, Wired. In that book Borsook
uses the term “technolibertarianism” in a very similar way to Winner’s and
Barbrook and Cameron’s terms, but her term lacks the specifically digital
connotations of the cyber-prefix. The term cyberlibertarianism continues to
turn up with some regularity in scholarship (e.g., Chenou 2014; Dahlberg
2010), journalism, and social media. Other critical work, even when it does
not use these terms, likewise shows how deeply right-wing ideas are embed-
ded in digital evangelism (Morozov 2013d; Schradie 2019).
Winner describes cyberlibertarianism as “a collection of ideas that links
ecstatic enthusiasm for electronically mediated forms of living with radical,
right-wing libertarian ideas about the proper definition of freedom, social
life, economics, and politics” (1997, 14). The two prongs of this statement
are crucial: on the one hand, an ecstatic enthusiasm for digital technology
Winner writes that these views about digital–technological progress are too
often characterized by a “wholehearted embrace of technological determin-
ism. This is not the generalized determinism of earlier writings on technol-
ogy and culture, but one specifically tailored to the arrival of the electronic
technologies” (1997, 14). That technological determinism, typically framed
as apolitical, insists that “the experiential realm of digital devices and net-
worked computing offers endless opportunities for achieving wealth, power,
and sensual pleasure. Because inherited structures of social, political, and
economic organization pose barriers to the exercise of personal power and
self-realization, they simply must be removed” (15).
It is this attitude toward the “inherited structures of social, political, and
economic organization,” especially the construal of them as “barriers,” that
characterizes cyberlibertarianism as a political force. This kind of language
pervades the writings of not just overt right-wing figures like Eric Raymond,
Paul Graham, and Peter Thiel but also those with nominally left-of-center
political leanings like Clay Shirky and Yochai Benkler. These latter writers
often take as given right-wing definitions of social freedom, the role of
government, and the place of institutions, while occasionally paying what
is mostly lip service to the political goals of non-libertarians. By doing so,
they actively promote the idea that digital technology is irresistibly destroy-
ing our current social structures, that we have no ability to preserve them, and
that these explosions are largely, if incoherently, good for democracy. Therefore,
they recommend that we let technology take the lead: social institutions
and governments can try to catch up, ponder, and perhaps adjust to what
technology has done only afterward, if social institutions and governments
continue to exist at all.
Many of the best investigations have studied how digital culture has
tried to solve social problems, especially Fred Turner’s From Counterculture
to Cyberculture (2006a) and Adam Curtis’s documentaries, especially his
All Watched Over by Machines of Loving Grace (2011) and The Trap: What
Happened to Our Dream of Freedom (2007). In these works, we see people
who believe they are working toward “social justice” or a vaguely 1960s-
inspired flower power vision of world improvement, but who become
entranced by the “possibilities” of digital technology. They transfer their
social beliefs into technological ones without careful reflection on whether
these goals are compatible. One need look no further for an example of
this pattern than to Apple cofounder Steve Jobs, whose pursuit of “insanely
great” technology via the empowerment of individuals was marked by coun-
terculture radicalism. Jobs’s crowning achievement, the iPhone, might even
be argued to escape certain typical limitations of commercial products and
have some kind of socially beneficial effect. However, it is hard to see how
these effects are much more than marginal. It is even harder to see why dis-
cussions of the prosocial features of the iPhone should preclude our talking
about aspects of the product that may be antisocial or antidemocratic.
It is a huge leap from the social justice pursuits associated with the civil
rights campaigns of the 1960s to the design, manufacturing, and distribution
of one of the most successful commercial products of the twenty-first cen-
tury. The iPhone primarily serves the interests of Apple’s Board of Directors,
senior executives, and shareholders. Further, the intense self-involvement
and lack of engagement with the immediate social world, often thought to
be predictable by-products of consumer technology, are far removed from
the egalitarian values of the 1960s. From Jobs’s perspective, the radical impe-
tus of the 1960s to help the world by turning away from self and commerce,
and toward other people and their needs, appears to have been warped into
something that looks almost like its polar opposite. The situation would be
less remarkable if there were not such a tenacious insistence on the conti-
nuity between the two projects, despite the obvious lack of fit between them.
The confluence of 1960s social activism and digital utopianism resulted
in a profound shift whose consequences remain under-examined to this
day. Built on top of the civil rights and antiwar movements, much of 1960s
activism was predicated on the notion that democratic government is and
should be the primary guarantor of social equality, in terms of both the
rights of minorities (which pure majoritarian democracy will inherently
fail to protect) and the attempts to realize through government certain
principles found in the U.S. Constitution and Bill of Rights, as might best
be understood via the Supreme Court’s actions in the 1954 Brown v. Board
of Education decision. This focus on equality has been a bedrock principle
of left politics, ranging from the moderate insistence on voting rights and
resist efforts not just to describe these foundations, but to provide any foun-
dational support for the doctrines at all.
Cyberlibertarianism is especially dangerous because, like other libertar-
ian anti-philosophies, it speaks in the name of an individual or “negative”
freedom that takes advantage of rhetoric based in values like democracy,
rights, and equality while at the same time agitating strongly against all of
the structures and institutions that democracies build to protect those val-
ues. For example, the term “digital rights” (see chapter 2) is one with wide
currency in cyberlibertarian discourse. To the untrained ear, it may sound
like it refers to advocacy for human and civil rights in digital contexts, and
sometimes it does mean this. In its initial deployments, however, the phrase
meant something entirely different: the idea that copyright and intellectual
property should have different meanings on computers than off of them.
It even carried the connotation that the “right” to obtain works of intel-
lectual property without compensation to their creators was in some way
aligned with human rights. The poster child for this movement was always
Wikipedia, which has been portrayed as an unalloyed human good. How-
ever, the nonprofit organization that runs the site has consistently disparaged
the idea that creators, especially academics, should have property interests
in their work. Even academic treatments of “digital rights” (Herman 2013;
Postigo 2012; see also chapter 2) offered little space to the view that creator
rights might have any grounding in or relationship to human rights. To the
contrary, the ambiguity in the word “rights” in this context consistently
blurs the line between its narrow and legalistic meaning in terms of “own-
ership” of intellectual property and its wider sense in “human rights.”
Wikipedia and other nonprofits are often portrayed as the main bene-
ficiaries of the anti-copyright movement, prioritizing the user interests of
intellectual property (IP) consumers over the ability of creators to earn a
living from their hard work. Pop music artists from Metallica to Dr. Dre
who attempt to assert their rights to be compensated for work are often
ridiculed in the name of a nebulous idea of the “rights” of listeners. Mean-
while, writers like Cory Doctorow, Lawrence Lessig, and others are cele-
brated for their thinly reasoned arguments about the selfishness of creators
who want to be paid for their work.
It would be wrong to suggest that the legions of Reddit, Y Combina-
tor, and X (formerly Twitter) writers attacking the interests of creators are
willing participants in a nefarious plot to direct profits toward an unseen
commercial entity, let alone one in which those individuals have a vested
is with the philosophical libertarianism in high tech. I can’t count the num-
ber of times I’ve gotten into a discussion with a thoughtful sweet high tech
guy about something where he will snort disdainfully about how he’s not a
libertarian (meaning, he’s not like those crazy people over there) and then
will come right out with a classic libertarian statement about the el stewpido
government or the wonders of market disciplines or whatever. (15)
Governments of the Industrial World, you weary giants of flesh and steel, I
come from Cyberspace, the new home of Mind. On behalf of the future, I ask
you of the past to leave us alone. You are not welcome among us. You have
no sovereignty where we gather.
We have no elected government, nor are we likely to have one, so I address
you with no greater authority than that with which liberty itself always
speaks. (28)
One notes the poetic, exultant register of these statements, which, despite
being peppered with terms and ideas familiar in political libertarianism,
nonetheless eclipses the more focused statements of Hayek or Nozick. The
Declaration contains many statements whose rhetoric eclipses the practical
sense that can be made of it, such as the idea that governments are “weary
giants of flesh and steel,” while cyberspace is made of some other, ineffa-
ble material. The florid use of metaphors and inexact characterizations of
“governments” as opposed to the “us” who make up “cyberspace” helps to
obscure the fact that Barlow’s text, as media studies scholar Aimée Hope
Morrison has written in a careful analysis, offers “an idealized but otherwise
easily recognizable online version of liberal individualism” that can only be
understood as something else through a “certain rhetorical violence, evi-
denced in the text by the wild proliferation of loaded metaphors and some
tricky pronoun usage” (2009, 61). This aspect of “raver” cyberlibertarianism
becomes critical for the ideology’s success, as its very haziness and rhetori-
cal power serves as a rallying cry especially for those who prefer to take
their policy prescriptions in sweeping, programmatic form.
At the other end of the spectrum, Borsook posits a group she calls “gild-
ers,” named after the once prominent digital culture figure George Gilder,
coauthor of another founding cyberlibertarian document, “Cyberspace and
the American Dream: A Magna Carta for the Knowledge Age” (Dyson et al.
1994), which Langdon Winner also analyzed in his foundational cyberliber-
tarianism essay. Gilder, the “Tory leader of the Wired technolibertarian
revolution,” Borsook writes, is a “former Republican Party speechwriter and is
a family-values Cotton Mather” (2000, 140–141) who shared with Wired
founder Louis Rossetto an anti-feminist, arguably misogynist perspective
on women’s rights long before he found computers. (Borsook points out
that he is “singled out for special vivisection in Susan Faludi’s Backlash.”) An
ardent supporter of supply-side economics during the Reagan era, born-
again Christian, and proponent of the creationist pseudoscientific theory
of intelligent design, Gilder might be thought an odd figurehead for a move-
ment that purports to be based in scientific and technological progress.
Nevertheless, the profoundly right-wing nature of Gilder’s overall belief
system (in addition to Borsook, Gerard 2022 provides a thorough analysis
of Gilder’s more recent thought) turns out to be critical for understanding
how cyberlibertarianism functions.
While many cyberlibertarians earn their livings from disseminating that
dogma—including Jeff Jarvis, Clay Shirky, and Cory Doctorow, who make
enormous speaking and consulting fees for spreading the gospel of digi-
tal disruption—Gilder deserves Borsook’s special categorization for the
ways his economic and political theories about digital transformation play
transparently and even deceitfully into his personal fortunes. At just the
moment he was being prominently featured repeatedly in Wired, Gilder was
writing and publishing one of the leading technology-focused investment
newsletters in the United States, the Gilder Technology Report. Investment
newsletters exist in a contradictory segment of investment advice where it
can be hard to reconcile the adviser’s advertised expertise in investing with
their preference for selling that advice rather than following it themselves.
Such publications are often said to survive on the hype they generate to sell
copies. Gilder was selling his advice at top dollar while at the same time
hyping the radical success of technology firms as an ostensibly objective
“forecaster.” Of course, that top dollar was mostly available due to what is
now called the “dot-com bubble” of the late 1990s, whose spectacular pop-
ping in 2000–2001 took Gilder along with it.
As reads a portrait of Gilder published in 2002 in Wired itself, after he
and Rossetto, his strongest supporter, were no longer affiliated with the
magazine:
For a short stretch during the late 1990s, Gilder’s newsletter made him a
very wealthy man. Anyone taking a cursory look at it might wonder why.
Every issue is densely freighted with talk of lambdas, petahertz, and erbium-
doped fiber amplifiers. The eighth and final page, however, explains how
so geeky a publication attained, at its zenith, an annual subscription base of
$20 million. It’s on the back page that Gilder lists the stocks he has dubbed
“telecosmic”—companies that have most faithfully and fully embraced the
“ascendant” telecom technologies in which he believes so wholly and deeply.
“For a few years in row there, I was the best stock picker in the world,”
Gilder says ruefully. “But last year you could say”—here, for emphasis, he
repeats each word as a sentence unto itself—“I. Was. The. Worst.” Most of
the companies listed have lost at least 90 percent of their value over the past
two years, if they’re even in business anymore. (Rivlin 2002)
As his use of the nonce term “telecosmic”—also the title of one of Gilder’s
mid-1990s books—suggests, Gilder’s stock picking was integral to his
techno-utopianism. Many who follow “gilder” cyberlibertarianism share
his crypto-religious view of technological progress—which must be recon-
ciled with his anti-scientific beliefs in creationism, evangelical Christianity,
and deep biological differences between the sexes—propelling humanity
toward a transcendent climax. This end point was referred to as the “New
Economy” in the 1990s, but it actually represented a self-interested belief
in the overcoming of the forces typically associated with the word “econ-
omy.” As Rivlin writes, the theory of the New Economy “that created its
own set of rules represented no great leap for this man who was inclined to
see history as the determined march from savage to enlightened being.”
Gilder’s peculiarities are themselves resonant for much of cyberliber-
tarian culture. But Borsook draws attention to him for his philosophical
cyberlibertarianism, which may be at odds with the “raver” portrait more
familiar to the general public. “Ravers,” as Borsook sees it, are likely to
attend the Burning Man festival, admire Timothy Leary, and spend a great
deal of time fantasizing about sex, drugs, music, and video games when they
aren’t thinking about coding. Gilders, on the other hand, may well enjoy
the sybaritic pleasures but tend to spend more time focused on how digi-
tal technology can directly increase their material wealth. They are social
traditionalists (for the most part, perhaps with the exception of those
places where such attitudes might interfere with their own enrichment)
cyberlibertarians may support women’s rights and gay rights and vote
Democratic. However, their failure to see the political ramifications of
many of their other beliefs only underscores the importance of identifying
and enumerating them.
There is one more concrete reason to talk about cyberlibertarianism as
a formation related to political libertarianism. Except in the hands of its
most sophisticated promoters, political libertarianism is not a clear set of
principles or a coherent doctrine. This sets it apart from
all major forms of democratic theory, as well as from the theories of mon-
archy and feudalism that democracy replaced. In those theories, relations
of power and authority, though noxious to modern sensibilities, were at
least clear. Libertarianism has often been characterized as something like an
excuse for power: a set of myths propagated by those who have accumulated
a great deal of power to themselves, and now wish to pull up the bridge
behind them (see Haworth 1994). As articulated by its most fervent spokes-
people, from Ayn Rand to Murray Rothbard to Hans-Hermann Hoppe,
libertarianism depends on both prevarication over terms and false back-
ground presumptions about the nature of society.
extreme right wing came to dominate to the degree that what had once
been the mainstream now became exiled almost entirely. It is no longer
clear whether what we once called “conservatism” has much in common
with “right-wing politics” at all.
Complicating these problems is one occasionally remarked on by politi-
cal theorists but hard to ignore: the political theories of the Enlighten-
ment, on which much of our current political understanding is based, were
themselves ambiguous in specific ways, often deliberately, and these
ambiguities have become lost to us over time. This is most especially true
when we consider the idea of “liberalism,” an overarching term for
post-Enlightenment political philos-
ophy that does not endorse one of the specific dissenting forms of politics
critical of it, including Marxism and fascism. In the late twentieth-century
United States, “liberal” became a generic term for left-wing politics that
stood back from socialism or communism. At the same time, “economic
liberalism” was a common phrase in international relations, referring to the
expansion of free market doctrine and the relaxing of regulation. Although
not exclusively, Republican administrations were more likely to advocate
for this than their Democratic counterparts. In this sense “liberal” seems to
point in two contradictory directions: toward the use of governmental power
on the one hand, and against it on the other hand. Those associated with
the Neoliberal Thought Collective and the Mont Pelerin Society (Mirowski
2009, 2013), the political actors furthest to the right, could berate liberal-
ism as the very name for all that they thought was worst in society while
describing themselves as “classical liberals.” They often pointed at the same
figures (especially John Stuart Mill) from whom those on the left might also
take inspiration for specific purposes.
As in the case of trade liberalization, economic liberalism seems to point
almost exclusively at capitalism as its goal, whereas political liberalism is far
more interested in political freedoms and rights. To some of the founding
theorists of liberalism like Adam Smith and John Locke, it might appear that
capitalism and political liberalism go hand in hand, although read closely
even these two thinkers express cautions about this equation. As Ishay Landa
puts it:
Liberalism was the socioeconomic doctrine with which the ascending Euro-
pean bourgeoisie of the late eighteenth and early nineteenth century chal-
lenged the nobility. It began optimistically, a “progressive” movement which
as more specialized points of dogma than literal terms of reference and thus
deserve to be read in scare quotes:
• Everything in the world is being changed utterly, almost always for the
better, by the advent of digital technology.
• Anything that existed prior to the digital needs total or near-total
transformation.
• What existed prior to the digital was closed and private; digital phenomena
are, in contrast, open and public.
• Digital technology democratizes; the internet is fundamentally
democratizing.
• Everything can be or should be open.
• Information wants to be free.
• Despite the ubiquitous transformations created by digital technology,
whatever is essential or important to any given phenomenon will remain
part of the transformed thing.
• Creators should have few or no ownership rights over the materials they
produce.
• The internet is democratizing culture in a way that is just as profound as
the impact of the printing press.
• Networked and peer-to-peer connections are distinctive marks of the
digital.
• Networked and peer-to-peer connections constitute a fundamental
transformation of human communications and social organization.
• People who resist any technological change or request any other means of
regulating change than the one just mentioned do so only out of fear,
jealousy, or lack of understanding.
• Widespread adoption and use of a technology is prima facie proof that the
technology is beneficial.
and democracy are considered unworthy of study, let alone that one might
want to study these subjects prior to actively engaging in attempting to
change the political and governmental structures under which we live.
Instead, “civic hackers” and others push forward, “doing” instead of “talk-
ing.” They are sure that their unexamined ideas about core political concepts
are so solid as not even to admit of analysis—despite the fact that most of
us familiar with those concepts find that even on close examination their
exact meanings and consequences are rarely transparent. Inherent in the
“civic hacking” project generally and the “Code for America” project spe-
cifically is a shiny, well-designed optimism for “citizen engagement” with
the levers and pulleys of democracy. These initiatives neither encourage
nor require participants to familiarize themselves with the actual American
system of government. Such engagement would impede the direct action
encouraged by such projects. It is unsurprising that these movements, as
discussed below, fail to achieve authentic political engagement, instead often
disparaging the project of government and seeking to monetize public re-
sources for private interests. This is usually done with the explicit proviso
that such monetization should not be reflected back on government or the
citizens responsible for the resource in the first place.
These projects are highly effective in steering public and civic impulses
for private benefit through implicitly belittling political expertise. However,
this implicit belittlement is overshadowed by the typically explicit disdain
expressed by digital enthusiast communities for the social, political, philo-
sophical, economic, and military expertise on which our democratic institu-
tions rely. The self-identified “cypherpunks” routinely wax authoritative
on matters on which they have no background whatsoever, such as diplo-
macy, intelligence, military operations, and state security. Such lack of con-
text might be excusable were they autodidacts who engaged deeply with
the wide range of materials and personal expertise available on these topics;
instead, they typically dismiss such knowledge with a variety of business-
derived, conspiratorial rhetorical gestures involving “gatekeepers,” “incum-
bents,” and the like.
• Government is illegitimate.
• Democracy is a fraud.
• Companies and the market are better guarantors of human rights than are
democratic governments.
• Equality of outcomes is an entirely illegitimate goal to pursue.
• Power in society is largely distributed through merit.
• Technology and commercial power are essentially outside of politics.
• The world as we have it is politically “neutral”; “bias” is attributable to
“bad actors” whose influence can be uprooted via more neutrality.
• Government is inherently far more destructive—and even “evil”—than
commercial or financial power can ever be.
• The “mainstream media” cannot be trusted at all.
• The real, exposed existence of secret government programs means that
all information associated with “government” can be discounted as
propaganda.
backers of the movement. More diffuse, but just as poisonous, are the mul-
tiple blatant contradictions among the Tea Party and its allied extreme right-
wing political factions. The ideas promoted by the movement—such as
the health insurance legislation known as “Obamacare,” the theory of the
“unitary executive,” the constitutional philosophy of “judicial restraint,”
the notion that it is treasonous to question the judgment of a president
during times of war—can be turned on their heads or altogether ignored
when they no longer serve.
Cyberlibertarianism is different from the Tea Party and climate denial
in that it is not directly and specifically articulated by right-wing funding
bodies like the Koch brothers, Richard Mellon Scaife, Karl Rove, and others.
It is also not primarily concerned with the nuts and bolts of legislation and
executive policy (although it is more interested in these than may seem
obvious at first). Where climate denial and the Tea Party require a great
deal of priming to function, cyberlibertarianism in many ways runs on its
own power. Cyberlibertarianism works by processes of identification and
attraction, in part because it does not appear to be constituted as a force
contrary to views which have widespread social support.
Cyberlibertarians (with few exceptions) do not refer to themselves by
that name; there is (as yet) no political movement that calls itself cyber-
libertarianism. Cyberlibertarians do not have to advertise their wares with
great specificity. And unlike the case with the Tea Party and climate denial,
the borders of cyberlibertarianism do not have to be policed. Nevertheless,
for all its viral amorphousness, cyberlibertarianism is widespread, distinctly
powerful, unified despite its deep incoherence, and committed to its core
(contradictory) principles. However, like the Tea Party and climate denial,
when it comes down to money, cyberlibertarians almost always end up
on the side of those with the most access to and most investment in tools
of power.
There is one notable exception to the lack of self-identified cyberliber-
tarians: a pamphlet produced by the Technology Liberation Front, whose
name itself points at the movement’s faux-populist politics. The organization
is a project run by Adam Thierer, an affiliate and executive at several of the
furthest-right lobbies and academic centers in the United States, including
the Heritage Foundation, the Cato Institute, the Federalist Society, and
the Mercatus Center at George Mason University. Along with another right-
wing technology activist, Berin Szóka, Thierer wrote “Cyber-Libertarianism:
The Case for Real Internet Freedom,” which grounds cyberlibertarianism
add fuel to the fire they do care about, which is the one that views govern-
ment as “the problem.” This view doesn’t oppose government in toto, but
instead promotes the virtual ownership of government by for-profit, “free
market” interests. These entities engage in large-scale projects of wealth re-
distribution and social engineering, of which radical inequality is not merely
an inadvertent product but the implicit and sometimes explicit goal.
Again, it is less the obvious, interested advocacy of right-wing politics
by openly libertarian figures like David Friedman, Peter Thiel, or Eric Ray-
mond that is of primary concern here. Instead, the same pro-business,
anti-government perspectives inherent in discourse of “the digital” are sub-
tly embedded within broader right-wing ideology. In this respect, cyber-
libertarianism is best understood as a magician’s trick. The magician directs
the audience to look at one hand, which usually holds a flashy object or
performs some kind of dramatic action. Meanwhile, the magician carries
out the mechanics of the trick with their other hand, while the audience
is distracted. In cyberlibertarianism, the hand holding the flashy object is
our talk about digital technology; the other hand, to which nobody is pay-
ing attention, is doing the work.
This analogy is a bit misleading, though, because it implies a level of
willfulness on the part of those expressing cyberlibertarian beliefs. Such
individuals may not ever understand the nature of what they are doing.
They are focused on the digital object, program, app, or event and there-
fore do not see that it is not really doing the work it appears to be doing.
They rarely notice that how they refer to core values like “freedom,” “equal-
ity,” and “democracy” may no longer resemble the way they used those terms
before.
The analogy is also misleading because it suggests there is somebody
who is consciously guiding the whole presentation. That is, neither Jeff
Jarvis nor Clay Shirky acknowledges the ways they substitute discussion
of digital technologies for discussions of the core values they seem to be
talking about; and neither one of them receives direct orders from Karl
Rove, the Koch brothers, Peter Thiel, or Eric Raymond to put one over on
their audiences (although, to be fair, both Jarvis and Shirky do receive a
fair amount of money from the very corporate sources that benefit from
cyberlibertarian practices).
This is what makes cyberlibertarianism more dangerous than overt lib-
ertarianism informed by digital technologies. At least when Peter Thiel or
Elon Musk speaks, audiences can be clear about what politics they endorse
and what policies might emerge from them. More careful attention needs to be
paid to writers like Jarvis, Shirky, Doctorow, and other promoters of “inter-
net freedom,” “digital liberation,” “liberation technologies,” and “Twitter
revolutions.” These voices tend to allow descriptions of technology to over-
whelm and displace discussions of the politics they appear to be describing.
This lack of close attention to politics results in either a lack of resistance
or an explicit acceptance of identifiably neoliberal prescriptions for govern-
ments, society, and the economy.
The neoliberal agenda is no secret. Critics from the political left have
been exploring this idea for decades, particularly in the 1990s and since
2000. It is also evident in the writings and speeches of far-right political
thinkers and politicians. This idea has its roots in the work of Friedrich
Hayek and Ludwig von Mises, who participated in the Mont Pelerin Soci-
ety (Mirowski 2009). It resurged in the philosophy of Robert Nozick in
the early 1970s and the practices of Chicago School economists like Milton
Friedman beginning in Pinochet’s Chile in 1973. The idea was most nota-
bly put into practice in the West by Ronald Reagan in the United States
and Margaret Thatcher in the United Kingdom in the 1980s. Both leaders
steered their countries in an economic direction that had been considered
too far to the right for any reasonable democracy to follow just a decade
earlier. The profound change in political outlook is exemplified by the tran-
sition from Abraham Lincoln’s 1863 Gettysburg Address, which describes
the American system of government as “of the people, by the people, for
the people”—“by,” notably, entailing that the U.S. system of representative
democracy means that in a legal and political sense government is the peo-
ple—to Ronald Reagan’s first inaugural address in 1981, where he famously
stated that “government is the problem.” At the time it was little noted—
because it was hard to believe it could be true—that Reagan represented a
doctrine according to which “government,” meaning “the people,” could be
“the problem” in a democracy, and that this formulation could be accept-
able from an elected president, let alone be seen as a welcome articulation
of political principle.
Cyberlibertarianism functions as a less explicit outer shell of neoliberal-
ism. Unlike political libertarianism, it has few overt proponents. Its sup-
porters tend instead to rally around specific terms like “free” and “open,”
“internet freedom,” and “copyright monopolies.” Their concerns appear
neither nationalistic nor especially populist; they sometimes embrace a far
more progressive social agenda than do libertarians. Yet on the matters
about which adherents are most likely to have significant political impact,
cyberlibertarian doctrine is identical with those points of contact between
neoliberalism and libertarianism: Government is the problem. It is in-
competent and illegitimate. Attempts to ensure “equality” other than via
the invisible hand of the market are automatically the same thing as Soviet-
style social planning (a theme directly out of Hayek’s 1944 work). Govern-
ment should stay out of the way of “digital innovation,” whether that means
enforcing the copyrights of legitimate creators or interfering with the “dis-
ruptive innovations” of “sharing economy” startups like Lyft, Airbnb, and
Uber or the massive rewriting of world legal frameworks undertaken by
Google. Public goods are there for the taking and are best seen as resources
for profitable enterprises to exploit. Law itself is largely illegitimate, and
“digital natives” could do a better job—thus producing the bizarre, anar-
chic paradox that the world would be more lawful if there were no rules
to follow.
As with libertarians, the powerful neoliberal interests aligned with the
most concentrated and unequal exponents of capital do not actually care
about the specifics of cyberlibertarianism. But they care a great deal about
its main pillars, which “just happen” to be identical with their program of
“structural adjustment” and forced privatization of public goods. Finally,
but just as important, labor, in particular organized labor, is the enemy of
freedom—because “freedom” means the freedom of capitalists to concen-
trate and maximize profits. Even basic forms of labor organization are
viewed as redistributionist class warfare, while neoliberal “shock doctrine”
techniques are seen as promoting freedom, despite extracting public re-
sources and wages from the lower classes to enrich those who already have
much more than the rest of us.
A notable imbalance in digital discourse is the conflict between gov-
ernmental and private entities regarding abrogations of rights, particularly
the right to privacy. For employees of corporations, in no small part due to
digital technology, nearly all the putative benefits of cyberlibertarian utopi-
anism are overturned. As a general rule, employees have few rights to pri-
vacy or free expression; are subject to complete, invasive surveillance; have
no ownership over their work products; and, thanks to pressure against
labor laws, often have little legal recourse for mistreatment. Sometimes
employees even lose the right to sell their skills in the marketplace when
separated from their employer. They have little or no input over the capi-
talist owners of the enterprise offshoring significant parts of the company’s
labor needs to cut costs, as is famously the case with virtually all the resource
procurement and manufacturing associated with digital technologies. Yet
much of the promotion of “digital rights” ignores most of these issues,
while magnifying the issue of “government surveillance” as the signal evil
of our times.
been considered during the later stages of bill drafting. Wikipedia’s threat
that the bill’s passage would have led to its blackout was dishonest. Such an
outcome was not the intended effect of the bill, and had it been likely, the legis-
lation would have been redrafted to fix the problem. Several organizations,
both commercial and noncommercial, threatened to or did participate in
similar blackouts. Among these groups and many individual activists, the
consensus was that SOPA/PIPA were part of a desire by the evil “govern-
ment” to “censor” the internet.
These “hyperbolic mistruths” (Sherman 2012) misrepresented not just
the contents of the law but the intentions of the legislators. They made
it impossible to ask what should have been important questions: “When
the police close down a store fencing stolen goods, it isn’t censorship, but
when those stolen goods are fenced online, it is?” (Sherman 2012). They
repeatedly referred to something conspiratorially called “copyright monop-
oly” and the “copyright industry,” imputing illicit motives to copyright
holders, as if their pecuniary interests in the law were not abundantly clear
on the surface. Cui bono? is always a central question to ask with any law
or regulation. Yet the vehemence of their rhetoric and performative activ-
ism shielded the protesters from being asked who benefits from
the relatively easy use of copyrighted material without paying royalties.
Who benefits from the sale of counterfeited goods?
These questions were almost off the table, even though their answers
happened to align exactly with the interests of those stoking the protests.
This is easy enough to see in the commercial sector. At the height of the
controversy, for example, Google’s public policy director Bob Boorstin said,
“YouTube would just go dark immediately” (Pepitone 2012). Although Sena-
tor Patrick Leahy, the legislator who introduced PIPA, stated that websites
like Wikipedia and YouTube would not be subject to the provisions of the
bill, it was clear that major commercial interests were concerned that this
legislation would cut into their profits. In a characteristic trick of cyber-
libertarian propaganda, even the sponsors of the legislation, along with its
plain text, were rejected as authorities about its meaning—even though
U.S. courts often refer to statements by legislators to determine ambiguous
parts of the meaning of a law.
While the SOPA/PIPA protests were widely promoted as triumphs of
the “digital commons,” they are better understood as triumphs of the pro-
paganda machine surrounding digital technology companies. Google was
found to be heavily involved in generating and encouraging digital activism
against not just SOPA/PIPA, but also against any attempt to rewrite them
in order to avoid any real problems that might have been present in the
bills. Many of the apparently grassroots “digital rights” organizations, in-
cluding EFF, CDT, and Fight for the Future, were involved as well. Beyond
these players, a surprisingly wide range of activists joined in. As journalist
Yasha Levine put it:
Facebook, Yahoo, Amazon, eBay, Mozilla, Reddit, PayPal, Twitter, and scores
of smaller tech companies went into battle mode to oppose SOPA and PIPA.
They framed the legislative dispute as a fight between freedom and totali-
tarianism and launched a frenzied public relations and lobbying campaign
to kill the laws. The overheated rhetoric of the anti-SOPA tech moguls
resembled nothing so much as the take-no-prisoners agitprop of the National
Rifle Association—right down to the claim that, even if a regulatory curb on
the criminal abuse of tech platforms were to pass, it would prove useless in
execution and enforcement, just as Wayne LaPierre and Oliver North insist
that curbs on untrammeled gun ownership would do precisely nothing to
curb determined criminals from flouting such regulations. (2018a)
All of this might have been less problematic if it had been clear just what the
activists wanted. Google, Facebook, Yahoo, and others would certainly be
affected by governments scrutinizing their business practices, which has
become more apparent since SOPA/PIPA. Yet the aims of nominally non-industry activists are less clear, because what they rallied around was demon-
strably false—a pattern that has continued to haunt nearly every effort by
democracies to constrain the power of digital technology. David Newhoff,
a filmmaker and copyright activist who has emerged as one of the lead-
ing critics of digital astroturf, wrote that it was fair to compare “the belief
that SOPA threatens free speech with a belief in healthcare death panels;
and I am more than willing to insult my friends to make the point. Both
fears are irrational, both fears have been ginned up and funded by corpo-
rate interests, and both fears lead the electorate away from a sober effort
to address a tangible problem” (2012). The outrage about “censorship” and
against “copyright monopolies” was shouting about something that was
unlikely to happen.
David Lowery, musician and copyright activist, suggested to Levine in
2018 that groups like EFF and Fight for the Future are, in Levine’s words,
“Silicon Valley front-groups that masquerade as edgy and enlightened
have done the same (Riccardi 2021; Tenney 2021; Williamson 2021; Yates
2020). However, groups with liberal or left leanings, including employees
inside the company itself, have seen both Zuckerberg and the platform as
far too accommodating toward Republican and right-wing causes (Lee 2016;
Newton 2020; Seetharaman and Glazer 2020). Zuckerberg has claimed
that he is neither a Democrat nor a Republican (Murse 2020; Seetharaman
and Glazer 2020) and instead leans toward a political syncretism (see chap-
ter 2). However, he appears to be unwilling or unable to reflect on the his-
tory and location of this syncretism within political science.
When pressed on his political opinions, Zuckerberg characteristically
responds with crafted remarks about the “neutrality” of platforms and in-
vokes core cyberlibertarian tropes. “I’m pro–knowledge economy,” he said
in 2013, in a statement that tacitly endorses the right-wing reconfigura-
tion of social and cultural phenomena into free market terms. Zuckerberg
frequently references “community” and “connection” in both public and
internal corporate communications, without situating these ambiguous,
value-laden terms in a more ordinary political frame (Hoffmann, Proferes,
and Zimmer 2018; Rider and Wood 2019). This is especially relevant to
criticisms of Facebook that focus on the platform’s rebukes to and destruc-
tive impact on democracy. In later years Zuckerberg’s own discourse has
tended to concentrate on the importance of freedom of expression and
even freedom of the press to democracy, especially in his fall 2019 speech
at Georgetown University (Romm 2019). He ignores how the political
right has learned to turn certain specialized definitions of free speech
toward the expansion of corporate and market power and against demo-
cratic governance (see chapter 6).
In that same Georgetown speech, Zuckerberg invoked two of the most
important Black figures in U.S. history, and proffered that Facebook is im-
portant, maybe even necessary, for civil rights. He suggested that the world
needs Facebook for the same reason we need civil rights leaders: “Through-
out history, we’ve seen how being able to use your voice helps people come
together. We’ve seen this in the civil rights movement. Frederick Douglass
once called free expression ‘the great moral renovator of society.’ He said
‘slavery cannot tolerate free speech.’” Later he invoked Martin Luther King
Jr. and Eugene V. Debs: “We saw this when Martin Luther King Jr. wrote his
famous letter from Birmingham Jail, where he was unconstitutionally jailed
for protesting peacefully. We saw this in the efforts to shut down campus
protests against the Vietnam War. We saw this way back when America was
deeply polarized about its role in World War I, and the Supreme Court
ruled that socialist leader Eugene Debs could be imprisoned for making an
anti-war speech.” He also touched on two of the best-known progressive
causes that have had social media components:
We now have significantly broader power to call out things we feel are unjust
and share our own personal experiences. Movements like #BlackLivesMatter
and #MeToo went viral on Facebook—the hashtag #BlackLivesMatter was
actually first used on Facebook—and this just wouldn’t have been possible
in the same way before. 100 years back, many of the stories people have
shared would have been against the law to even write down. And without
the internet giving people the power to share them directly, they certainly
wouldn’t have reached as many people. With Facebook, more than 2 billion
people now have a greater opportunity to express themselves and help others.
(Romm 2019)
Sherrilyn Ifill, president of the NAACP Legal Defense Fund, responded in an op-ed (Ifill 2019):
The civil rights movement was not fought to vindicate free speech rights under
the First Amendment. It was a fight to fulfill the promise of full citizenship
and human dignity guaranteed to black people by the 14th Amendment. To
Ifill was not the only civil rights advocate to point out how Zuckerberg
was misusing the language of civil rights and free speech to advance an
antidemocratic agenda. Rather than echoing the language of free speech, Martin Luther King Jr.’s daughter Bernice focused on Facebook’s proven role in worldwide disin-
formation campaigns, writing that she would “like to help Facebook better
understand the challenges #MLK faced from disinformation campaigns
launched by politicians. These campaigns created an atmosphere for his
assassination” (Oreskovic 2019).
Alicia Garza, one of the founders of the Black Lives Matter movement,
was even more pointed: “If #BlackLivesMatter to Mark Zuckerberg, then
he should ensure that Black users are not targeted with misinformation,
harassment and censorship on his platform and stop cozying up to anti-
Black forces. Until then, his company will be remembered as an enabler
of white supremacy.” She argued that Zuckerberg was being “deceptive” in
his deployment of BLM to support Facebook: “It really lacks integrity for
Mark Zuckerberg to even invoke @Blklivesmatter in this kind of insidious
way. Not interested in being your mule. You’re being deceptive + it needs
to stop.” Rashad Robinson, president of the civil rights organization Color
of Change, expressed something similar: “‘These comments and compari-
sons, it’s not surprising but deeply disappointing,’ Robinson said. ‘These are
the kind of arguments against which I’ve been pushing back now for years.
The idea that you can bask in a delinquent idea of freedom of expression
without some kind of rules of the road is just, well, bankrupt.’” Facebook
offered a predictable, evasive response to these remarks through a spokes-
person: “We respect and appreciate the comments made by some of the
nation’s foremost civil rights leaders. Their perspectives are critically impor-
tant and we are committed to continuing the ongoing dialogue. Our work
is far from over” (Ross 2019).
These criticisms were directed at more than the cynical rhetoric Zuckerberg uses to promote Facebook. After all, these comments came in 2019,
long after the platform had been implicated in scandals involving the pro-
motion of hate, disinformation, violence, and antidemocratic sentiment
worldwide. Facebook played a central role in sowing racial hatred and discord, which was instrumental both in Donald Trump’s victory in the 2016 U.S. presidential election and in the 2016 UK Brexit referendum result
(Cadwalladr 2019). Many concerned citizens and politicians had examined
Facebook’s role in these developments, and individuals as well as organizations had intervened directly to get the company even to recognize, let alone confront, its corrosive effects on democracy and civil rights.
Even before addressing the deeper issues of disinformation and hate, there is Facebook’s role in suppressing the vote. As Ifill remarks in her op-ed,
the NAACP Legal Defense Fund she heads is one of many civil rights orga-
nizations that has tried to address this problem: “Facebook insists it does
not allow voter suppression on its platform. But that statement is more
aspiration than fact. After nearly two years of conversations between the
company and our groups, I am convinced that Facebook simply is ill-
equipped to define what constitutes voter suppression—especially at the
local level. To help Facebook understand, we have provided the company
with multiple examples of voter suppression practices we have seen at the
local level that would survive their policies” (Ifill 2019).
Despite the long-standing efforts of civil rights organizations and voting
advocates to highlight the usefulness of Facebook for those who want to
suppress the vote, the platform appears to be either unable or unwilling
to address the problem. A detailed ProPublica report found that even in
2020, Facebook remained “rife with false or misleading claims about vot-
ing, particularly regarding voting by mail, which is the safest way of casting
a ballot during the pandemic. Many of these falsehoods appear to violate
Facebook’s standards yet have not been taken down or labeled as inaccurate”
(McCarthy 2020). A Media Matters report found similar problems earlier
that year: “Even though Facebook claims that its policies are ‘helping to
protect the 2020 US elections,’ the social media platform is still earning
revenue on Trump’s ads that promote his right-wing misinformation about
voter fraud.” In part due to its own policies, Facebook continued to allow
ads that made false claims. These policies were enacted in October 2019,
the same month Zuckerberg made his “civil rights” speech (Gogarty 2020).
policy is too narrow in that it only prohibits content expressly using the
phrase(s) ‘white nationalism’ or ‘white separatism,’ and does not prohibit
content that explicitly espouses the very same ideology without using those
exact phrases” (50). The auditors then suggested that Facebook “look to
expand the policy to prohibit content which expressly praises, supports, or
represents white nationalist or separatist ideology even if it does not explic-
itly use those terms.” At the time of the final report, Facebook had “not
made that policy change” (51).
Instead, the authors wrote, Facebook created a multidisciplinary team
that “brings together subject matter experts from policy, operations, prod-
uct, engineering, safety investigations, threat intelligence, law enforcement
investigations, and legal.” The team includes “350 people who work exclu-
sively on combating dangerous individuals and organizations, including
white nationalist and separatist groups and other organized hate groups.”
Facebook stated that “the collective work of this cross-functional team has
resulted in a ban on more than 250 white supremacist organizations from
its platform, and that the company uses a combination of AI and human
expertise to remove content praising or supporting these organizations.
Through this process, Facebook states that it has learned behavioral pat-
terns in organized hate and terrorist content that make them distinctive
from one another, which may aid in their detection” (51). This sounds
good on the surface—if anything it seems like Facebook may be going
out of its way to confront the use of its platform for hate. But the auditors
were not satisfied.
Among other points, they rightly noted that the public and even the
auditors themselves are repeatedly faced with Facebook’s own accounts of
the shocking amounts of hate they remove from the site but given no way
to assess its significance: “In its May 2020 Community Standards Enforce-
ment Report, Facebook reported that in the first three months of 2020,
it removed about 4.7 million pieces of content connected to organized
hate—an increase of over 3 million pieces of content from the end of 2019.
While this is an impressive figure, the Auditors are unable to assess its sig-
nificance without greater context (e.g., the amount of hate content that is
on the platform but goes undetected, or whether hate is increasing on the
platform overall, such that removing more does not necessarily signal better
detection)” (51). Even the auditors do not confront the question whether
products that attract so much hate (much as Facebook and other social
or far-right content”; and that “in addition to the hate groups designated
by SPLC and ADL, TTP found white supremacist organizations that Face-
book had explicitly banned in the past” (Tech Transparency Project 2020).
TTP, SPLC, ADL, and CCDH, along with governmental officials, have
long been urging Facebook to confront the usefulness of its platform for
far-right extremism. The TTP report notes that “Facebook’s Community
Standards have included rules against hate speech for years, but in the past
three years the company has expanded its efforts”; however, despite these
efforts, overt, obvious uses of the platform for hate-motivated violence con-
tinue: “Despite the policy update, Facebook didn’t immediately take down
an event page for the ‘Unite the Right’ rally, which SPLC had tied to neo-
Nazis. According to one media report, Facebook only pulled the listing
the day before the rally.” This was 2017, and after Unite the Right Face-
book again announced it would redouble efforts to prevent racist hate and
violence. Yet
BuzzFeed quotes extremism researcher Megan Squire, who often works with
SPLC, as saying “Facebook likes to make a PR move and say that they’re
doing something but they don’t always follow up on that” (Lytvynenko,
Silverman, and Boutilier 2019); and TTP echoes this sentiment.
They are not alone. A coalition of civil rights groups launched Stop
Hate for Profit in the wake of Zuckerberg’s 2019 speech, the 2020 civil rights audit, and a series of hate-based violent incidents in which Facebook was implicated. The campaign’s name suggests what many are hesitant to say out loud:
Facebook is aware that its product suppresses democracy and promotes hate.
The company puts its own power and economic welfare above the civil
rights and democracy it claims to support. ADL, Common Sense, NAACP,
Color of Change, and other civil rights organizations are all part of the
campaign coalition. On June 17, 2020, the coalition explains, “we asked
businesses to temporarily pause advertising on Facebook and Instagram
in order to force Mark Zuckerberg to address the effect that Facebook
has had on our society. Following an incredible groundswell of support,
Mr. Zuckerberg asked to meet with Stop Hate for Profit coalition leaders
on July 7th 2020.” Facebook’s civil rights audit was released the next day.
Yet during the July 7 meeting, Zuckerberg “made clear he had no intention
of taking any steps to tackle our requests.” And things did not improve:
“One year later, Facebook’s progress toward these demands [had] been
minimal at best” (Stop Hate for Profit 2021).
In the wake of the meeting, participants noted the strategic and repeti-
tive nature of Facebook’s claims about suppressing hate: “‘We’ve seen over
and over again how it will do anything to duck accountability by firing
up its powerful PR machine and trying to spin the news,’ said Jessica J.
González, co-CEO of Free Press, a nonprofit organization that lobbies
technology and media companies on behalf of people of color and under-
served communities.” Despite Facebook’s claims of having large internal
teams working on the matter, researchers at civil rights organizations are
able to locate hate material with relative ease. This material is only taken
down after the organizations present the data in public, as one participant
pointed out: “Derrick Johnson, the president and chief executive of the
NAACP, told NBC News in an interview, that Facebook only acted to take
down white supremacist groups after they were alerted rather than doing it
proactively. ‘They have the technology to prevent racial hate speech’” (Byers
and Atkinson 2020). Other meeting participants were equally frank. Color
of Change executive director Rashad Robinson said that Facebook “showed
up to the meeting expecting an A for attendance.” ADL CEO Jonathan
Greenblatt stated, “We had 10 demands and literally we went through the 10
and we didn’t get commitments or timeframes or clear outcomes.” Jessica
González of Free Press went further: we “didn’t hear anything today to
convince us that Zuckerberg and his colleagues are taking action. Instead
of committing to a timeline to root out hate and disinformation on Face-
book, the company’s leaders delivered the same old talking points to try to
placate us without meeting our demands” (Graham and Rodriguez 2020).
core libertarian refrain Zuckerberg would return to again and again: the all-
important protection of free speech as laid out in the First Amendment of
the Bill of Rights. His interpretation was that speech should be unimpeded;
Facebook would host a cacophony of sparring voices and ideas to help
educate and inform its users. But the protection of speech adopted in 1791
had been designed specifically to promote a healthy democracy by ensuring
a plurality of ideas without government restraint. The First Amendment
was meant to protect society. And ad targeting that prioritized clicks and
salacious content and data mining of users was antithetical to the ideals of a
healthy society. (2021, 16–17)
distort obvious facts. They segment what civil rights leaders call “hate”
from what many Facebook users do not see as “hate” at all, pretending that
those who practice and transmit hate are not “users” with “likes.” We know
this is false. KKK members, MAGA extremists, ISIS recruiters, Hindu
nationalists, and others typically identified as hate groups all see themselves
as bound not by intolerance but by love of “family” and others close to
them. We know from many lines of research that some members of these
communities are the most active users of communications technologies,
and often at their vanguard (see Belew 2019; and chapter 7, below). Face-
book benefits from having U.S. MAGA and Donald Trump supporters on
its platforms, even if they constitute at most a quarter of the population.
In other countries, the percentage of users fundamentally committed to hate
may be even higher. There are many examples of nations made up of “com-
munities” largely constituted by hatred of one another. Through this very
narrow perspective, one might even ask whether some on the left “hate” the
MAGA right, whether Facebook can or should try to mitigate this “hate,”
or whether both the community-building and anger-generating aspects of
these groups fit right into the company’s business model. Though Clegg
says the company does not “profit from hate,” in-group identification and
out-group anger may be its primary product.
These clear affordances of Facebook and other platforms have led some
researchers to talk about “polarization.” While this may be a useful frame
in some instances, it risks a “both sides” analysis of phenomena that are fundamentally political. Driving society into extremes, encouraging science denial,
jeopardizing democratic fundamentals, and ignoring political realities—all
phenomena of the contemporary and historical political right—are directly
associated with the rise of Facebook and other social media platforms.
While we don’t think of it this way, the majority of past and present science
and reality denial campaigns have been clearly identified with the right,
particularly when they align with deregulation and the extremes of free mar-
ket capitalism. From climate change denial to its close bedfellow, tobacco
“science” (Oreskes and Conway 2010), to the range of conspiracy theory
movements presently active, most aim to expand right-wing political power,
the interests of concentrated capital that wants to avoid political account-
ability and constraint, or some combination of the two. Financial interests
associated with the fossil fuel industry lie at the bottom of many of these movements.
While its critics have rightly called out Facebook for promoting hate,
most of them step back from connecting these dots in the most straight-
forward way: that Facebook is a knowing and active promoter of right-
wing politics. Yet the evidence for this is substantial. Another revelation
from Haugen’s leaked documents is Facebook’s own employees stating
on its internal message boards that “the company was allowing conserva-
tive sites to skirt the company’s fact-checking rules, publish untrustworthy
and offensive content and harm the tech giant’s relationship with advertis-
ers.” Staff questioned the presence of far-right propaganda site Breitbart
on Facebook’s News tab; Facebook kept the listing, despite its own research-
ers determining that it “was the least trusted news source, and also ranked
as low quality, among several dozen it looked at across the US and Great
Britain.” In 2020, a “Facebook engineer gathered up a list of examples he
said were evidence that Facebook routinely declines to enforce its own
content moderation rules for big far-right publishers like Breitbart, Char-
lie Kirk, PragerU and Diamond and Silk” (Hagey and Horwitz 2021).
Around the same time, a Facebook spokesperson defended the platform’s
conservative bias by saying that “right-wing populism is always more en-
gaging.” As a former Facebook employee working at the liberal Center for
American Progress said in response, “Facebook is not a mirror—the news-
feed algorithm is an accelerant” (Thompson 2020; also see Edelson et al.
2021). It is difficult to reconcile these facts with the company’s claim that
“hate” is unwelcome on the platform.
Add to this Facebook’s work with lobbyists, most of them from the de-
regulatory far right, especially the American Edge Project, a business lobby
that, despite comprising twenty-four member organizations, is described
by the Tech Transparency Project (2022) as having been solely founded by
the social media giant. TTP claims that Facebook’s support for relatively
minor tech industry regulations, which Facebook would help write, is partly
meant to conceal the company’s opposition to antitrust efforts. Such large-
scale government action would threaten the concentration of wealth and
power on which Facebook thrives. Further, “Facebook has an active Repub-
lican operative—vice president of public policy Joel Kaplan . . . who repeat-
edly sided with right-wing misinformation peddlers.” In particular, Kaplan
helped to drive both right-wing disinformation on Facebook and the nar-
rative that Facebook is biased against the right: “Kaplan made an effort to
‘protect’ Breitbart’s account from being downsized in the news feed despite
digital denialism mean that unlike Breitbart, the Daily Wire, the Washington
Times, Facebook is somehow seen as something different from what it is.
That this framing is invalid is obvious, but that must not obscure its power.
Unlike many academic and even political debates, those over the politics
of the digital are conducted between one side that directly represents the
interests of existing and new businesses and the accumulation of capital
more generally; the “other” side consists of academics and other writers who
believe the ideas at issue are fundamentally questionable. Further, nonprof-
its like the Mozilla Foundation and the Open Society Foundations—to say
nothing of NGOs with explicit tech-lobbying roots like EFF, CDT, and
the parts of ACLU that focus on digital technology (see Franks 2019a),
all of which on the surface look as if they could not possibly be pro-business—
provide critical propagandistic power for the unfettered development of
technology. By contrast, truly pro-regulation and pro-democracy NGOs like the Electronic Privacy Information Center tend to promote much
more skeptical and well-considered platforms but receive much less atten-
tion than do the others.
Morozov (2013a) refers to the promoters of what he calls “solutionism”
as “Silicon Valley.” This is a useful figure of speech to describe the large
collection of businesses (Google, Facebook, Apple, Twitter, Airbnb, Intel,
Cisco, Acxiom, etc.) that profit from the ongoing proliferation of digital
technology. Additionally, venture capitalists (such as Andreessen Horowitz,
Sequoia Capital, Founders Fund, and Kleiner Perkins Caufield & Byers),
consultants, and even academics benefit no less from the growth and de-
velopment of digital businesses. Even many of those with no apparent
stake in business profits—such as free and open-source software develop-
ers, various other “hackers,” and even directors of nonprofit enterprises like
Wikipedia—have much to gain from the proliferation of digital industries.
This leads to an uncomfortable asymmetry. Organizations traditionally associated with “rights” of various sorts, such as Amnesty International, the ACLU, and Human Rights Watch, are typically made up almost entirely of disinterested academics and activists with no commercial stake in the positions for which they fight. In contrast, organiza-
tions that have sprung up in the digital age, such as EFF and the CDT,
have deep and sustained ties to industry. Despite seeming like rights orga-
nizations, they more closely resemble lobbying organizations for digital-
industry businesses. Many of their boards and even staff members hold such
direct stakes.
The pro-digital side often accuses critics of promoting their own self-
interest, thus negating discussion of the stake either side has in the debate.
At its most absurd, this response accuses a writer like Evgeny Morozov
of “profiting” from the proliferation of his views, trying to “get rich” off
“people’s fear of technology.” This objection is ludicrous, of course. It is no truer of Morozov or of me than of any author or speaker that the spread of their views might lead to personal benefit. If this argument held any weight, it would require us to reject the views of all commentators because of their interest in promoting their own perspectives.
This response is truly bizarre because it both accepts and rejects the analy-
sis with which we began: that we should read (Google executive chairman,
ex-CEO, and major shareholder) Eric Schmidt and (Google Ideas direc-
tor) Jared Cohen’s The New Digital Age (2013), for example, with an eye toward
the enormous profits available to Google if the book’s recommendations
were enacted. This parallels the bizarre and disingenuous attacks on climate scientists as self-interested because of theoretical profits from carbon credits, a view promulgated by huge multinational energy
companies that stand to benefit if action on climate change is stalled and
to lose if the world takes action. If all Schmidt and Cohen gained from
people reading their book were sales and speaking fees, it might be fair to
compare their “interestedness” with that of Morozov. What Morozov lacks
is the far greater—enormous, in fact—profit that Google and other Silicon
Valley ventures earn via the promotion of views like those of Schmidt and
Cohen. If personal bias in one’s analysis is a valid concern, then huge
financial interest should be our primary concern.
Morozov offers an apt analysis of the stakes of this debate:
For the true and democratically minded critic, “technology” is just a slick,
depoliticized euphemism for the neoliberal regime itself. To attack technol-
ogy today is not to attack the Enlightenment—no, it is to attack neoliberal-
ism itself.
Consider the outlines of a digital world that is rapidly coming into
existence. All the achievements of social democracy—public health, public
education, public transportation, public funding for the arts—are being
undermined by the proliferation of highly personalized app-based solutions
that seek to get rid of bureaucratic institutions and replace them with fluid
and horizontal market-based interactions.
Much of the character of cyberlibertarianism emerges from its
distinctive origins, crossing ideological lines from the begin-
ning, which turns out to be essential to its current function.
Despite its current reputation (especially among conservatives) for being
“liberal,” Silicon Valley and computer culture in general emerge from the
right. For a long time this was neither contested nor particularly noteworthy. Especially during the first flush of computer development, digital
technology was primarily associated with the military, the defense indus-
try, and the Cold War corporate and governmental establishment. As path-
breaking works like Paul Edwards’s The Closed World (1997), Alex Abella’s
Soldiers of Reason (2008), Philip Mirowski’s Machine Dreams (2002), and
S. M. Amadae’s Rationalizing Capitalist Democracy (2003) show, the tight
connections between the development of digital computers and conserva-
tive U.S. politics were taken for granted. To many political radicals of the
1960s, computers were among the most potent symbols of the conservative
political machine they opposed, and sometimes even served as targets for
direct political action (Larson, forthcoming).
While some on the left saw computers as a political threat, few (if any)
saw them as a site for political hope. The story of how that changed is both
remarkable and underrepresented. Histories like Markoff’s What the Dor-
mouse Said (2005) and especially Turner’s From Counterculture to Cybercul-
ture (2006a) show how a part of the 1960s counterculture came to embrace
digital technology, especially in the “friendly face” of the personal com-
puter. Turner in particular shows how this part of the counterculture had
little to do with leftist political activism. Yet even Turner’s detailed account
of Stewart Brand and others does not explain in wider sociopolitical terms how—or even whether—digital technology accomplished its apparent flip
from conservative to liberal politics.
Seen in retrospect, the politics surrounding digital technology resemble those surrounding some of the most intense corporate–political issues of our time: tobacco science, climate change, and the economic program associated with the Chicago School and the Neoliberal Thought Collective (Mirowski 2013; Mirowski and Plehwe 2009). Yet they stand out from these, too. Rather than a set of relatively centralized and concentrated think tanks and funders producing industry-supporting propaganda (i.e., astroturfing), the propaganda for digital technology is, true to
form, “decentralized” and far less coordinated. In part this is due to the
far deeper penetration of digital tools into every facet of our social lives.
Digital tools themselves, including social media, are often used to spread
cyberlibertarian dogma. This is done by technologists and others who have
no direct connection with think tanks, right-wing funding bodies, or even
technology companies. It would not be wrong to label these people as “true
believers” in the cyberlibertarian cause, but they seem unable or unwilling to
locate their beliefs in the political foundations to which they claim to adhere.
While cyberlibertarianism is distinct from other conservative, deregu
latory, and antidemocratic campaigns, it uses many of the same devices to
realize its ideological goals—namely a series of rhetorical moves, including
metaphors and other tropes, argumentative strategies, and narratives, all
of which disrupt challenges to the political power of technology. Many of
these strategies divert attention from the political foundations of digital
technology, substituting vague syncretisms for thorough political analysis.
Despite the nearly inevitable contradictions of cyberlibertarian dogmatists,
their shifting tactics and multiple nodes mean there are always rhetorical
moves available to draw attention away from close analysis of the politics
on offer. Of course those politics are not a unified bloc. But on balance,
cyberlibertarian dogma works to stave off or at best shape regulation; to
dismiss serious concerns about the political impact of digital technology
while sometimes framing that impact as pro-democracy despite manifest
evidence otherwise; and in fact to insist on the continued expansion of
power of a set of interlocking technologies and corporate interests that a
half century ago were seen as contrary to democratic political interests they
now claim to have always championed.
Obviously there was a great deal of traffic between the two sides, and no
doubt many ordinary participants in the counterculture found it easy and
natural to participate in both. Frank and others have shown that industry
played a deep role in the generation of culture through entertainment
companies, as well as in the Madison Avenue generation of cultural forms
through advertising and branding. This suggests that the revolutionary
energies of 1960s radicals were quickly and effectively channeled into life-
style radicalism, which can seem like the opposite of the politics offered.
Turner calls the part of the counterculture not explicitly committed to
leftist politics the “New Communalists”:
Even as their peers organized political parties and marched against the
Vietnam War, this group . . . turned away from political action and toward
technology and the transformation of consciousness as the primary sources
of social change. If mainstream America had become a culture of conflict,
with riots at home and war abroad, the commune world would be one of
harmony. If the American state deployed massive weapons systems in order
to destroy faraway peoples, the New Communalists would deploy small-
scale technologies—ranging from axes and hoes to amplifiers, strobe lights,
slide projectors, and LSD—to bring people together and allow them to
experience their common humanity. (2006, 4)
helped to galvanize the computer-centric parts of the Valley into a self-image as quasi-political revolutionaries.
The combination of the personal computing revolution and the New Communalist movement has remapped the cultural landscape so effectively that its governing assumption is today almost invisible, and questioning it almost unthinkable. This is the view that
“centralized authority” and “bureaucracy” are somehow emblematic of con-
centrated power, whereas “distributed” and “nonhierarchical” systems oppose
that power. As Turner explains, one of the main sources for the idea of
flattened and distributed networks comes not from the political left, but
instead from
the social and rhetorical tactics by which the defense engineers of World
War II and the cold war had organized and claimed legitimacy for their own
work. Much like Norbert Wiener and the scientists of the Rad Lab, Stewart
Brand had made a career of crossing disciplinary and professional bound
aries. Like those who designed and funded the weapons research laborato-
ries of World War II, Brand had built a series of network forums—some
face-to-face, such as the Hacker’s Conference, others digital, such as the
WELL, or paper-based, such as the Whole Earth Catalog. Like the Rad Lab,
these forums allowed members of multiple communities to meet, to exchange
information, and to develop new rhetorical tools. Like their World War II
predecessors, they also facilitated the construction and dissemination of
techno-social prototypes. Sometimes, as in the case of the Catalog or the
WELL, Whole Earth productions themselves would model the sorts of rela-
tionships between technology, information, the individual, and the com-
munity favored by network members. (2006, 249–50)
and climate change denial. Naomi Oreskes, the Harvard historian of science,
is the most forceful analyst of these practices and the connections between
them. In her 2010 book with Erik Conway, Merchants of Doubt: How a
Handful of Scientists Obscured the Truth on Issues from Tobacco Smoke to Global
Warming, she explains how the tobacco industry discovered it “could use
science against itself. ‘Doubt is our product,’ ran the infamous memo writ-
ten by one tobacco industry executive in 1969, ‘since it is the best means of
competing with the “body of fact” that exists in the minds of the general
public’” (34). The power of convincing the public that the link between
smoking and lung cancer was unproven was exercised indirectly. The U.S.
surgeon general was able to place warnings on cigarette packages, which
might be thought to be a direct political effect. However, consumers were
subtly encouraged to discount the expertise of scientists, to doubt the sci-
entific enterprise itself, and to see their consumption of tobacco as a mat-
ter of “freedom.” All of these actions accomplished the tobacco industry’s
goals of remaining in business and diminishing the likelihood of preventa-
tive regulation and litigation.
Certainly, in some localities electoral politics were affected. In tobacco-
growing states, tobacco anti-science was one element of a suite of political
movements that kept politicians supporting the tobacco industry regardless
of party affiliation. It would be a mistake to stand above this set of facts and
declare that Big Tobacco’s propaganda efforts were politically ineffective just
because many in those states voted for Democratic presidential candidates.
It would be similarly inaccurate to see climate denialism simply as a
device used to maintain the dominance of one party over another. Instead,
its effect is to deform the very shape of the political debate, seeding a range
of frankly ridiculous “doubts” in the public discourse. Industry propaganda
becomes a quasi-religious belief that resists testing against evidence and is
adhered to with venomous force. Climate deniers are often convinced by
propaganda and write as if they are defending “science” because climate
scientists fail to follow the “evidence.” Yet it is next to impossible to find
any of these denialists indicating what they consider reasonable standards of
evidence to test the hypotheses of climate change. They do not say, “Your
climate science fails to meet X standard of proof ”; they say, “No evidence
could ever support your conclusions.”
Both tobacco anti-science and climate denialism promote anti-scientific
thinking as if it were science itself. Those who promote these views are doing
so in part because they do believe the science and are trying to forestall the
natural political effects of widespread belief in what they know the science
shows. This enables partisans (think Sean Hannity or Mark Levin) to
declare that, while they are not scientists, they are in a position to show that
scientists themselves are not being scientific. The production of anti-
science denialism has the side effect of undermining the foundations of scientifically informed political formations. This can lead to the belief that vaccines cause
autism; that AIDS, Ebola, or Covid-19 were genetically engineered by the
pharmaceutical industry and government; or that the science of evolution
is a fraud.
These movements help to consolidate the power of the right and sow
doubt about government regulation and democratic power. This is done
through electoral politics and the maintenance of a level of political force
and affect that exists outside the direct management of those who benefit
from it most. The force exists outside of them but remains untapped, ready
to be used when it benefits one or another political effort. Perhaps just as
important, it creates political “events” that overwhelm the ability of the
electorate to focus on other issues.
The anti-vax movement is an effective example of how falsified research
by a small group of scientists can have unforeseeable consequences. It is
unlikely that the anti-science propagandists deliberately started the move-
ment. Their motivations for being suspicious of vaccines are difficult or even
impossible to pin down. The movement is routinely dismissed by scientists
across the board and is rarely embraced by politicians. Yet, it sticks in the
public imagination with a fierce tenacity despite how thoroughly it has
been discredited. It helps promulgate a set of anti-government, anti-science
beliefs that are accessible to those who need them, when they need them. Despite the harm anti-vax thought does to public health, it remains useful to those who pre-
fer to manage public opinion by emotional subterfuge rather than informed
debate.
Cyberlibertarianism resembles these movements but also differs from them in important respects, even as its effects line up with theirs to a surprising degree. Cyberliber-
tarianism was not produced by a coterie of scientists or political operatives;
instead, it emerged organically from the work and writings of some of the
most prominent figures in digital culture. Stewart Brand’s Whole Earth Cata-
log and members of the nascent cyberculture were involved in something
more than productive cross-fertilization. As Fred Turner narrates the story:
In 1968 Brand brought members of the two worlds together in the pages of
one of the defining documents of the era, the Whole Earth Catalog. In 1985
he gathered them again on what would become perhaps the most influential
computer conferencing system of the decade, the Whole Earth ’Lectronic
Link, or the WELL. Throughout the late 1980s and early 1990s, Brand and
other members of the network, including Kevin Kelly, Howard Rheingold,
Esther Dyson, and John Perry Barlow, became some of the most-quoted
spokespeople for a countercultural vision of the Internet. In 1993 all would
help create the magazine that, more than any other, depicted the emerging
digital world in revolutionary terms: Wired. (2006a, 3)
is the most effective way to achieve them. This view is sometimes called
green capitalism. While no doubt many of its exponents have a sincere and
committed orientation toward what may be our planet’s most serious
problem, green capitalism nevertheless rejects the view of most left-wing
environmentalists that unbridled capitalist development is the signal cause
of environmental destruction and therefore must be confronted for these
effects to be mitigated.
Turner goes on:
The popular perception of these figures is that they are hippies. However,
like the Grateful Dead, for whom Barlow wrote lyrics, their left-wing sur-
face conceals a pro-business, anti-government, Hayekian worldview. “Do
your own thing” is the guiding principle, which is as much a business mantra
as it is a statement of political tolerance. It is no surprise then that Brand
and his colleagues became successful entrepreneurs. Brand, along with sev-
eral other entrepreneurs and corporate leaders, formed a practice in the
late 1980s called the Global Business Network, among whose later “net-
work members” were Clay Shirky and Kevin Kelly. Today GBN is owned
by Deloitte, one of the world’s major business consulting organizations. It
is not difficult to see how the hippie mantra of “live and let live” has been
transformed into the neoliberal “don’t regulate what companies do.”
Free expression defenders argue that digital platforms like X and Face-
book should be governed by the same commitment to absolute free speech
found in the last hundred years of U.S. Supreme Court doctrine. Support-
ers of both left and right politics believe that all politics will be harmed if
platforms (and perhaps governments at a second-order level, through their
power to regulate platforms) are understood to have the power to decide
legitimate speech practices within their specific domains. This position presupposes that these platforms exert little or no influence over the political views conveyed through them, a presupposition that has provoked significant arguments between partisans of left politics and free expression promoters. These arguments
parallel arguments between absolutist free speech advocates in general and
others, especially from critical legal studies and critical race theory, who
have questioned whether the specific form of free speech absolutism found
in U.S. legal doctrine treats all actors equally, or whether it too inherently
favors the political right (see chapter 6).
Evangelists who promote an absolute position on free speech for digital
platforms as obviously salutary for advocates across the political spectrum
advance an inchoate doctrine. They take for granted its political orienta-
tion while not being able to show how or why the political consequences
it claims for itself actually follow.
Syncretic politics, related to but not identical with inchoate politics, is a
more disturbing phenomenon. In political theory, syncretism has a specific
meaning. It is used especially in the context of “red–brown syncretism,”
a key idea in the theory of fascism. In that formulation, “red” represents
socialism or communism and “brown” represents fascism, through associa-
tion with Nazi brownshirts. Chapter 7 discusses syncretism as an on-ramp
to fascism. Here, we can employ the term in a slightly looser fashion while
not losing sight of its fascist overtones.
Two key insights emerge from studies of syncretic politics. First, syncretic movements advocate a kind of politics that is “above left and right,” sometimes referred
to as “third positionism” (Berlet 2016; Griffin 1995; Southern Poverty Law
Center 2015). Second, despite this claim, they nearly always serve as supports
for the right. Sometimes this advocacy may be something that emerges
over time, suggesting that the original claims to be “beyond left and right”
were sincere if misguided. At other times, the syncretic formations are
more open, including deliberate attempts to obscure the power and force
of right-wing politics.
The Tea Party can be seen as having led to the right-wing populism that
helped to elect Donald Trump in 2016. Originally portrayed as something other than right wing, the Tea Party functioned in the media as a brand that “took up the tropes of historical civil rights struggles,” helping to show how and why the political right frequently relies on causes and language that obscure its political commitments. The underlying movement serves in powerful ways to gal-
vanize and enact right-wing political goals.
Glenn Greenwald, whose contribution to cyberlibertarian dogma via
the WikiLeaks and Edward Snowden revelations is particularly telling,
appeared interested in blurring the right-wing commitments of the Tea
Party. In early 2010, Greenwald wrote a piece for Salon that referred to the
Tea Party “movement” in quotation marks, called it “totally incoherent,”
and described it as “at bottom, nothing more than a cynical marketing
attempt to re-brand the right wing of the Republican Party under the exact
same policies and principles which defined it for the last couple of decades.”
with the advent of digital technology, but inextricably tied to it, because
“the market” itself is a very special kind of computer. “In the popular Hayekian
account,” writes Mirowski, “the marketplace is deemed to be a superior
information processor, so therefore all human knowledge can be used to its
fullest only if it is comprehensively owned and priced” (2013, 65). Part of
the “double truth” of neoliberalism is that despite being “dedicated to
rational discourse about a market conceived as a superior information pro-
cessor,” the most influential members of the NTC “ended up praising and
promoting ignorance” (70).
This is not a minor complaint, but rather central to the public dogma
of these thinkers, although only critics point out the brute contradictions
it entails:
that the knowledge of the particular circumstances of time and place will
be promptly used” (524).
While it is probably true that Hayek intended this line of thinking to
be nominally constrained to the economic sphere, in later years no such
restraint has been observed. It should come as little surprise that Jimmy
Wales, who has said that Ayn Rand’s thought “colors everything I do
and think” (“Free-Knowledge Fundamentalist” 2008), and who named his
daughter after a character in Atlas Shrugged (Chozick 2013), drew inspira-
tion for Wikipedia from Hayek’s article:
Despite Hayek’s explicit refusal to apply his analysis outside of central eco-
nomic planning, Wales generalizes and expands on it, to the point of doc-
umenting the totality of human knowledge to which Wikipedia aspires.
In fact, despite Wikipedia’s aspirations, Mirowski has constructed an
elegant argument that suggests a main function of the site is to promote
ignorance, providing a central resource for spreading uncertainty over the
historical record and the reliability of experts. Noting that “many on the
contemporary left seem to be flummoxed when it comes to grasping some
basic facts of the modern neoliberal regime” (2009, 422), Mirowski points
out that Wikipedia is constantly changing. This occasionally lauded feature
of the “encyclopedia anyone can edit” means it “can’t manage to get much
of anything straight for very long.” This follows directly from Wales’s im-
plementation of the misapplied Hayekian economic dictum to the world
of knowledge at large: “The conviction that the truth ‘emerges’ from ran-
dom interactions of variously challenged participants in the precincts of
Wiki-world . . . only holds water if we are allowed great latitude in the
definition of ‘truth’” (424).
Noting Wikipedia’s destructive effects on “encyclopedias, journals, and
newspapers”—exactly those organs to which democracies turn in their pur-
suit of knowledge—Mirowski argues that Wikipedia disparages the very
notion of knowledge it claims to champion. While the site’s philosophy,
like that of the overall NTC project, “appears to be a radical levelling phi-
losophy, denigrating expertise and elite pretensions to hard-won knowl-
edge,” it “appeals to the vanity of every self-absorbed narcissist, who would
be glad to ridicule intellectuals as ‘professional secondhand dealers in ideas’”
(425, quoting Hayek’s Studies in Philosophy, Politics and Economics). As
in the culture of Wikipedia, so in the NTC: “Attacks on ‘intellectuals’ were
a common refrain in the history of Mont Pelerin and were not restricted to
Hayek” (448n12).
“But of course,” Mirowski notes, “the neoliberals didn’t renounce all
expertise—just the stuff they don’t like” (448n12). This “double truth” goes
hand in hand with another core contradiction of neoliberalism. “For outsid-
ers, neoliberal thinkers are portrayed as plucky individual rebel blooms of
rage against the machine, arrayed against all the forces of big government
and special interests” (Mirowski 2013, 75–76), but the truth is that “once
initiated into the mysteries of the thought collective, only the organization
men rise to the top, and they know it” (76). This means that, despite all
appearances, neoliberalism is comfortable with authoritarian forms of gov-
ernance: Mirowski quotes Hayek in a 1966 address to the MPS: “It is at
least possible in principle that a democratic government may be authori-
tarian and that an authoritarian government may act on liberal principles”
(57). As the neoliberals’ dalliance with Chilean dictator Augusto Pinochet
suggests, the public is supposed to understand neoliberalism (and its more
public variant, contemporary libertarianism) as philosophies of freedom in
general. However, what they actually entail is “freedom” for those closest to the center, which results in massive curtailments of freedom for those at the margins.
The appearance of distributed and decentralized democracy satisfies the
masses. But there is a studied ignorance of the nature of the actual, operat-
ing form of power, which is closer to plutocracy or oligarchy than to rep-
resentative democracy. Attempts to bring these facts to light, no matter
(Benkler 2007; Shirky 2008a; Tapscott and Williams 2013), many of whom
continue to point to Wikipedia as the exemplar of what distributed, non-
hierarchical power can achieve.
In What the Dormouse Said, John Markoff writes that computing pio-
neer “Alan Kay observed that you could divide the pioneers of personal
computing into two camps: those who read and those who didn’t” (2005,
179). In a 2013 interview, Kay observes:
It is not a huge exaggeration to point out that electronic media over the last
100+ years have actually removed some of day to day needs for reading and
writing, and have allowed much of the civilized world to lapse back into oral
societal forms (and this is not a good thing at all for systems that require
most of the citizenry to think in modern forms).
For most people, what is going on is quite harmful. (Greelish 2013)
A wealth of data suggests that how we read online is less conducive to deep
intellectual engagement than how we engage with books. It is not unrea-
sonable to worry that we are trading deep engagement for surface engage-
ment. There is no evidence that one accomplishes the same goals as the
other. Giving up deep reading, on which so many of our political and social institutions are based, in favor of products promoted by the very
neoliberal capital that reading deeply often calls us to question, is not jus-
tifiable for any reason beyond the immediate pleasures we get from engage-
ment with digital devices and media.
Furthermore, we occasionally see the promotion of the nearly ubiquitous
short-form texts in digital media as not just the apotheosis of long-form
texts, but also the replacement for them. Such rhetoric of replacement
is widespread, significantly under-motivated by evidence, and ideological,
dismissing the notion that reading books is critical to society. While there
are undoubtedly instances of what appear to be culturally conservative
“defenses of reading” (e.g., Birkerts 2006) that are routinely dismissed in
cyberlibertarian rhetoric as “romantic” or “traditional,” the vitriol that digi-
tal utopians have toward the very kind of practice—the reading of long
texts—that produced the democracy they appear to be championing is
striking and deserves more scrutiny.
I am not suggesting active conspiracy. Clay Shirky, Jeff Jarvis, Howard
Rheingold, Tim Berners-Lee, Jimmy Wales, and Nicholas Negroponte are
2016). Michael Casey—a Wall Street Journal journalist who chairs the advisory board of the cryptocurrency industry publication CoinDesk and serves as a senior advisor for MIT’s Digital Currency Initiative—expressly acknowledged the power of Zuboff’s argument: “This book, which eviscerates the ‘applied utopianism’ and ‘technological inevitability’ of data-gobbling
Silicon Valley titans such as Google and Facebook, will become a defining
text of our age. Read it. It is of vital importance” (2019). However, Casey
ultimately rejects most of Zuboff ’s analysis, claiming that the problems
lie not with the specific features of digital technology Zuboff describes—
such as data extraction and behavioral modification—but instead with
“the real centralizing power-mongers of our digital economy [that] have
been pillaging our data and reshaping humanity into an instrument of
their domination.”
Cyberlibertarian critics shift Zuboff’s focus on the effects of technology to the idea that the real problem is who is using it, and that if the right
people come along, they can use it and the politics on which it is built to
do “good.” According to Casey, Zuboff ’s critiques threaten to bolster an
“anti-technology” position that is both unreasonable and a mark of mental
or characterological weakness. Casey believes that “many of Zuboff’s anti-
technology positions [are] too extreme,” and “if blockchain technology is
to play an integral role in the evolution of the digital global economy and
be a force for good, rather than a vehicle of computerized subjugation, its
advocates will have to contend with the angry backlash against digital tech-
nologies that this book will help fuel.” Readers influenced by this analysis
will merely see an unjustified anger at technologies and need not confront
Zuboff’s idea that the cyberlibertarian belief that blockchain (or any technol-
ogy) can be a “force for good” might actually lie at the root of the problem.
Zuboff believes that democracy provides the solution to the problems
of surveillance capitalism, while Casey dismisses democracy as “the levers
of government.” However, blockchain promoters along with Casey know
that “a blockchain solution for breaking down surveillance capitalism
would naturally be a technological one, embracing the power of math and
cryptography to design a new digital topography of trust that disempowers
the centralized middleman and creates human agency within a decentral-
ized system.” Despite the fact that the notions of “centralized middlemen”
and “decentralized systems” (see chapter 5) are vague technological anti-
government sentiments that have characterized cyberlibertarianism from the
The infrastructure bill proposal sought “to address this problem by extend-
ing 1099-type reporting requirements to crypto intermediaries, part of a
broader effort to counter the vast underreporting of taxable cryptocur-
rency gains. The Joint Committee on Taxation estimates that the reporting
change alone would bring in $28 billion over the next decade.” Cryptocurrency
advocates used a variety of arguments to try to defeat the initiative, which
was not yet even a full-fledged proposed law because it had not gone through
reconciliation. The most interesting line of attack was the claim that ordinary
tax reporting, as required for essentially all financial transactions in world
democracies, constitutes a kind of illicit surveillance.
The world truly seemed to have turned upside-down.
Many in the crypto industry portrayed the tax reporting required by the
infrastructure bill as nothing less than an existential threat. The CEO of
crypto exchange Kraken, Jesse Powell, tweeted a “Join, or Die” graphic, urg-
ing followers and “cryptohornets” to “publicly shame” members of Congress.
And Messari Crypto CEO Ryan Selkis (who recently closed on $21 million
in funding from investors including Steve Cohen) described Treasury Secre-
tary Janet Yellen as a “white collar criminal” and called on the industry to
“destroy crypto’s enemies before they destroy us.”
There’s no such threat. As Omri Marian, a law professor at the University
of California, Irvine, put it, the industry is merely seeking to preserve an
“unwarranted, accidental tax preference” that “enables tax cheats to, well,
cheat.”
Of course it has been part of cypherpunk dogma from the beginning that
government is “criminal,” despite that word having very little meaning
beyond “violating criminal laws as enacted by a government.” By that
definition, the idea of a government being criminal is essentially
self-contradictory.
Not only cryptocurrency promoters but also digital rights organizations have
joined the bandwagon, claiming both that attempts to require cryptocurrency
traders to report taxable trades in the same way that all other traders must
constitute “surveillance” and a “disaster for digital privacy,” as EFF put it
(Reitman 2021), and that cryptocurrency is our best way to combat
“surveillance capitalism.” Evan Greer—musician,
longtime digital rights activist, director of the problematic digital rights
organization Fight for the Future, and creator of a song called “Surveil-
lance Capitalism” (2021a) that purports to be inspired by and about the
same issues outlined by Zuboff—urged followers to contact their congres-
sional representatives by stating that “decentralization is our best bet for
having a future internet that’s not based on surveillance capitalism and
where people have basic rights. Cryptocurrencies are just sort of the tip
of the iceberg, messy (and often scammy) proofs of concept for something
much more important” (Greer 2021b). Zuboff’s argument is turned upside-
down and played against itself, so that the specific ideology, practices, tech-
nologies, and even entities that brought us to where we are today, we are
told, are the solutions to the problem they have created.
Greer misstates the stakes of the bill, contending that compelling wealthy
people who profit from cryptocurrency trading to pay taxes would inter-
fere with “an internet that’s not based on surveillance capitalism.” Some
commentators pushed back on Greer and asked for detailed explanations
for why we should accept any of the propositions offered, but no explanations
were forthcoming. For many of those who follow Greer, Fight for
the Future, EFF, and other cyberlibertarian activists, the discourse had
been successfully managed and reframed, and the very problems Zuboff
analyzes had now been reframed as their own solutions. The Senate was
inundated with demands to undo this provision, and it was only the tangled
process of lawmaking that kept proffered revisions of the offending provision
from making it into the bill. Nevertheless, the view that “decentralized technol-
ogy” is the vital force that will redress the problems of surveillance capital-
ism, whereas law and regulation enacted by elected officials stands in the
way of this “solution,” gains even more strength as a form of response to
criticisms of digital technology—criticism that digital rights organizations
appear to take on board, only to deploy against themselves when push
comes to shove.
Among the most concerning aspects of digital politics is the rise of NGOs,
think tanks, and academic and quasi-academic centers that present them-
selves as defenders of “digital rights.” Some of the most influential U.S. orga-
nizations include EFF, CDT, and Fight for the Future, but there are many
others active throughout the world. In contrast to NGOs whose remits are
Institute, Fight for the Future, and the Center for Democracy and Tech-
nology. The charters of most of these foundations read remarkably simi-
larly. They make heavy use of the terms “open” and “free”; associate both
“democracy” and “innovation” with a “free internet”; focus on topics like
digital rights management and the resistance of some countries to unregu-
lated internet access in the name of democracy; and presume the beneficial
effects of unregulated internet technology around the world. They often
assert as fact the notion that social media is responsible for destabilizing or
altogether dismantling authoritarian regimes.
What they typically do not feature is any sustained attempt to discuss
the meaning of words like “open,” “free,” and “democracy” outside of the
digital context, or their historical significance in the United States; the
work of writers and thinkers on which democracies are based; or critical
topics such as the nature of sovereignty, the rule of law, the interpretation
of the U.S. Constitution, or other constitutional forms of government.
The relationship of the “internet rights” they pursue so vigorously to other
existing rights is also not addressed, nor is there any effort to invite discus-
sion of what these terms might mean and how they might be used. Despite
the fact that a glance at world history shows that the meanings of these
terms are anything but clear, they are presumed to be so by these organizations.
This is nowhere more apparent than in the discussion about the
potential for internet technology to unseat authoritarian regimes. It is com-
pletely assumed that the United States can correctly identify which regimes
are authoritarian and that, once they are so labeled, it has the right to unseat
them by implementing communication regimes that directly contravene the
laws and regulations of those other countries. These advocates frequently
recommend and even implement foreign policy directives that violate our
own democratic procedures and the sovereignty of other nations in the
name of democracy and freedom.
These organizations share more than a commitment to a narrow and
oddly defined set of principles. While it is easy to assume that nonprofit
organizations with words like “freedom” in their names and .org web
addresses would be independent of corporate interests, in fact most of these
institutions, like the World Wide Web Consortium itself, are the product
of partnerships and funders either partly or wholly formed by corporations
directly involved in the production of digital technology. Additionally, many
of their board members and advisers are employees of these corporations.
EFF’s position on the future internet: ‘private, not public . . . life in cyber-
space seems to be shaping up exactly like Thomas Jefferson would have
wanted: founded on the primacy of individual liberty and a commitment
to pluralism, diversity, and community’” (2018b, 138). While the final clause
gives a mild nod toward liberal values, there is no mention of democracy,
and even the invocation of Jefferson seems designed in anarcho-capitalist
fashion to endorse property ownership as the core of citizenship.
EFF has cultivated a reputation as a civil rights organization, sometimes
nodding toward democratic and non-rightist values, occasionally advocating
for user privacy, but more frequently promoting the interests of business
and “technology” while attacking critics using all the tools of cyberlibertarianism.
Its focus on “innovation” (see below) and the novel, digital-based
notions of “free expression” and “privacy” (see chapter 6) do not align well
with the actions of organizations with more explicit commitments to civil
rights. As Levine wrote in a separate piece reflecting on an EFF “privacy”
campaign that aligned precisely with Apple’s marketing strategy, “The
truth is that EFF is a corporate front. It is America’s oldest and most influ-
ential internet business lobby—an organization that has played a pivotal
role in shaping the commercial internet as we know it and, increasingly,
hate it. That shitty internet we all inhabit today? That system dominated
by giant monopolies, powered by for-profit surveillance and influence, and
lacking any democratic oversight? EFF is directly responsible for bringing
it into being” (2018a).
EFF’s mission all along was “privatization.” After it cooperated too
closely with the government in helping shepherd the Communications
Assistance for Law Enforcement Act (CALEA) through Congress in 1994, EFF
policy director Jerry Berman went on to found CDT. EFF’s constituents
were not concerned about surveillance per se when it came to CALEA.
Rather, the issue was that the government was allowed to participate.
When EFF reconstituted itself after Berman’s departure, it shifted its focus
from internet service providers to the major technology companies like
Google. In 2004, when Google announced the launch of Gmail, internet
users raised serious concerns about privacy and surveillance from corporate
power. EFF remained quiet and “took an optimistic wait-and-see attitude”
(Levine 2018a).
What spurred EFF into action was not Gmail but the threat of legisla-
tors regulating it:
Here’s where EFF showed its true colors. The group published a string of
blog posts and communiqués that attacked Figueroa and her bill, painting
her staff as ignorant and out of their depth. Leading the publicity charge was
[EFF staffer Donna] Wentworth, who, as it turned out, would jump ship
the following year for a “strategic communications” position at Google. She
called the proposed legislation “poorly conceived” and “anti-Gmail” (appar-
ently already a self-evident epithet in EFF circles). She also trotted out an
influential roster of EFF experts who argued that regulating Google wouldn’t
remedy privacy issues online. What was really needed, these tech savants
insisted, was a renewed initiative to strengthen and pass laws that restricted
the government from spying on us. In other words, EFF had no problem with
corporate surveillance: companies like Google were our friends and protec-
tors. The government—that was the bad hombre here. Focus on it. (2018a)
true. When the relevant legislation has passed, the effects have been not just
less than EFF and others suggest, but largely nonexistent. Yet when real
threats to democracy and citizens emerge directly from companies, often
with proven harms or at least strong evidence of them, EFF demurs. This
was especially clear during the fallout of the Cambridge Analytica scandal:
if they have campaigns related to Facebook, or have proposals for any kind
of legislation that would address the ways Facebook and other companies
surveil and monetize your every move. Not one does—the most they’ve
done is write blog posts and start initial conversations. And Fight for the
Future, a powerful and fierce grass-roots group that championed the win-
ning fight to pass net neutrality protections in 2014, only just launched an
online petition and set of broad demands that puts the onus on tech compa-
nies to reform themselves. Compare this level of engagement with these issues
to the media, which has treated the weeks since the Cambridge Analytica
Following the election of Donald Trump, the ACLU was flooded with dona-
tions. The organization received a record-breaking $24 million in donations
in a single weekend in January 2017 and raised more than $80 million total
between November 8, 2016, and March 2017. In January 2017, it was an-
nounced that the ACLU would be partnering with the powerful Silicon
Valley startup accelerator Y Combinator, a relationship that would include
funding and mentoring from the startup. ACLU Executive Director Anthony
Romero said in a statement that “[b]eyond financial contributions, the Silicon
Valley community can help organizations like ours harness recent member-
ship surges and spread the word about what the ACLU is doing to protect
people’s rights from violations by the Trump administration.” Some critics
observed that one member of Trump’s administration and major Trump
donor, billionaire Peter Thiel, is a part-time partner in Y Combinator. (Franks
2019a, 126)
Without other corporate ties, this use of excess cash would not be note-
worthy. But given the pattern Franks and other critics have noted, this looks
like people giving money to the very cause they believe they are opposing.
Given Silicon Valley’s role in aiding the Trump campaign and other anti-
democratic efforts in recent years, and Y Combinator’s notoriously right-
wing-friendly politics (see chapter 7), it seems ludicrous that people would
give money to technology startups in the name of protecting civil rights.
Cyberlibertarian dogma and political action have the power and influence
to turn genuine concern for minorities’ civil rights into something like its
opposite.
Academic and quasi-academic bodies such as Stanford University’s Inter-
net Observatory, the MIT Media Lab, the Berkman Klein Center at Harvard
University, and quasi-independent organizations such as Data & Society
are nearly as concerning as NGOs like EFF and ACLU. These organizations
dominate discussions of digital technology and exert a remarkable amount
of influence over public discourse. Their affiliates frequently exchange roles
with one another and with the digital rights NGOs, and they all have sig-
nificant relationships with the most powerful elements of the digital tech-
nology industry. The centers’ model is similar to the one used decades ago
for tobacco science and fossil fuel research, which has been disrupted only
after years of concentrated work by politicians, faculty, and some university
administrators. What these organizations share is a commitment to
cyberlibertarian principles and a resistance to analyses that land on aboli-
tionist conclusions.
The MIT Media Lab is the most telling of these institutes. While it is
now known as a resource for digital experimentation and an authority on
political and ethical issues raised by digital technology, its history and
operations suggest something very different. The Media Lab provides a
fundamental link between the military–industrial origins of Silicon Valley
and its reinvention as cyberlibertarian populist “counterculture.” It is also
a major link between what Fred Turner calls the “new communalist” vision
of Stewart Brand and other “apolitical” 1960s counterculture agitators and
more recent digital activism. Brand heard founding director Nicholas
Negroponte’s plans for the Media Lab before it opened, and in 1986 “he
taught at the Lab, met with its various scientists, attended classes and brief-
ings, and ultimately began to draft a book about” it (Turner 2006a, 177),
which was published in 1987 as The Media Lab: Inventing the Future at MIT
(Brand 1987). Yet as Brand explained, the Media Lab “was a direct descen-
dant of the Rad Lab” at MIT through its “postwar incarnation, the Research
Laboratory of Electronics (RLE)” (Turner 2006a, 177). The Rad Lab (short
for Radiation Laboratory) was devoted to military uses of technology, espe-
cially radar. As the RLE’s own history journal puts it, “To protect the secrecy
of its sensitive work, it was called the Radiation Laboratory. The name
conjured thoughts of atomic and nuclear physics, a safe and acceptable
field of scientific investigation at that time. It also served as a decoy for the
laboratory’s real work on sophisticated microwave radar” (Fleischer 1991).
Nicholas Negroponte is a key figure for cyberlibertarian politics. Negro-
ponte “was a longtime ARPA contractor and had worked on a variety of
military computer initiatives at MIT” (Levine 2018b, 130). Negroponte’s
work at MIT was concentrated in the Architecture Machine Group, which
took for granted a cybernetic approach to computer development that,
while describing itself as humanist, “constantly invokes an imagined future
human that doesn’t really exist partly because it’s part of an ever-receding
future but also because this imagined future human is only ever a privileged,
highly individualized, boundary-policing, disembodied, white, western male
human” (Emerson 2016). Negroponte’s spearheading of the One Laptop
per Child program has been widely criticized for its neocolonial attitude
toward non-Western peoples and its wholesale embrace of messianic digi-
tal utopianism sans any thick account of the lives and experiences of the
people he purported to help (Fouché 2012; Golumbia 2009, 124–25; Selwyn
2013, 127–46).
Of course this “humanism” was funded and produced for corporate
and military power, even if that power was held at arm’s length: “With an
annual budget of $6 million a year, the Lab had almost one hundred spon-
sors at the time of Brand’s visit, each of whom had paid a minimum of
two hundred thousand dollars to join. The sponsors were not allowed to
demand that any particular research be done on their behalf. Rather, they
were buying permission to watch as the eleven different subdivisions of the
Lab went about exploring the possibilities of human–machine interaction
and multimedia convergence; a sponsor could later act on any insights that
emerged” (Turner 2006a, 178).
It is hard to see how such an institution, despite its virtues, can be con-
sidered an independent academic center dedicated to the dispassionate
analysis of core issues. Further, “the Media Lab was actively building real
digital artifacts and networks. Like the products Brand had once reviewed
in the Catalog, the Media Lab’s digital newspapers and Lego robots could
be bought and used, so to speak, at least by corporate clients. And like the
Catalog itself, the lab served to link representatives of relatively disconnected
groups—in this case, corporate, academic, and technical—into a single
functioning network” (Turner 2006a, 178). Despite its overt commitment
to a commercial and even defense-oriented mission starkly at odds with
nonconservative political beliefs, the Media Lab was able to fuse its multi-
farious interests into a technological vanguard that successfully obscured
its politics. By the late 2010s the Media Lab was widely understood to be a
leader not just in building demos that corporations and the military might
use but in analyzing the culture and politics of digital technology, despite
having no particular disciplinary expertise related to those issues.
Joi Ito, an entrepreneur and venture capitalist with a long association
with digital utopianism, succeeded Negroponte as director of the Media
Lab. Oddly, despite the fact that it is ostensibly an academic center, its new
leader had never completed a college degree (Markoff 2011). When Ito took
over in 2011, there was no hint of the Media Lab being a center for policy
analysis or critical thinking. MIT provost L. Rafael Reif, who hired him,
described Ito as “an innovative thinker who understands the tremendous
potential of technology and, in particular, the Internet, to influence educa-
tion, business, and society in general” (Markoff 2011). Cyberlibertarianism
has several executive seats at the table; abolition is not invited.
Yet, with an entrepreneur’s eye toward market demands, by the mid-2010s
Ito had become increasingly aware of the general public’s growing concern
about the impact of digital technology on people, culture, and politics. The
Media Lab has pivoted; in addition to its proto-commercial activities, it
now positions itself as the leader on “ethics,” particularly what many call
“AI ethics.” No less a figure than Barack Obama (whose administration was
already very deeply enmeshed with digital technology providers, especially
Google; see Morozov 2011a) lauded Ito as an “expert” in AI and ethics in
2016. Yet Rodrigo Ochigame, an academic who worked at the Media Lab
as a graduate student in 2018 and 2019, writes that “the discourse of ‘ethical
AI,’ championed substantially by Ito, was aligned strategically with a Silicon
Valley effort seeking to avoid legally enforceable restrictions of controver-
sial technologies. A key group behind this effort, with the lab as a member,
made policy recommendations in California that contradicted the conclu-
sions of research I conducted with several lab colleagues, research that led
us to oppose the use of computer algorithms in deciding whether to jail
people pending trial” (2019).
Ochigame was one of several Media Lab associates who went public with
this characterization of the organization in 2019, in the wake of discoveries
that Ito had not only taken donations from disgraced and convicted sex
offender Jeffrey Epstein but that the lab “had a deeper fund-raising relationship
with Epstein than it has previously acknowledged, and it attempted to conceal
the extent of its contacts with him.” Epstein made several donations to
the Media Lab after his 2008 conviction for soliciting prostitution from a
minor: “Although Epstein was listed as ‘disqualified’ in MIT’s official donor
database, the Media Lab continued to accept gifts from him, consulted him
about the use of the funds, and, by marking his contributions as anony-
mous, avoided disclosing their full extent, both publicly and within the uni-
versity.” Further, “Ito disclosed that he had separately received $1.2 million
from Epstein for investment funds under his control, in addition to five
group to be much more substantive and not just ‘white washing.’ I think it’s
just taking the trajectory that these things take.”
There is little reason to doubt the sincerity of this naivete, which is shared
by many who consider themselves defenders of digital rights but lack the
knowledge or wisdom to understand how industry capture works. How-
ever, those who have the political and economic power to drive that cap-
ture and build these relationships can hardly use this excuse. They are part
of two of Mirowski’s shells: an inner, more private funding shell, strongly
allied with the “dark money” described by Mayer (2017), which is fully
aware of its goals and strategies; and an outer, more public comment and
policy shell, whose members may well believe they are empowered to act
independently, only to realize far too late, if at all, that “the trajectory that
these things take” is inscribed from the outset.
Although the Media Lab and its Epstein scandal serve as public remind-
ers of how technology promoters and their supporting industries can dom-
inate public and political discourse, this is only the tip of the iceberg.
Focusing exclusively on the domain of “AI ethics,” Ochigame explains that
in addition to MIT and Harvard,
many other universities and new institutes received money from the tech
industry to work on AI ethics. Most such organizations are also headed by
current or former executives of tech firms. For example, the Data & Society
Research Institute is directed by a Microsoft researcher and initially funded by
a Microsoft grant; New York University’s AI Now Institute was co-founded
by another Microsoft researcher and partially funded by Microsoft, Google,
and DeepMind; the Stanford Institute for Human-Centered AI is co-directed
by a former vice president of Google; University of California, Berkeley’s
Division of Data Sciences is headed by a Microsoft veteran; and the MIT
Schwarzman College of Computing is headed by a board member of Ama-
zon. During my time at the Media Lab, Ito maintained frequent contact
with the executives and planners of all these organizations. (2019)
One noteworthy individual in this mix is danah boyd, founder and president
of Data & Society and a partner researcher at Microsoft. For many years
boyd was a fellow of the Berkman Klein Center at Harvard, another center
for digital advocacy. In a speech she gave in accepting a 2019 “Trailblazing
Technology Scholar” award from EFF, and in the wake of the Epstein scan-
dal, boyd noted that she got her start at the Media Lab, called for “a Great
Reckoning in the tech industry,” and noted that she “benefited from men
whose actions have helped uphold a patriarchal system that has hurt so
many people” (2019).
While boyd admits that at the Media Lab she “did some things that still
bother me in order to make it all work” and “grew numb to the worst parts
of the ‘Demo or Die’ culture,” she notes that the Epstein scandal requires a
response that verges on abolitionism: “How we respond to the calls for jus-
tice will shape the future of technology and society. We must hold account-
able all who perpetuate, amplify, and enable hate, harm, and cruelty. But
accountability without transformation is simply spectacle. We owe it to
ourselves and to all of those who have been hurt to focus on the root of the
problem. We also owe it to them to actively seek to not build certain tech-
nologies because the human cost is too great” (2019).
Even though she encourages her audience to take account of “your own
contributions to the current state of affairs. No one in tech—not you, not
me—is an innocent bystander. We have all enabled this current state of
affairs in one way or another,” she is curiously silent on what that means and
what her own role may have been. After all, boyd’s own work can be seen
as opposing abolitionist approaches. In fact, her 2014 book It’s Complicated
is a signal work that helped social media to spread among especially vulner-
able young people in a largely unregulated fashion. The publisher’s blurb
reads, “Ultimately, boyd argues that society fails young people when pater-
nalism and protectionism hinder teenagers’ ability to become informed,
thoughtful, and engaged citizens through their online interactions. Yet de-
spite an environment of rampant fear-mongering, boyd finds that teens
often find ways to engage and to develop a sense of identity” (boyd 2014).
Characterizing the concerns of those not directly connected to the tech-
nology industry as “an environment of rampant fear-mongering” is a cyber-
libertarian agitation for which boyd might be seen as taking responsibility
in 2019. It is questionable whether an individual with long-standing ties to
technology companies (Microsoft and Google), and individuals and insti-
tutes with problematic industry relationships (the Media Lab and Berkman
Klein Center) should continue to direct an industry-funded research cen-
ter that positions its work as independent. Data & Society is a frequent
source for ameliorative “scholarship” like boyd’s book, which is welcome to
technology promoters. Such work has the effect of pulling attention away
from ordinary scholars who lack the connections and financial power to
publicize themselves the way boyd and Data & Society can.
These matters became more pointed in 2022, when activists and scholars
uncovered exactly the kind of problematic relationship at Data & Society
itself, directly associated with boyd, that she seemed to be taking responsi-
bility for in 2019. This involved a text-based suicide intervention service
called Crisis Text Line (CTL), founded in 2013 by Nancy Lublin, who de-
scribes herself on her personal website as “innovator, advisor, investor, and
entrepreneur.” In early 2022, Politico reporter Alexandra S. Levine wrote an
investigative piece on CTL, drawing attention to its relationship with a
“for-profit spinoff” called Loris.ai that “uses a sliced and repackaged ver-
sion of that information to create and market customer service software.”
CTL, as Lublin described it, was a “tech startup.” Ethicists and former
CTL volunteers questioned the service’s ability to obtain “informed con-
sent” from individuals who were considering suicide as a precondition for
using the service. CTL and Loris.ai executives claimed that the relation-
ship between the two services was “ethically sound,” as they were separated
by “a necessary and important church and state boundary.”
Levine noted that “Crisis Text Line also has a data, ethics and research
advisory board that includes Reddit’s vice president of data, Jack Hanlon,
and medical experts affiliated with Harvard, Yale, Brown, and other health-
focused institutions. (None are volunteer crisis counselors.) Until recently,
the chief data scientist in charge of that committee was Crisis Text Line
co-founder Bob Filbin, who left for Meta last fall” (2022). Soon after, tech-
nology writer Joanne McNeil wrote a detailed piece for Motherboard about
CTL’s relationship with the industry. Following the publication of Levine’s
story, the FCC requested information from CTL about its connections, and
CTL “announced that it has ended its ‘data-sharing relationship’ with the
for-profit subsidiary.” McNeil noted that CTL “also took a page from the
Silicon Valley playbook in its attack on existing public services,” because
the FCC itself had been developing a plan to offer text-based suicide preven-
tion services, which went into effect in mid-2022. In 2020, during its delib-
erations about starting the service, the FCC received a letter urging it not
to do so because “a duplicative government-run text line could confuse those
who are in crises.” An existing service, CTL, was already providing a version
of the work and was able to leverage the “awesome power of technology—
and extractive data providers, but also the murky relationship between seem-
ingly independent academic (or partly academic, in boyd’s case) and re-
search centers and public discourse about technology ethics. Furthermore,
when viewed in the context of boyd’s relationship with MIT Media Lab
and her own remarks in 2019 about holding “accountable all who perpetu-
ate, amplify, and enable hate, harm, and cruelty” and the “great reckoning”
the tech industry must experience, the fact that she wrote to the FCC the
next year attempting to persuade it to hold back a public service in
favor of one with for-profit ties is profoundly disturbing. These events
must raise real questions for anyone who takes at face value the claims to
independence and impartiality one ordinarily expects of academic com-
mentary on technology. Someone concerned with the appearance of impro-
priety would surely consider whether they are the right person to lead a
major “independent” research organization with corporate ties whose sub-
stantial budget gives it remarkable influence over public discussion. But this
solution does not seem to be one boyd is willing to consider.
Today, we are familiar with the tactics used by many in industry to twist
independent research and activism, especially academic research, to serve the
interests of for-profit entities. Research on this subject has focused most
closely on tobacco science and climate change (Oreskes and Conway 2010),
two cases where academia has had some limited success in distancing itself
from industry, often under legislative and other political pressure. Research
has also focused on pharmaceuticals and other parts of the for-profit med-
ical industry, a problem whose scope researchers have only begun to map
(Fabbri et al. 2018; Forsyth et al. 2014). However, due to the diffuse nature
of the impact of digital technology, we have not even begun to map the
myriad ways personal, political, and corporate interests manifest them-
selves in digital advocacy. It is not even clear that we have anything close to
a robust way of accounting for what those interests might be. Writers like
Turner and Yasha Levine, and more recently Jill Lepore (2020) and Evan
Goldstein (2020), have only begun to scratch the surface.
So have public advocacy efforts like the Tech Transparency Project and
Tech Inquiry, along with more general organizations like Source Watch,
which attempt to trace the influence of corporate money and dark money on
both politics and independent/academic research. However, the influence
of these investigators is limited due, in part, to the public’s disinclination
to ask questions about the most prominent voices who assign themselves
the roles of protectors of digital rights and digital ethics. Even many jour-
nalists and those who work in technology are reluctant to investigate. As
we have seen with examples like SOPA/PIPA, the question of whether
industry might be driving public discussion seems nearly off the table for
many, no matter how thoroughly industry’s involvement may be proved.
Similarly, when the Tech Transparency Project (2017b) documented how
much influence Google had over the minor amendment to the Communi-
cations Decency Act Section 230 to combat sex trafficking (see chapter 3),
this had virtually no effect on public understanding of the law.
The Stanford University Internet Observatory, which positions itself as
“a cross-disciplinary program of research, teaching and policy engagement
for the study of abuse in current information technologies, with a focus on
social media” (“About the Internet Observatory” 2024), is headquartered at
the research institution with the most direct responsibility for incubating
Silicon Valley. Its director is Alex Stamos, former Facebook chief secur-
ity officer, and its big data architect and chief technology officer is David
Thiel, another former Facebook employee. Despite this, it is consulted as
an authority on privacy abuses by technology companies. When it comes
to the press and public attention, the relationships between EFF, CDT,
Fight for the Future, and ACLU and these companies seem to have little
impact. In fact, these organizations are often turned to first, rarely with any
attention paid to these conflicts of interest. These institutions have defined
the ethics and politics of technology, often with an express eye toward pro-
tecting corporate interests and economic “innovation” first and foremost.
Silicon Valley and worldwide conferences like the annual Chaos Commu-
nication Congress in Germany. For years, Julian Assange was a major pres-
ence, discussing the philosophy that led to WikiLeaks. Jim Bell, proponent
of “assassination markets” (see chapter 7), one of the most obviously destruc-
tive ideas in the digital world, may well be its longest-standing member.
Timothy C. May, often referred to as the founder of the movement, is
thought by many to have played a major role in the development of Bitcoin;
his writings on the need for “untraceable” money “outside the state” can be
found even in the earliest versions of the group’s manifestos.
And when they have not been the primary drivers, they have been at the
ready to defend positions that line up with those of the more moderate
actors whose statements we have been examining. The list of figures prom-
inent in digital activism from the early 1990s to today (see, e.g., “Cypher-
punk” 2022) is saturated with members of the Cypherpunks mailing list
and self-identified crypto-anarchists. They figure, even today, as senior per-
sonnel in a surprisingly large number of “digital rights” organizations.
The cypherpunks are more than happy if the world perceives them as
politically “neutral,” which they mean in the same illegible sense that tech-
nology is said to be “neutral.” But their putative neutrality is more than
merely tilted toward the far right: the cypherpunks have from the beginning
been saturated in far-right philosophy. The “Cyphernomicon” explic-
itly identifies the philosophy as a “form of anarcho-capitalist system” (May
1994, 2.3.4). Anarcho-capitalism—a far-right philosophy associated with
figures like Murray Rothbard, David Friedman (son of Milton Friedman,
and considerably more extreme than him), Patri Friedman (David’s son, and
even farther to the right than his father), and Hans-Hermann Hoppe—is
one of the most obvious sites where a version of “radical libertarianism”
bleeds without friction into outright fascism.
Crypto-anarchism is largely based on computer technology, rather than
political theory, which means that its politics, as with all forms of cyberlibertarianism,
can be vague and incoherent at times. It is no doubt true,
as May himself says, that the cypherpunk community is composed of “a
lot of radical libertarians, some anarcho-capitalists, and even a few social-
ists” (1994, 1.1). Those proportions seem to have remained constant, as does
May’s evident surprise that socialists would find themselves among what is
overtly a far-right crowd. Yet there is a persistent effort in digital evangelism to
portray this movement as either politically neutral or somehow compatible
to conspiracy theorists like Alex Jones and David Icke does not register
with much of the public. Assange directly intervened in a U.S. election
through cooperation with Roger Stone, who himself used Jones and other
conspiracy theorists for just that purpose (LaFraniere 2020). This is not an
exaggeration: Assange is a proto-Nazi political provocateur whose overt
anti-Semitism, anti-Black racism, climate change denial, misogyny, hatred
for democracy, and support for authoritarian political regimes (see chapter 7
for discussion of Assange’s career-long leanings toward fascism), including
his effort to distort public opinion in the 2016 U.S. presidential election,
routinely fail to penetrate the minds of observers who see as dispositive his
use of digital tools and antiestablishment affect. Assange is one of the key
figures for the syncretic politics of cyberlibertarianism. His identification
with digital freedom allows his supporters to overlook his behavior and
statements that contradict his overt politics.
Something similar is true of Glenn Greenwald. As Wilentz explains,
Greenwald gained prominence as a civil rights lawyer who defended Nazis
based on free speech absolutism. Although this position has a respectable
history in U.S. politics, it raises questions that deserve further exploration
and has been challenged by political commentators on the left (Delgado and
Stefancic 2018; Golumbia 2021). He supported the Iraq War in the early
2000s, despite the falsehoods promulgated by the U.S. intelligence establish-
ment to advance the operation. Later, he used these same falsehoods to
bolster his own political program. When he became one of the designated
spokespeople for Edward Snowden’s cache of documents about the capabilities
of the U.S. National Security Agency and other parts of the intelligence
apparatus, Greenwald completed a rebranding that had been underway
for a while. Whereas his alignment with the political right had been overt
prior to this, now he described himself as an “activist journalist.” However,
unlike almost any other activist in public life, he refuses to state on which
side of the political aisle he sits. Unlike any other figure on the left, Green-
wald routinely appears in right-wing media but does not challenge its poli-
tics. Indeed, he frequently appeared on Tucker Carlson Tonight, which many
considered to be the leading broadcast media source for fascist and white
supremacist politics in the United States (Blow 2021). Greenwald rarely chal-
lenged Carlson, and certainly not about his use of racist conspiracy theories.
In fact, Greenwald gave the impression of indulging them (Higgins 2022;
Lemieux 2021; Loomis 2018; Robinson 2021), perhaps even supporting them
outright.
Snowden is the least overtly partisan of the three, although that may be
more appearance than fact. Snowden’s supporters rarely acknowledge that
he was not an employee of the NSA at the time he began leaking docu-
ments to Greenwald, Assange, Laura Poitras, Barton Gellman, and others,
but of Booz Allen Hamilton, a powerful defense contractor (Eichenwald
2013; Shaw 2018). Snowden’s animus for the U.S. government and his con-
spiratorial dismissal of its claims to oversee intelligence programs through
democratic means stop short of what many not on the right have long
criticized: the outsourcing of governmental functions to private actors who
are less fully accountable to the public than are government employees.
Snowden worked for Booz Allen precisely because the lack of strong over-
sight over private contractors gave him the freedom to access information he
already planned to release. Booz Allen CEO Ralph Shrader stated that “the
only reason he became an employee of Booz Allen was to gain access to
information. There was nothing about Booz Allen that was attractive other
than we were a vehicle. That’s his [Snowden’s] words” (Aitoro 2014). In fact,
Snowden was in contact with Greenwald prior to taking the job at Booz
Allen (U.S. House Intelligence Committee 2016, 14). As even sympathetic
portraits of Snowden emphasize, “since 9/11 and the enormous influx of in-
telligence money, much of the NSA’s work had been outsourced to defense
contractors, including Dell and Booz Allen Hamilton” (Bamford 2014).
These facts might seem irrelevant were it not for the explicit and implicit
political analyses Snowden offers. They are explicit because Snowden has
always identified as a right-leaning libertarian, with all the syncretism and
incoherence attached to that position. They are implicit because Snowden
accepts (and has expressed publicly) the right–libertarian belief that gov-
ernmental power is particularly noxious and violent, so that corporate
power is an entirely different thing on which we need not dwell. According
to this familiar right-wing trope, “tyranny” is only ever attached to govern-
ment and particularly democracies. As the philosopher Tamsin Shaw put it
in a critical analysis of Snowden:
Cynicism about the rule of law exists on a spectrum. At one end, exposing
government hypocrisy is motivated by a demand that a liberal–democratic
state live up to its own ideals, that accountability be reinforced by increasing
public awareness, establishing oversight committees, electing proactive poli-
ticians, and employing all the other mechanisms that have evolved in liberal
democracies to prevent arbitrary or unchecked rule. These include popular
protests, the civil disobedience that won civil rights battles, and, indeed, whis-
tleblowing. At the other end of the spectrum is the idea that the law is always
really politics in a different guise; it can provide a broad set of abstract norms
but fails to specify how these should be applied in particular cases. Human
beings make those decisions. And the decision-makers will ultimately be
those with the most power. (2018)
This is not just haggling over the niceties of broadly opposed political
frameworks. Shaw points out the political foundations for this second
kind of antidemocratic cynicism:
On this view, the liberal notions of legality and legitimacy are always hypo-
critical. This was the view promulgated by one of the most influential legal
theorists of the twentieth century, Carl Schmitt. He was a Nazi, who joined
the party in 1933 and became known as the “crown jurist” of the Third
Reich. But at the turn of the millennium, as Bush took America to war,
Schmitt’s criticisms of liberalism were undergoing a renaissance on both the
far right and the far left, especially in the academy. This set of attitudes has
not been limited to high theory or confined to universities, but its congru-
ence with authoritarianism has often been overlooked.
The cynicism of the figures around Snowden derives not from a meta-view
about the nature of law, like Schmitt’s, but from the view that America, the
most powerful exponent of the rule of law, merely uses this ideal as a mask
to disguise the unchecked power of the “deep state.” Snowden, a dissenting
agent of the national security state brandishing his pocket Constitution, was
seen by [Guardian editor Alan] Rusbridger as an American patriot, but by
his chosen allies as the most authoritative revealer of the irremediable depth
of American hypocrisy.
In the WikiLeaks universe, the liberal ideal of the rule of law, both domes-
tic and international, has been the lie that allows unaccountable power to
grow into a world-dominating force. (2018)
Putin has benefited from the appearance of being Snowden’s protector, pre-
senting himself as a greater champion of freedom than the United States.
In their book Red Web: The Kremlin’s War on the Internet, the Russian inves-
tigative journalists Andrei Soldatov and Irina Borogan [2015] recounted the
experiences of human rights activists who were summoned via an email pur-
portedly from Snowden himself, to a meeting with him at Moscow airport
when he surfaced there with Sarah Harrison, to find they were joining the
heads of various pro-Kremlin “human rights” groups, Vladimir Lukin, the
Putin-appointed Human Rights Commissioner of Russia, and the lawyers
Anatoly Kucherena and Henri Reznik. It was clear to the independent activ-
ists that Kucherena had organized the meeting. Kucherena is a member of
the FSB’s Public Council, an organization that Soldatov and Borogan say
was established to promote the image of the Russian security service; he is
also the chairman of an organization called the Institute for Democracy and
Cooperation, which has branches in New York and Paris and was set up at
Putin’s personal instigation, the authors tell us, for the purposes of criticizing
human rights violations in the United States. This institute publishes an
annual report on the state of human rights in the United States. Using mis-
leading moral equivalences to attack American hypocrisy is one of the most
common tactics in Putin’s propaganda war. (Shaw 2018)
Shaw goes on to talk about the rare occasions when Snowden does appear
to criticize Putin. Snowden
barely deviat[es] from Putin’s information agenda even as Putin has insti-
gated extraordinarily repressive measures to rein in Internet freedoms in
Russia. When Snowden agreed, for instance, to appear as a guest questioner
on a televised question-and-answer session with Putin, he posed the Russian
president a question that heavily criticized surveillance practices in the US
and asked Putin if Russia did the same, which gave Putin an opening to
assert, completely falsely, that no such indiscriminate surveillance takes
place in Russia. Earlier this year, Snowden’s supporters trumpeted a tweet in
which he accused the Russian regime of being full of corruption, but Putin
himself will use such accusations when he wishes to eliminate undesirable
government actors. To be sure, Snowden is in a vulnerable position: he is
notably cautious in his wording whenever he speaks publicly, as someone
reliant on the protection of Putin might be.
about human rights from a location in which he enjoys far fewer of those
rights than he did back home, precisely because he is in a privileged posi-
tion to benefit from authoritarian power.
Nassim Nicholas Taleb (2022) draws unexpected attention to Snowden’s
de facto political theory in a wide-ranging discussion prompted by Russia’s
invasion of Ukraine, focusing on the broad differences between states formed
around nationalism and those formed around democracy. “A system seems
all the more dysfunctional when it is transparent,” he writes. “Hence my
attacks on someone like Edward Snowden and his acolytes, who exploit this
paradox to attack the West for the benefit of Russian plotters.” Snowden,
he says, is an “impostor” who “wants to destroy the system rather than
improve it”; he is therefore among those who “do not realize that the alter-
native to our messy system is tyranny: a mafia-don like state (Libya today,
Lebanon during the civil war) or an autocracy.” As Taleb notes, Snowden
recently claimed regarding governmental measures to deal with Covid-19
that “as authoritarianism spreads, as emergency laws proliferate, as we sac-
rifice our rights, we also sacrifice our capability to arrest the slide into a less
liberal and less free world” and that “what is being built is the architecture
of oppression” (Dowd 2020). Of course we should be concerned about all
abuses of democratic governance powers, but little of substance has turned
up regarding overreach related to Covid-19. Meanwhile, conspiracy theo-
ries, many of which have originated in Russia, have played a central role
in exacerbating the problem and in the resulting large number of unneces-
sary deaths worldwide. Notably, Snowden has made few comments about
Russia’s invasion of Ukraine, claiming that people outside the region are
“being made to feel [the way they feel] by the information bubbles that they
consume” (“Edward Snowden” 2022) without any reflection from him on
who is generating those bubbles and why.
Taleb, Shaw, and earlier critics like Wilentz are right that Snowden’s cri-
tiques of democracy and democratic uses of technology are similar to right-
wing conspiracy theories that claim to target authoritarianism but instead
support authoritarians. Unlike Alex Jones, though, Snowden is taken seri-
ously by many who think he supports democracy. It is no surprise that
Snowden’s speeches feature in many right-wing, apocalyptic conspiracy the-
ories and even proliferate in videos made by the promoters of those theories
(e.g., Destination Hub 2022a, 2022b, 2022c; Eye Opener 2022). However,
It is true that every government must have the power to surveil all tele-
communications (and to some extent, even physical, nonelectronic com-
munications) in which its citizens engage. Snowden’s leaks demonstrate
that the U.S. government has these capabilities, as do some of its allies.
This is an important fact for individuals and politicians to know and under-
stand, although it seems odd to present it as a revelation or suggest it is
unique to the United States. The one case that seems to identify abuse
(raised not by Snowden but by defendants in a terrorism conspiracy trial,
United States v. Moalin) illustrates the problems raised by
Snowden’s disclosures.
Moalin concerned “four U.S. residents convicted of conspiring to pro-
vide material support to a foreign terrorist organization and conspiring to
launder money from 2007 to 2008. On appeal, the defendants argued that
evidence used against them at trial was derived from an allegedly unlawful
electronic surveillance program run by the National Security Agency (NSA)
at the time of the investigation and should have been suppressed” (Hanna
2020). The case became well known in 2020 due to the Ninth Circuit Court
of Appeals “ruling that the bulk phone records program violated FISA and
acknowledging that it also was likely unconstitutional.” But less noticed in
the coverage was the fact that the Court “affirmed the defendants’ convic-
tions after determining that evidence used against the defendants at trial
had not been tainted by information the NSA had collected through the
program.” The NSA discontinued the bulk collection program in the wake
of Snowden’s disclosures; the exposure of that program is the part of those
disclosures that most genuinely seems to qualify as whistleblowing. Yet even in a case where
the defendants were convicted of just the sort of crime intelligence agen-
cies are supposed to be looking for, the Court did eventually rule that the
bulk-collected communications had not been used against them. Had they been procured illegally,
the resulting intelligence would have been inadmissible.
While the Moalin decision was celebrated as a vindication of Snowden,
its consequences suggest that Snowden’s actions cannot be understood as
whistleblowing as straightforwardly as his supporters seem to believe. In another
case, Hepting v. AT&T, originally brought in 2008, a wide group of plaintiffs,
backed by EFF, claimed “the government enacted a massive domestic spying
dragnet,” in part when the “federal government conspired with telecom-
munications companies like AT&T to reroute internet traffic to a secret
room in San Francisco where communications were reviewed and stored
by the National Security Agency” (Renda 2021). Part of the Ninth Circuit’s
ruling had to do with the constitutionality of the data collection under the
Foreign Intelligence Surveillance Act (FISA), a particular hobbyhorse of the
conspiratorial subset of Snowden supporters. Regardless of whether FISA
gave the intelligence agencies too much power, the courts have been satis-
fied that it is (or was, in some cases) constitutional. Further, and even worse
for those who see Snowden as a whistleblower, the plaintiffs in Hepting,
attempting to constitute a class for purposes of challenging the constitutionality
of any governmental data collection program, were unable to
establish standing in the case. That is, they were unable to show how the
program in question had harmed them, or even that they had been tar-
geted by it. The program may well sound bad, but only if one rejects out of
hand the notion that democratic governments can surveil everything yet
should do so only with a proper warrant.
Although Snowden’s ostensible main interest was the invasive overreach of
intelligence surveillance programs, it is notable that much of what he
released is not about that topic at all. In one of the
most detailed listings of Snowden’s disclosures, the nonprofit legal publica-
tion Lawfare identifies “tools and methods, overseas USG locations from
which operations are undertaken, foreign officials and systems that NSA has
targeted, encryption that NSA has broken, ISPs or platforms that NSA has
penetrated or attempted to penetrate, and identities of cooperating com-
panies and governments” (“Snowden Revelations” 2014). Most of the listed
items describe capabilities one would be surprised to learn major world
governments lack: “a ‘Google-like’ search engine that accesses phone calls,
emails, and other forms of online communication of foreigners and US citi-
zens”; the fact that “NSA has developed technology to compare satellite
imagery with a photograph taken outdoors to determine where the photograph
was taken”; that “NSA penetrated the network of Mexico’s Secretariat
of Public Security to collect information about drug and human
trafficking along the US–Mexico border”; that “NSA and GCHQ have
monitored the communications of several Israeli officials, including the
Prime Minister and Defense Minister”; that “NSA has collected draft email
messages written by leaders of the Islamic State of Iraq.” The revelations
do not indicate if the data collected were used or stored for later analysis.
We can object to states acting in this manner, but one hopes we do so
from a position of understanding what governments must do, rather than
out of horror directed only at democratic governments’ spying.
this is the exact shape the most virulent strain of contemporary right-wing
propaganda takes. Here Snowden’s early history as an anarcho-capitalist
online troll is relevant. Prior to becoming an international celebrity whis-
tleblower, Snowden attacked social security; wrote in favor of the second
amendment and gun ownership and against Barack Obama’s suggestion
that his administration might revive an assault weapons ban; and supported
Ron Paul and a return to the gold standard (Wilentz 2014). “Contrary to
his claims,” writes Sean Wilentz, “he seems to have become an anti-secrecy
activist only after the White House was won by a liberal Democrat who,
in most ways, represented everything that a right-wing Ron Paul admirer
would have detested.”
In the wake of his relocation to Moscow and the apparent exhaustion
of his cache of documents about the U.S. intelligence apparatus, Snowden
has stayed true to that right-wing libertarian ethos. He believes that demo-
cratic government is inherently illegitimate but that authoritarian govern-
ment somehow promotes “liberty”—although this latter prong of his belief
system, as in all anarcho-capitalism, need never be addressed. As George
Packer put it:
Above all, Snowden is a soldier of the internet, “the most important invention
in all human history.” He has said that he grew up not just using it but in it,
and that he learned the heroic power of moral action from playing video
games. “Basically, the internet allowed me to experience freedom and explore
my full capacity as a human being,” Snowden told Greenwald when they met
in Hong Kong. “I do not want to live in a world where we have no privacy and
no freedom, where the unique value of the internet is snuffed out.” (2014)
His more recent disclosures have nothing to do with the constitutional rights
of US citizens. Many of them deal with surveillance of foreign governments,
including Germany and Brazil, but also Iran, Russia, and China. These are
activities that, wise or unwise, fall well within the NSA’s mandate and the
normal ways of espionage. Snowden has attached himself to Wikileaks and to
Assange, who has become a tool of Russian foreign policy and has no inter-
est in reforming American democracy—his goal is to embarrass it. Assange
and Snowden are not the first radical individualists to end up in thrall to
strongmen.
the world. Civil rights certainly do not depend on property. Nor do other
civil rights flow from this Austrian-inflected notion of privacy as private
property (a theme that would fit neatly into the theories of arch-right-wing
theorist Hans-Hermann Hoppe; see chapter 7). But as most of us know,
the United States does not even have a right to privacy formalized in the Bill
of Rights. Clearly the framers of the Constitution did not think the free-
doms of speech, press, assembly, and so on proceeded from the right to
privacy—unless they are taken to mean that rights only attach to those who
own private property, as Snowden seems to suggest.
Further, what Snowden means by “privacy” diverges from the under-
standing of most ordinary people, legislators, and legal scholars (see chap-
ter 6 for full discussion of privacy in this context). Snowden’s concept of
privacy, like that of the cypherpunks and today’s digital rights activists, is
best understood as the absence of government or governance. Between his
advocacy for unbreakable encryption and fully anonymizing technologies
(see chapter 5), Snowden writes and talks about privacy much as it’s de-
scribed in the “Crypto-Anarchist Manifesto”—as a weapon that should be
deployed by the most technologically sophisticated among us to increase
our own personal power (and wealth), making oversight and regulation of
our activities impossible. Indeed, while Snowden has criticized Bitcoin in
the past in part for its lack of privacy features (Riley 2015), he has spoken
out strongly in favor of cryptocurrency tokens that incorporate “privacy by
design” (Chipolina 2021) such as Zcash (Brockwell 2022) and Monero
(Prasanna 2022). George Packer writes that “Snowden looked to the internet
for liberation, but it turns out that there is no such thing as an entirely free
individual. Cryptography can never offer the absolute privacy and liberty
that Snowden seeks online” (2014). While in the strictest political sense this
is true, the fact is that technology can be used to obviate significant func-
tions of governmental oversight. To right-wing political actors this can look
like “privacy and liberty,” though to anyone committed to democracy it
looks like authoritarian, quasi-feudal, and organized crime–like institutions.
Marcy Wheeler is an independent journalist and adviser to academic
and legislative projects on cybersecurity. A fierce opponent of illegitimate
government surveillance and critic of intelligence agencies, she originally
welcomed the Snowden disclosures and celebrated him as a whistleblower
(Levy 2013). She was similarly enthusiastic about WikiLeaks. For a brief time
in 2014, at the height of the Snowden story, she was senior policy analyst
STRATEGIC DIGITAL DENIAL
1. I thank Frank Pasquale, who long ago brought this remarkable essay to my attention, specifically for its relevance to digital technology advocacy.
“whether the topic is tobacco, food and drug safety, or privacy legislation,
these groups employ the same rhetorical devices to delay and stop con-
sumer reform.”
Hoofnagle explains that those who use denialist tactics, as Mirowski
argues regarding climate denialists, “are not seeking a dialogue but rather
an outcome” (1). That outcome is remarkably robust: to block “almost any
form of consumer reform” (2). Hoofnagle lists an array of tactics, starting
with “no problem”: “Whatever consumer reform being debated is unnec-
essary. This is because there is no problem” (3). Next, industry attributes
any problems it is forced to acknowledge to “bad apples” (4); demands that
the public “wait and see” whether actual problems result (5); insists that
“consumer freedom” is at stake, which is inherently a value more important
than safety or efficacy of a given product (6); and reverts to delay when
other methods fail (7).
All these tactics should be familiar to observers of digital rights advocacy,
to say nothing of technology companies and other technology providers.
Yet some cards in the denialists’ deck have been perfected by technology
advocates: that “competition,” which is to say the free market, “solves
all problems” (8); and that regulation of technology is always unwelcome
because it “stifles innovation,” which alternates with the contrary asser-
tion that technology “can’t be regulated.” As Hoofnagle notes, the fact that
these two assertions are incompatible only makes clear that “this exercise
isn’t about being cogent, it’s about stopping whatever intervention the
denialist opposes.”
We frequently see the majority of Hoofnagle’s deck strategies in digital
denialism. Yet there is a host of new, more specifically digital tactics being
developed to stave off whatever the new technology scandal might be. We
could fill a substantial volume just documenting and explaining each of
these rhetorical devices. But to make way for a focus on the key pseudo-
political issues on which cyberlibertarian agitation rests, we will survey
these tactics briefly, pausing only to examine in detail a few of the more
interesting ones as illustrations of the general denialist moves.
As Microsoft President Brad Smith has asked in his new book, is technology
a tool or a weapon? Until quite recently, the answer for most people would
have been the former—that it is a valuable tool that makes our lives and
society better. But in the last several years, views have shifted, particularly
among opinion-leading elites who now finger “Big Tech” as the culprit
responsible for a vast array of economic and social harms. Termed the
“techlash,” this phenomenon refers to a general animus and fear, not just of
large technology companies, but of innovations grounded in IT.
The report goes on: “Techlash manifests not just as antipathy toward con-
tinued technological innovation, but also as active support for policies that
are expressly designed to inhibit it. This trend, which appears to be gain-
ing momentum in Europe and some U.S. cities and states, risks seriously
undermining economic growth, competitiveness, and societal progress. Its
policies are not rational, but the techlash has created a mob mentality, and
the mob is coming for innovation.” Phrases such as “opinion-leading elites,”
“general animus and fear,” “not rational,” and “mob mentality” all point
at the sub rosa ad hominem nature of the “techlash” tactic. The idea that
democracies and critics are irrationally pursuing regulation of technology
companies due to “fear” of “innovation” obviates any need to evaluate or
respond to specific criticisms. Terms like “moral panic” (a favorite of Jeff
Jarvis; see Jarvis 2019) and “technopanic” (another of his favorites; see Jarvis
2013a) are virtual synonyms for “techlash” when used this way.
published Tech Panic: Why We Shouldn’t Fear Facebook and the Future, which
accused critics of being motivated by irrational “fear” and thus unworthy
of serious attention. When all else fails, such fearful, tech-hating commen-
tators can always be dismissed as “Luddites.” However, it is ironic that the
historical Luddites were not “technophobes” but thoughtful critics of tech-
nology concerned with core matters of civil and human rights (Loeb 2018b,
2021b; Mueller 2021).
with the presumption that its existence proves that the concerns it men-
tions are absurd. Cory Doctorow responded to a statement by U.S. attor-
ney general Eric Holder in 2014 that the FBI needs some kind of access to
encrypted systems: “The arguments then are the arguments now. Govern-
ments invoke the Four Horsemen of the Infocalypse (software pirates,
organised crime, child pornographers, and terrorists) and say that unless
they can decrypt bad guys’ hard drives and listen in on their conversations,
law and order is a dead letter.” By labeling these concerns—notice that
they are not the same as either of May’s lists: drug dealers and money laun-
derers have been replaced by software pirates and organized crime, although
plausibly the latter category includes both of the omitted ones—as the four
horsemen, Doctorow writes as if he has demonstrated they are invented.
This has two important effects: it provides encryptionists with a rhetorical
tool that avoids engaging with the facts and arguments being offered; and
it gives weight to the conspiratorial view that democratic governments lie
to justify expansive powers they want for unstated reasons. In the age of
people characterizing Covid-19 vaccine mandates as “like the Holocaust,”
we can see many examples of the natural endpoint of this kind of antigov-
ernment agitprop.
Cyberlibertarianism uses tropes to prevent rational discussion of con-
straints on digital technology. In this case, the trope is a kind of criticism,
much like I am offering throughout this book. Yet there is a vital difference.
Cyberlibertarian tropes demand that we disregard the opinions of dedi-
cated researchers and commentators, particularly those associated with
democratic governments. As with all conspiracy theorists, their occasional
suggestions that we “do our own research” are conditioned by pointing only
to sources that reinforce their agnotological views. Further, they license the
total rejection of evidence that supports assertions that they disagree with.
Forget software piracy, a phenomenon that only exists in the networked
digital age and is therefore without question a real thing. Look at some
of the other horsemen: money laundering, drug dealing, and child pornography. Advocates and experts report dramatic increases in these spheres,
largely fueled by anonymized and encrypted digital technology. To take
only the most glaring example, child protection advocates continue to see
explosions in the creation and distribution of child sexual exploitation mate-
rials online (Dance and Keller 2020; Solon 2020). This is almost always
associated with the “Dark Web” (i.e., the network made available through
the use of Tor; more on that below) and encrypted messaging apps like
WhatsApp, which EFF and others argue should be made entirely impen-
etrable to law enforcement. One searches in vain for serious acknowledg-
ment from digital rights advocates that this is a real problem, let alone one
that follows naturally from the technology itself. There is no discussion
among “four horsemen” promoters that maybe this one trope can’t be dis-
missed out of hand. In fact, it continues to turn up with regularity on the
Cypherpunks mailing list.
Even worse, the encryptionists have developed a political theory that
considers absolute opacity to government oversight as the most fundamen-
tal human right. They can spin any concern about harms done to ordinary
persons as inconsequential compared to what would happen if govern-
ments were allowed to enforce laws. This leads ideologues like Doctorow
to make grandiose counterfactual statements like “the police have never
had the power to listen in on every conversation, to spy upon every inter-
action” (2014). On the contrary, properly warranted, governments have had
that power over every form of electronic communication, especially telephony, since such technologies went into wide use.
When Andrew Lewman stepped down as executive director of the Tor
organization, he stated that “95 percent of what we see on the onion sites
and other dark net sites is just criminal activity. It varies in severity from
copyright piracy to drug markets to horrendous trafficking of humans and
exploitation of women and children” (O’Neill 2017). Clearly Lewman was
in a unique position to know what he was talking about. Nevertheless, armed
with the four horsemen trope, digital rights promoters attacked Lewman,
claiming he was the one threatening human rights by daring to point out
that the main use for these tools, especially if they are beyond democratic
oversight, is to enable the worst kinds of antisocial behavior in our world.
It is hard not to connect the dots and suggest that some of those promoting these tools as fundamental to human rights intend to use them for
just the purposes mentioned in the four horsemen trope—and that they
are able to use the strategy to forestall oversight of these activities by the
authorities we entrust to prevent and prosecute them.
computer scientist credited with “inventing” the World Wide Web (Berners-Lee 1999), implored us to “act now to save the internet as we
know it” in the face of FCC commissioner Ajit Pai’s intention to repeal the
2015 net neutrality rules (see chapter 6), which would “upend the internet as
we know it” (Berners-Lee 2017). Apple cofounder Steve Wozniak and for-
mer FCC member Michael Copps offered the same warning: “Ending net
neutrality will end the internet as we know it” (2017). New York attorney
general Eric Schneiderman said the same thing on the MSNBC program
All In with Chris Hayes (Hayes 2017). AccessNow, another “digital rights”
organization that takes substantial funding from major digital technology
companies like Google, Reddit, and X, declares in a more anodyne tone
only that “the internet as we know it is at risk” (White 2017). Well before
this, Democratic senator Al Franken warned in 2010 that the FCC needed
to implement net neutrality rules to “save the internet”; without them, “the
internet as we know it is still at risk.”
The related phrase “save the internet” is no less flexible, invoked to pro-
mote various pieces of net neutrality legislation and related policies. The
Free Press organization, which claims to not “take a single cent from busi-
ness, government or political parties” (“About Free Press” 2024), has main-
tained the SavetheInternet.com domain since at least 2007 and conducts
campaigns under that name. Although the current incarnation of Free
Press has a solidly left political orientation, it is worth noting that in its
earliest and most public appearances, it was a coalition that included many
figures whose political commitments are much less clear (e.g., Lawrence
Lessig, Craig Newmark, Glenn Reynolds, the Christian Coalition of Amer-
ica, Susan Crawford, and Tim Wu; see “Save the Internet: Join Us” 2007).
Despite its admirable work on many fronts, Free Press repeats many of the
same tropes that we find across the political spectrum when it comes to
digital technology. I am arguing that these tactics gather their primary force
not from a literal interpretation of the words found in them, but rather as
political organizing tools used by many of digital technology’s most power-
ful entities (especially Google, Facebook, Amazon, and X) to distort policy
and regulation in their name.
As David Newhoff (2012) notes, this time regarding identical rhetoric used
in the SOPA and PIPA battles (recounted in chapter 1), these campaigns
resemble the carefully engineered right-wing opposition to Obamacare
organized around the idea of “death panels.” If opposition to “death pan-
els” were a serious belief, it would entail concern about the life and health
weaker. Alas, it hasn’t worked out that way. The paradox of the Internet is
that it has enabled greater control by authoritarians and fueled greater
disorder in open democracies” (Ignatius 2021a). This would be a “paradox”
if that “idealistic dream” had actually existed. But whose dream was it, and
where and by whom was it expressed, and with what authority? The most
obvious subjects are people who explicitly rejected democracy and embraced
authoritarianism, and said so.
Ignatius published another editorial the next day, in which he summarizes
the views of Chris Inglis, President Biden’s new “national cyber director”:
“His idea is that through new technology and better security, cyberspace
can again become a zone of enrichment and freedom, rather than of risk
and authoritarian control” (2021b). In both pieces Ignatius repeats the same
homilies about “internet freedom” (see chapter 6) while relying on the words
of the same experts (including Stanford Internet Observatory director and
ex-Facebook CSO Alex Stamos) most responsible for spreading cyberliber-
tarian denialism.
In an essay in The Verge called “We Have Abandoned Every Principle
of the Free and Open Internet,” Russell Brandom declares that he “feels
sad writing all of this down. These were important, world-shaping ideas.
They gave us a specific vision of how networks could make society bet-
ter—a vision I still believe did more good than harm” (2017). The world-
shaping ideas he mentions are all familiar. In addition to “free” and “open,”
he declares that anonymity and free speech are being lost. He also believes
that the values of “decentralized ownership” and “permissionless innovation”
are at stake. Brandom cites the work of computer scientist and personal
computing pioneer J. C. R. Licklider and internet pioneer Tim Berners-Lee.
However, we have no reason to think that either Licklider or Berners-Lee
had a credible vision of “making society better.” Instead, their visions were
focused on making computer networks better, and any ideas they had about
society proceeded from their understandings of machines. If one believes, as
do most non-right-wing critics, that the inherent tendency of technocratic
reason is toward the right, then any political consequences of these inventors’
views remain unclear. However, such inclinations will only point toward a
better society if one accepts the political right’s understanding of society.
Berners-Lee has been one of the signal promoters of false nostalgia. In
a 2018 profile, he walks a fascinating fine line between declaring that the
democratic dream was built into digital technology to begin with, and
suggesting the opposite: “‘We demonstrated that the Web had failed instead
of served humanity, as it was supposed to have done, and failed in many
places,’ he told me. The increasing centralization of the Web, he says, has
‘ended up producing—with no deliberate action of the people who designed
the platform—a large-scale emergent phenomenon which is anti-human’”
(Brooker 2018). “As it was supposed to have done” according to whom,
and why? It is conceivable he is referring to himself as among the “people
who designed the platform,” and it may be true that he dreamed of a net-
work that could produce democracy. Berners-Lee invented HTTP while
working as a computer scientist at CERN. The idea that political transfor-
mations should come from computer science is already deeply troubling.
Equally troubling is his apparent unawareness of the influence of far-right ideas on the development of computer technology to that point, and of the critical positions developed by progressive critics. Berners-Lee had an idiosyncratic dream that
the web would make the world more democratic, informed by syncretic
and inchoate ideas, many of which had their origins on the far right. Per-
haps he took “no deliberate action” to create a tool that would “fail human-
ity,” but that does not excuse willful ignorance of the tool’s politics.
“Innovation”
When all else fails, one of the primary strategies that moves beyond rheto-
ric and into actual policy is to claim that innovation is at risk. EFF has long
placed innovation alongside “digital privacy” and “free speech” as one of its
three primary values. This makes EFF stand out among nonprofits that
claim to be focused on civil and human rights, but fit right in with far-right
organizations associated with political libertarianism. Most of the major
digital technology companies demand that legislators think carefully about
“interfering” with innovation. However, these claims are often expressed
xenophobically or nationalistically. In the digital age, this has been most
frequently seen in claims by U.S. tech companies that Congress must not
regulate technology for fear that China will surpass them on one score of
innovation or another. Hoofnagle already pointed to the reliance on “stifling
innovation” as a key tool of denialism. “The denialist will argue that the
intervention will stifle innovation,” he writes; “arguments include ‘this is
just a tool,’ and ‘you’re banning technology’” (2007, 8).
Innovation is particularly pernicious because, unlike other denialist strate-
gies that may give regulators, legislators, and other observers pause because
A Marxist might quibble about whether profit is the best measure of real
innovation, but Vinsel and Russell are right to draw attention to Joseph
Schumpeter’s theory of “creative destruction”—itself derived from Karl
Marx’s theories about the foundations of capitalist production—as a guid-
ing feature of Silicon Valley thinking (see also Godin and Vinck 2017).
In 2013, the FAA proposed rules to create “test sites” for the use of com-
mercial drones in commercial airspace. Given the dangers associated with
air traffic and the heavily regulated nature of commercial airspace, it is not
hard to see why the FAA would be concerned about the potential for dan-
ger. Rather than prohibiting drones from commercial airspace altogether,
the regulator proposed a series of test sites where drone developers and
proponents could experiment with the technology and how it interacts with
commercial aircraft. This already seems a significant concession to indus-
try, since the potential for problems is manifest. Yet the Mercatus Center–
funded Technology Liberation Front (TLF) would have none of it. In a
filing with the FAA, the TLF noted that Google’s chief internet evangelist
and digital technology pioneer Vint Cerf had recently equated “permis-
sionless innovation” with the “open internet,” worrying about “policies that
enable government controls but greatly diminish the ‘permissionless inno-
vation’ that underlies extraordinary Internet-based economic growth to say
nothing of trampling human rights” (Cerf 2012). The TLF authors wrote,
“Like the Internet, airspace is a platform for commercial and social innova-
tion. We cannot accurately predict to what uses it will be put when restric-
tions on commercial use of UASs [Unmanned Aircraft Systems] are lifted.
Nevertheless, experience shows that it is vital that innovation and entrepreneurship be allowed to proceed without ex ante barriers imposed by regula-
tors” (Dourado 2013). Such sentiments from both Cerf and the TLF could
not be clearer: the only values that matter are “innovation” and “entrepre-
neurship.” Despite Cerf’s rhetorical nod toward them, the only “human
rights” that matter are those of technology itself, as realized in “innovation
and entrepreneurship.” Never mind that what is at stake in the FAA’s remit
is the safety of everyone who travels by air, not the success of venture capi-
talists or commercial startups. Reasonable regulation that allows for eco-
nomic development is anathema to technology, and the logic undergirding
that comes straight from beliefs about the nature of the “internet”—that
is, from cyberlibertarianism.
CHAPTER 3
Deregulation and Multistakeholderism
The rightward tendency of cyberlibertarianism is accompanied
by a specific legislative and regulatory (more properly, deregu-
latory) program that has been highly effective for almost two
decades. Climate denial, for example, relies on “scientists” who are not considered experts on climate science by anyone but themselves. Discussions
of digital ethics and politics, on the other hand, tend to be dominated by
authority figures seen as having “real” expertise, even by technicians work-
ing in the field (including engineers and computer scientists who openly
espouse cyberlibertarian ideas). Their authority in the political sphere derives
more from working with computers than from representing easily definable
political positions. Their expertise is repurposed for activism, and the beneficiary of that activism is the computer as the locus of rights. Although these
authority figures publicly portray themselves as defenders of civil rights,
they actually promote specific deregulatory governmental programs.
In the United States, cyberlibertarian advocacy is galvanized around
Section 230 of the Communications Decency Act of 1996. Section 230
advocacy is one of the places where industry influence is most visible. One
of the most prominent speakers in the field is Daphne Keller, director of
the Program on Platform Regulation at Stanford’s Cyber Policy Center. A
former longtime Google employee, until 2015 she was the company’s asso-
ciate general counsel for intermediary liability and free speech. She has also
taken money from the Koch Foundation to address “free expression in a
digital world” (Charles Koch Foundation 2020). Though Eric Goldman,
professor of law at Santa Clara University, has fewer direct industry con-
nections than Keller, the pair often play tag-team in defending Section 230
against all comers, turning it into a blanket ban on regulation that must be
preserved no matter the cost.
Along with Mike Masnick of TechDirt, Goldman and Keller are among
the most reliable media talking heads and congressional testifiers. Their posi-
tions, while sometimes nuanced, typically align with whatever Google hap-
pens to be promoting at that moment. The Tech Transparency Project noted
in 2017 that when the SESTA/FOSTA bills, which offered only slight revi-
sions to Section 230, were proposed, “almost all the nonprofits, academics
and policy groups that have adopted a public stance against the bill have
received financial support from Google” (2017b). Keller, Goldman, and
Masnick are all included in this list. (Masnick is funded by the “innovation”-
focused Copia Foundation he founded, which has taken money from Google
and the VC firm Andreessen Horowitz; the piece notes that Goldman is
a faculty scholar at a center in Santa Clara that “received $500,000 from
Google in 2011 as part of a cy près legal settlement the company made over
privacy violation.”) Listed as well are EFF, CDT, ACLU, “right-of-center
groups such as the Heritage Foundation and R Street Institute; left-of-center
and free speech groups such as the New America Foundation; and even
academics at some of the leading institutions in America such as Harvard’s
Berkman Klein Center.”
Section 230 is only the tip of the iceberg here. The TTP “identified 330
research papers published between 2005 and 2017 on public policy matters
of interest to Google that were in some way funded by the company”
(2017a). The report noted that even though academics received support
from Google in more than half of the cases, a quarter of those authors did
not disclose their funding relationship. In cases where the funding was in-
direct, even fewer disclosed appropriately. This is an operation of immense
sophistication and influence. The TTP rightly notes that “Google has paid
scholars millions to produce hundreds of papers supporting its policy inter-
ests, following in the footsteps of the oil and tobacco industries.” Of course
this only talks about Google; Facebook, Amazon, X, and all other digital
tech corporations employ similar strategies for shaping public discourse.
Outside the United States the situation is even more difficult to assess.
The question of how the world should govern and regulate digital technol-
ogy internationally, of great interest to technology companies, is addressed
by the same actors who dominate the Section 230 discussion. They are joined
by a group of “internet governance” experts who come from engineering
THE HISTORY OF SECTION 230
Until the late 2010s, few Americans had heard of Section 230 of the Com-
munications Decency Act (CDA) of 1996. In the later years of the Trump
administration, that changed for many reasons, and Section 230 became a
lynchpin of domestic cyberlibertarian political strategy. In the early 2020s,
both Republicans and Democrats in Congress expressed significant con-
cerns about the law, suggesting that it required revision but disagreeing
about what changes should be made. Legal scholar Danielle Keats Citron
explains the genesis of the CDA in 1995 congressional deliberations and
how Section 230 came to be added to the law:
At the time, online pornography was considered the scourge of the age.
Senators James Exon and Slade Gorton introduced the CDA to make the
internet safe for kids. Besides proposing criminal penalties for the distribu-
tion of sexually explicit material online, the Senators underscored the need
for private sector help in reducing the volume of noxious material online.
In that vein, Representatives Christopher Cox and Ron Wyden offered an
amendment to the CDA entitled “Protection for Private Blocking and Screen-
ing of Offensive Material.” The Cox–Wyden amendment, codified in Section
230, provided immunity from liability for “Good Samaritan” online service
providers that either over- or under-filtered objectionable content. (2018)
victims have no leverage to insist that they do so. Rebecca Tushnet put it
well a decade ago: Section 230 ensures that platforms enjoy ‘power without
responsibility.’” She goes on:
Twenty years ago, federal lawmakers could not have imagined how essential to
modern life the internet would become. The internet was still largely a tool
for hobbyists. Nonetheless, Section 230’s authors believed that “if this amaz-
ing new thing—the Internet—[was] going to blossom,” companies should
not be ‘punished for trying to keep things clean.’ Cox recently noted that
“the original purpose of [Section 230] was to help clean up the Internet, not
to facilitate people doing bad things on the Internet.” The key to Section
230, explained Wyden, was “making sure that companies in return for that
protection—that they wouldn’t be sued indiscriminately—were being respon-
sible in terms of policing their platforms.”
a single court ruling in May of that year [that] threatened to smother the
internet in its crib. Prodigy, an early provider of online services, was found
to be legally liable for a defamatory anonymous posting on one of its mes-
sage boards. The ruling had chilling implications: If websites could be sued
over every piece of content that someone didn’t like, the internet’s growth
might come to a halt. Cox read about the Prodigy ruling on a flight from
California to Washington and had one thought: I can fix this!
“A light bulb went off,” he told me recently. “So I took out my yellow
legal pad and sketched a statute. Then I shared it with Ron.”
That statute eventually became Section 230. In hindsight, the concept is
ridiculously simple: Websites aren’t publishers. They’re intermediaries. To
sue an online platform over an obscene blog post would be like suing the
New York Public Library for carrying a copy of Lolita. For a young internet
facing a potential avalanche of speech-squelching lawsuits, Cox and Wyden’s
provision was a creative workaround—a hack—that allowed this new form
of communication to grow into the thriving network of commercial enter-
prises we know today. (Zara 2017)
and so on, has hung on to this day. As a result, debates about Section 230
frequently refer to it as a matter of “intermediary liability.” However, the
question whether this accurately describes platforms remains open.
Cox and Wyden developed their “hack” for digital technology without
going into too much depth on the fundamental question. The dilemma they
saw was solved by giving platforms what many refer to as “two immuni-
ties.” This interpretation was largely due to the 1997 Fourth Circuit Zeran
v. America Online decision, which quickly established the terms under
which Section 230 would be interpreted in the decades that followed. As
Section 230 proponents Berin Szóka and Ashkhen Kazaryan put it recently:
The language of the law and the interpretation of the Fourth Circuit
court in Zeran were both determinative factors in what Section 230 means
for the contemporary internet. Contrary to the opinions of dogmatists like
Malcolm, most scholars and lawyers (Jain 2020; Kosseff 2020, 2019, 92–
96; Sylvain 2020) believe that the Zeran interpretation was not inevitable
and is itself critical to the functions of Section 230 under U.S. law. Describ-
ing Wilkinson’s disagreement with the arguments offered by Zeran’s attor-
ney Leo Kayser, Jeff Kosseff writes:
the laws in much of the rest of the Western world. This would allow any
person or company who is unhappy with user content to bully a service
provider into taking down the content, lest the provider face significant legal
exposure. But Wilkinson disagreed with Kayser. He concluded that Section
230 immunizes online platforms from virtually all suits arising from third-
party content. This would forever change the course of Internet law in the
United States. (2019, 94–95)
Kayser then asked for further review, and his request was denied:
Zeran requested that all the judges on the Fourth Circuit—rather than just a
panel of three—rehear the case, but the Fourth Circuit denied the request. He
then asked the United States Supreme Court to rehear the case, but in June
1998 it refused to do so. He was stuck with Judge Wilkinson’s ruling that he
could not sue America Online over the anonymous user’s posts. Wilkinson’s
interpretation of Section 230 was so broad that it exceeded the standard First
Amendment protections afforded to publishers. Zeran turned Section 230 into
a nearly impenetrable super–First Amendment for online companies. (95)
Kosseff and others argue that Wilkinson’s ruling plays a key role in the
Section 230 doctrine that followed. However, Kosseff also finds that one
of Section 230’s cosponsors, Ron Wyden, expected that outcome: “When
Congress passed Section 230, did it intend to create the sweeping immu-
nity that Wilkinson provided in Zeran? Absolutely, Wyden told me in 2017.
‘We said very bluntly that we thought it would freeze innovation if some-
body who owned a website could be personally liable,’ Wyden said. ‘There
was not a lot of rocket science there.’” The claim is that Cox, Wyden, and
other members of Congress, along with President Clinton, passed Section
230 into law as part of the Communications Decency Act of 1996 with the
expectation that courts would rule as Wilkinson did. They intended to pro-
tect “innovation” as a matter of free expression. Section 230, like many other
cyberlibertarian tenets, turns out to be primarily about protecting business
interests, and only secondarily, if at all, about citizen or political interests.
While some commentators feel that the Zeran interpretation of Sec-
tion 230 was inevitable and even intended by the law’s authors, others,
especially analysts with a left-leaning perspective, disagree. Legal scholar
Olivier Sylvain analyzes the Zeran decision this way:
The immunity under the CDA, codified at 47 U.S.C. § 230, gives such
intermediaries cover largely because courts have read the protection broadly.
And they have had good reason to. The first operative provision of the
statute states that “[n]o provider or user of an interactive computer service
shall be treated as the publisher or speaker of any information provided by
another information content provider.” Congress’s reference here to “pub-
lisher or speaker” draws from defamation law doctrine, where a defendant
publisher is as liable for republishing reputation-damaging material as its
author. When enacted in 1996, Section 230(c) was intended to bar courts
from holding providers liable for publishing information that could harm
users’ reputation.
This was and remains an idiosyncratic and exceptional treatment under
law. Newspapers and book imprints, for example, remain as liable for pub-
lishing unlawful classified advertisements or opinion editorials as the original
authors are. Legislators in 1996 expressed the view that providers of online
services and applications were different—that they should not be held to
account for the massive amounts of third-party user content that they host
and publish. Parroting the emergent ethos among technologists and internet
free-speech activists, legislators in this period found that imposing liability
on online intermediaries for failing to screen or remove all offending content
would exact a “chilling” toll on all users that is far greater than it would be
for traditional publishers. In such a world, providers would censor any con-
tent that they rightly or wrongly believe exposes them to liability. Section
230 relieves intermediaries of that heavy burden in the interest of promoting
entrepreneurship and freedom of expression online.
Most legislators in 1996, however, could not have anticipated that the
internet would permeate public life or that intermediaries would engineer
practically all our online conduct. They did appreciate, however, that the
protection could not be absolute. Section 230 specifically provides that the
immunity recedes when the provider in question “is responsible, in whole
or in part, for the creation or development” of the offending information.
Congress also wrote in a “Good Samaritan” safe harbor to incentivize pro-
viders to mind their users’ “objectionable” content. (2018, 210–11)
Sylvain goes on to explain that “courts read Section 230 extremely broadly
in spite of how it is written. They hold that the provision immunizes net-
worked services and online applications from liability for publishing the
illegal content of their users. So, under current law, a social media company
cannot be held responsible for allowing a user to post compromising pri-
vate photographs of his ex-girlfriend publicly. A search engine cannot be
called to task under law for displaying the advertisements of third parties
that sell copyrighted ringtones” (215). Legal scholars Danielle Keats Citron
and Benjamin Wittes echo these concerns: “The broad construction of the
CDA’s immunity provision adopted by the courts has produced an immu-
nity from liability that is far more sweeping than anything the law’s words,
context, and history support. Platforms have been protected from liability
even though they republished content knowing it might violate the law,
encouraged users to post illegal content, changed their design and policies
for the purpose of enabling illegal activity, or sold dangerous products. As
a result, hundreds of decisions have extended § 230 immunity, with com-
paratively few denying or restricting it” (2017).
While Zeran in particular stresses that Section 230 appears to grant digital
platforms immunity from laws that might otherwise apply to them, the
law was intended to have two related effects that are at cross-purposes. One
was to encourage platforms to moderate problematic content. Congress
“hoped to encourage the companies to feel free to adopt basic conduct
codes and delete material that the companies believe is inappropriate”
(Kosseff 2019, 2). But it was also intended, Kosseff says, to “allow technology companies to freely innovate and create open platforms for user
content. Shielding Internet companies from regulation and lawsuits would
encourage investment and growth, they thought” (2–3).
One of the most fascinating aspects of Section 230 is that even the most
well-informed commentators do not agree about what it means. This con-
fusion, while typical of the popular understanding of many laws and tech-
nical issues, is so magnified that one wonders whether it is part of the law’s
central function. These twenty-six words “create the internet” in the sense that they create a fog around questions of the legality of digital technologies that is nearly impenetrable for anyone, including judges and legislators, so that their effects are felt in ways that obscure their specific legal consequences.
In 2020, the digital tech trade organization Internet Association surveyed
more than five hundred cases spanning two decades. Their report claimed
that “far from acting as a ‘blanket immunity’—Section 230 only served as
the primary basis for a ruling in 42 percent of the decisions we reviewed”
and that “the law continues to perform as Congress intended, quietly pro-
tecting soccer parents from defamation claims, discussion boards for nurses
and police from nuisance suits, and local newspapers from liability for com-
ment trolls” (Banker 2020, 2). The very need for such a survey testifies to the confusion surrounding Section 230.
In Jamie Bartlett’s 2017 BBC documentary Secrets of Silicon Valley, Jeremy
Malcolm, a senior policy analyst at EFF, is featured in a segment where
he advocates for both “internet freedom” and “multistakeholderism.” Mal-
colm explains the importance of the legislation to EFF’s worldview; when
asked by Bartlett to read the “key line” in Section 230, Malcolm replies:
“I think if we didn’t have this, we probably wouldn’t have the same kind of
social media companies that we have today. They wouldn’t be willing to take
on the risk of having so much unfettered discussion.”
Bartlett: “It’s the key to the internet’s freedom, really?”
Malcolm: “We wouldn’t have the internet of today without this. And so, if
we are going to make any changes to it, we have to be really, really careful.”
means. They develop and find places where regulation may apply, but the
letter of the law, legal precedent, or other factors put it beyond reach.
The title of Jeff Kosseff’s 2019 book, The Twenty-Six Words That Created the Internet, repeats and expands on Malcolm’s claim in ways very typical
among the law’s most ardent supporters: that not just social media companies (most of which did not exist in 1996 when Section 230 was passed),
and not just the World Wide Web, but the internet itself could not function
without it. Appearing to endorse the claim that the law is necessary, Kosseff
writes that “YouTube, Facebook, Reddit, Wikipedia, Twitter, and eBay . . .
simply could not exist without Section 230” (4). Yet in the same paragraph
Kosseff rightly notes that those companies operate in many (indeed, almost
all) countries, few of which have anything like Section 230 protections, and
yet the platforms do not come crashing to the ground. In none of them
does the internet “break.” We rarely hear stories of internet companies fac-
ing the kinds of consequences we are told would be omnipresent in the law’s
absence. Even if Section 230 “created” the internet, the internet persists
quite robustly where the law does not exist.
It is not particularly notable that a law with many consequences is vague
and confusing. Yet it is hard to overlook that those who most credit Section
230 with having “created the internet” are also those who most frequently
chastise academics, activists, lawyers, politicians, and ordinary citizens for
being “ignorant” about the legislation. This kind of agitation is especially
odd given how digital rights activists insist we need the internet due to
vague ideals such as the “democratization” of information or knowledge
(see chapter 6). It is curious to claim that digital technology is fundamentally necessary for democracy because of its ability to create a better-
informed citizenry, and yet in the same breath claim that the citizenry and
politicians are too poorly informed about the “law that created the inter-
net” to speak credibly about it.
Rep. Christopher Cox (R-Calif.), who cowrote Section 230 with Rep.
Ron Wyden (D-Ore.), testified before the Senate in 2020 that “notwith-
standing that Section 230 has become a household name, a complete under-
standing of how the law functions in practice, and what it actually does, is
harder to come by” (Cox 2020, 10). The law exists in a penumbra of uncer-
tainty. Cox’s account of the history and function of Section 230 is detailed
and consistent with those of most scholars and activists. He explains the
reasoning that spurred him and Wyden to put forth their amendment:
speech platforms, the “platform” enabler is rarely held liable for the content
of the speech presented or for moderating that speech. In some techno-
logical contexts, this is so obvious as to be unquestioned by almost every-
one. For example, telephone companies are rarely held liable for activities
conducted over the phone system, even if those activities go well past the
typical speech torts of slander or libel, up to criminal activities that inher-
ently involve speech but gain no speech protections (e.g., arranging mur-
ders or other violent crimes). Similarly, the owner of a performance hall
is rarely held accountable for speeches made by event participants. Even if
there may be outlier cases where the owner or event organizer has been
held partly responsible for the actions of a speaker or performer, such par-
ties have rarely been held responsible for the actions of the audience to
a performance or speech. This appears to be a reasonable parallel to digital
platforms.
But that parallel may be misleading. Other platforms primarily orga-
nized around expression have considered the regulation and governance
of content important for the flourishing of the medium and its fit into
democratic principles. Radio and television are the most obvious examples: for their analog histories, both required the allocation of frequencies and ownership rights in order to function. As a condition of licensing channel 2
in the New York metropolitan area or AM band 760 in the Detroit metro-
politan area, say, the providers of content and technical services pledged to
moderate their output and were thus liable for any breaches.
Meanwhile, the vagueness surrounding Section 230 may serve more im-
portant functions than are immediately obvious. The claim that nobody
understands the law is particularly common among the law’s most ardent
supporters, such as Keller, Goldman, and Masnick. Masnick in particular
has made a career of mocking any and all presentations on the law, espe-
cially those by legislators and legal scholars. He suggests that “you just read
the law, because it seems that many people who are making these mistakes
seem to have never read it” (2020b). But he fails to note that understand-
ing the law’s effects requires at least a review of its complex jurisprudential
history, especially Zeran v. America Online, whose application of the law some thought
inevitable and others continue to think overbroad. While Masnick is right
to point out some widespread misunderstandings (e.g., “Because of Sec-
tion 230, websites have no incentive to moderate” or “A site that has polit-
ical bias is not neutral, and thus loses its Section 230 protections”), most of
his “corrections” are similar to those made by legal scholars like Citron,
Franks, Wittes, and Sylvain (e.g., “Section 230 is a massive gift to big tech!”
“Section 230 means these companies can never be sued!” “Section 230 is
a get out of jail card for websites!” or “Section 230 gives websites blanket
immunity!”), all of which suggest that the largest internet companies expe-
rience significant benefits from Section 230 that law cannot or should not
regulate.
Despite Masnick and others insisting that critics of Section 230 simply
need to “read the law,” the fact remains that Section 230 plays several roles
in the propagandistic framing of digital technology and its power, which
the cyberlibertarian frame helps us to unpack. Section 230 provides legal
protections in some cases but not in others; even courts disagree about
when and how it should apply. However, the larger effect is to create the
appearance that these companies should be outside the purview of law.
Despite the fact that law is law and is largely understood to function via
judicial and regulatory action, law also has other cultural and political func-
tions. Masnick and others promote a “veil of ignorance” around Section
230, which has resulted in everyone from judges to legislators to litigants
being hesitant to do anything that might touch on the apparent untouch-
ability of internet companies. The repetitive nature of these attacks on
critics for being “ignorant” is itself a mark of the power of Section 230 and
the way cyberlibertarianism functions to preserve and expand the power of
digital technology and its platforms.
Mary Ann Franks, who has studied the law closely both as a legal scholar
and practitioner, writes that Section 230 “dramatically ramps up” a “morally
hazardous conception of the First Amendment—allowing giant corpora-
tions to risk the safety, security, and wellbeing of billions of people in the
pursuit of profit” (2019a, 181). Carrie Goldberg, a victims’ rights litigator
in New York, writes that “until 2018, the courts’ application of Section 230
of the CDA was so broad that even websites facilitating online child sex
trafficking and exploitation were able to do business with impunity” (2019,
47). She details her experience suing the makers of the dating app Grindr
on behalf of Matthew Herrick, an actor–model who was targeted for ongo-
ing and horrifying abuse by an ex-partner, Oscar Juan Carlos Gutierrez.
After the couple broke up, Herrick discovered that Gutierrez was posting fake profiles of him on Grindr and other dating apps, where he “eagerly
invited men for fisting, orgies, and aggressive sex. Fake Matt instructed
SECTION 230 IN A GLOBAL FRAME
One would be hard-pressed to find any such admissions, let alone surveys
of facts on the ground, in the debates between Section 230 supporters and
critics. However, even brief reflection shows that there are options avail-
able beyond the binary Johnson and Castro identify.
In late 2019, despite controversies over Section 230, technology firms,
not human rights activists, pressed Congress to include Section 230–like
provisions in the U.S.–Mexico–Canada Agreement (USMCA) on trade. By
this point in the Trump presidency, those on the left were concerned about
how Section 230 protections seemed to aid the right in attacking democracy,
while the right had the opposite concern—that Section 230, through its
“encouragement” for various forms of content moderation, was “censoring
conservative speech.” President Trump was trying to repeal Section 230
entirely during the USMCA deliberations. These efforts continued until the
end of his term in late 2020, when he vetoed the annual National Defense
Authorization Act because it failed to repeal Section 230 (Hatmaker 2020).
The bill had such bipartisan support that both houses of Congress and
representatives of both parties were able to override his veto by healthy margins (Mazmanian and Williams 2021).
On the surface, this development would indicate bipartisan support for
Section 230. However, the USMCA negotiations showed the opposite. Tech-
nology companies and their representatives, including figures familiar from
Section 230 advocacy, were the main voices demanding an inclusion of Sec-
tion 230–like provisions in the USMCA. Yet independent critics—including
legal scholar Danielle Citron, computer scientist Hany Farid (developer of
the main algorithm used to moderate child abuse imagery), and Gretchen S.
Peters, executive director of the Alliance to Counter Crime Online—were
represented at the October 2019 hearings on the trade agreement (U.S.
House Committee on Energy and Commerce 2019). These critics had lit-
tle good to say about the proposed international expansion of Section 230
protections and much to say about how and why the provision might be
revised in the United States. Further, industry representatives—including
Google’s global head of intellectual property Katherine Oyama, Reddit
cofounder and CEO Steve Huffman, and EFF’s legal director Corynne
McSherry—were unable to provide cogent explanations for the new doc-
trine or its expected outcomes.
Prominent congressional members of both parties voiced opposition
to the inclusion of Section 230 protections in the USMCA: “The Demo-
cratic chairman and ranking Republican on the House Energy and Com-
merce Committee—which oversees tech companies—complained that it
was wrong to include the legal shield in the trade agreement at a time when
some lawmakers were still debating the U.S. version. ‘We find it inappro-
priate for the United States to export language mirroring Section 230 while
such serious policy discussions are ongoing,’ said the letter from Chairman
Frank Pallone (D-NJ) and Rep. Greg Walden (R-OR).” Speaker Nancy
Pelosi (through a spokesperson) indicated that there were “concerns in the
GLOBAL POLICY AND GOVERNANCE: MULTISTAKEHOLDERISM
it was still isolated inside ARPANET in the early 1970s. ICANN’s main role is overseeing the registration of domain names (e.g., google.com, amazon.com) and managing associated activities such as the provision of top-level domains like .com and .net. Since early internet activities, including ARPANET, occurred largely
in the United States and were at first thought to be narrow in scope, the
provision of domain names emerged organically among the community
of technologists. However, as the power and influence of the internet and
later the World Wide Web increased, the service became increasingly im-
portant, and the political consequences of the IANA and ICANN struc-
ture became topics of significant debate.
As Milton Mueller, one of the most prominent voices in internet gover-
nance, puts it: “ICANN was one of the most prominent and important manifestations of the way the Internet was transforming the relationship between
people and their governments. ICANN’s original institutional design marked
a revolutionary departure from traditional approaches to global governance.
It significantly reduced the power of national governments and existing
intergovernmental organizations over communication and information
policy” (2010, 60–61; emphasis in original). Mueller offers four “structural
facts about ICANN” that made it groundbreaking: it was “set up to meet
the need for global coordination of unique Internet names and addresses”;
it “was one of the few globally centralized points of control over the Internet”; it
“represented a privatization of significant aspects of the global governance
function”; and it “was supervised by and accountable to a single sovereign
and the world’s only remaining superpower, the United States” (61; empha-
sis in original).
It is easy to see how this last fact, in particular, can generate significant
critique from any number of quarters and from those of different political
orientations. Mueller is one of many to point out that if ICANN were
to serve a global purpose, then there was an evident contradiction in its
doing so as a U.S.-based NGO by virtue of exclusive contracts with the
U.S. federal government, in partnership with a U.S.-based corporation
(Verisign). Mueller is right to note that “sovereigns outside the United
States perceived [ICANN] as a threat to their authority.” At the same time,
Mueller writes that “ICANN could be, and has been, criticized from a cyber-
libertarian perspective as a new form of centralized control over the Inter-
net and a sharp departure from the earlier Internet’s freer, self-governing,
and technically neutral administration” (64). Unlike the rest of his analy-
sis, here Mueller acknowledges that the challenge to national sovereignty
suggested by ICANN was salutary not merely because it pushed back on
U.S. domination of the digital space, but because it highlighted how digi-
tal technologies would inevitably challenge national sovereignty itself.
Multistakeholderism is at times offered as a prescriptive and at other times
a descriptive term for internet governance. As with many other topics at
the intersection of digital technology and politics, the meaning of multi-
stakeholderism is subject to continual shifts. Some of these shifts appear
more or less deliberate on the part of interested participants, which can
make clear reasoning about its role difficult. Laura DeNardis, director of
the Internet Governance Lab at American University, rather than advocat-
ing for multistakeholderism, argues that “the very definition of Internet
governance is that it is distributed and networked multistakeholder gover-
nance, involving traditional public authorities and international agreements,
new institutions, and information governance functions enacted via pri-
vate ordering and arrangements of technical architecture” (2015, 23).
A report from the nonprofit Institute for Multi-stakeholder Initiative
Integrity, which engaged with multistakeholder processes and the institu-
tions and individuals they intersect with, found that multistakeholder initia-
tives (MSIs) are “tools for corporate-engagement rather than instruments
of human rights protection” (MSI Integrity 2020, 6). The report also states
that they “should not be a substitute for public regulation” (7).
Between 2010 and 2017, MSI Integrity developed an evaluation tool for
MSIs along with an associated research methodology. The tool was devel-
oped hand in hand with more than forty international standard-setting
MSIs. Many of these organizations appear to have commitments to global
civil society and democratic principles that are less questionable than those
of digital technology advocates. The Rainforest Alliance, Forest Stewardship
Council, Global Sustainable Tourism Council, and Infrastructure
Transparency Initiative, among others, look like organizations that exist to empower
citizens and raise concerns about rights violations: “Many MSIs were formed
in response to the exposure of major industry-wide human rights abuses,
which prompted demands to address the underlying governance gap that
enabled the abuse” (26). In general, MSIs “evolved into a default response
to business-related human rights crises as a compromise between non-
regulation and mandatory regulation” (33).
However, he also argues that it “evades the key axes of national sovereignty
and hierarchical power”—where “national” and “hierarchical” become syn-
onymous and unacceptable, versus a networked form of power insulated
from hierarchy and abuse. Mueller, echoing Carl Schmitt, decries this basic
form of multistakeholderism advocated for by sustainability advocates like
Hemmati, as it tends toward “a simple-minded communitarianism that
implies that all political, economic, and social conflicts can be resolved if
everyone involved just sits down and talks about them together. By focus-
ing almost exclusively on the interaction or dialogue among stakeholders,
it tends to evade or ignore issues of rights, access, power, and related issues
of institutional design. It invites private sector and civil society actors to
‘participate’ in decision-making process, leaving their precise role or author-
ity over the process indeterminate" (264–65). Mueller contradicts himself
by demanding hierarchy despite his belief that it should be avoided at all
costs. He also suggests that multistakeholderism did not originally subordinate
itself to democratic forms, which is incorrect. On this view, democracy is "all
talk," just as Schmitt famously stated, while the new anti-hierarchy "network"
is a thing of action, which is ironic given that Mueller views digital technology
as primarily a communications medium and therefore as enabling talk in the
broadest sense. For this reason, the network should be put beyond the reach
of democratic polities.
Rather than multistakeholderism or another paradigm he calls “access to
knowledge,” Mueller recommends “denationalized liberalism.” Recall that
in Mueller’s terminology, liberalism is always economic:
and the technical organizations/people who run it, and not to social
scientists and the like" (Gurstein 2013). Perhaps so, but given the wide remit
of the committee, it seems unfair to limit the input of those not specifically
invested in building technology. Most troubling, Gurstein asked “to be
pointed to the specific document and authoritative reference where this
definition was presented (as for example by the UN itself ).” He received
no reply. Upon further inquiry, he was told that the rejection was due to
conversation with “many individuals from Civil Society and the Business
community (including their focal points)”—with presumably much stress
on the latter (and its incursions into the former), since the less technically
focused parts of civil society were among Gurstein's supporters. It is no
surprise that the criteria for membership in the T/A group could be further
restricted to "individuals who have technically built the Internet" and
“the community of organizations and individuals who are involved in the
day-to-day operational management of the Internet and who work within
this community.” Although this group would seem to include individuals
like Gurstein and organizations like the ones in which he participated, it
could now be redefined to include only a small set of technicians.
Gurstein goes on to provide his own interpretation of this set of events:
What this means I think is that the prevailing and self-determined definition
of the T/A stakeholder group includes probably no more than 3–400 people
in the entire world, all of whom have some professional association with
the technical management of the Internet (the alphabet soup of technical
Internet governance organizations—ICANN, the Internet Registries and a
few others in standards organizations), perhaps at least 80% of whom are
from developed countries and at least 80% of those being US based, at least
80% being male (it is probably much higher given the absence of women in
these kinds of technical roles) and from sad experience having essentially no
knowledge or interest in matters that stretch beyond their narrow highly
technical realm.
It further means that the group representing the T/A stakeholder “com-
munity” is able to design its own “restrictive covenant” (define who is a
member and who is not), exclude whomever it wishes on whatever basis
suits it and moreover is not accountable or required to have any degree of
transparency in its internal operations, decision making procedures, internal
governance structures and so on. Notably, this group functions in an area of
Thus a body was created and designed to represent all “stakeholders” re-
garding how technology would impact lives, livelihoods, and rights all over
the world, convened by the single institution with the most responsibility
for such tasks. Yet instead of increasing participation and representation as
intended by the original calls for multistakeholderism, the process is turned
on its head. The most powerful get seats at the table, the least powerful get
no seat at all, and there is no transparency. What had been at best a flawed
approach to democratic representation becomes “a fundamental challenge
to what we understand as democratic governance and governing processes.”
Gurstein was never shy in describing the real point of multistakehold-
erism and its function, which he saw as undermining democracy. It is no
accident that the post from which the foregoing quotations are taken was
titled “Multistakeholderism vs. Democracy.” In a subsequent post he elab-
orates on the explicitly antidemocratic character of multistakeholderism,
at least as applied in the digital technology space:
example, those without a “direct” stake in the outcomes but who neverthe-
less might as a consequence of their simple humanity be understood to be
impacted by the decisions being taken.
To me it is quite clear that “democratic governance” and “multi-stakeholder
governance” are internally in contradiction with each other. At their core,
democracy as in the “rule of the people” is one form of government and
multi-stakeholderism as in “the rule of ‘stakeholders’” is another and com-
peting form. I don’t think that they can be reconciled. (Gurstein 2014b)
following Christchurch, but concerns herself mainly with whether the deci-
sion process used for the Christchurch Call qualifies as “multistakeholder.”
She claims that “the Christchurch Call commitments were drafted and
signed with no participation from civil society,” implicitly suggesting there
are bodies outside of democratic governments that must be included in deci-
sion making. In her view, the failure to include this amorphous group, which
includes many actors with corporate ties they do not need to disclose when
providing “advice,” “is a way to differentiate a democratic mindset from an
authoritarian mindset." Badii makes no specific recommendations about the
actual Christchurch Call, offering instead a series of procedural imperatives
that seem designed to blunt the Call's impact.
In an editorial written after a racially motivated mass murder in Buffalo,
New York, in May 2022, Center for Countering Digital Hate (CCDH)
CEO Imran Ahmed notes that the shooter mentioned the Christchurch
terrorist by name in his manifesto. However, “despite pledges made in the
wake of the Christchurch terrorist attack and subsequent, ideologically
driven extremism attacks,” the most recent CCDH report “found that the
big social media companies were collectively failing to act on 89% of posts
that advocated the great replacement theory,” a cornerstone of far-right
and white supremacist ideology. Given the apparent agreement among
corporations and governments to take actions that seem not to have been
taken, it is hard to avoid suspecting that the multistakeholder civil society
groups represent the part of the digital–industrial alliance determined to
subvert democratic regulation of technology.
INTERNET GOVERNANCE IN THE EUROPEAN UNION
In recent years, the European Union (EU) has been the global governance
body most attentive to the destructive and antidemocratic aspects of digi-
tal technology. Many of its efforts have been effective. And because in-
ternet companies with global reach want to continue to operate there, EU
regulations have frequently been adopted worldwide.
At every stage, with every proposal, the global internet community,
including its apparently nonprofit exponents, has used every trick in the
digital denialist book to forestall these measures. In nearly every case, tech-
nology promoters have distorted, exaggerated, and outright lied about the
intent and likely effects of such laws. And as in the SOPA/PIPA debate and
so many other campaigns, they have rarely been held to account for their
deceptive practices.
Some of these conflicts show how digital promoters use a U.S.-centric
conception of human rights law to manipulate and negate such laws in
other democracies. The EU and its member nations have long had much
stronger privacy and reputation protections enshrined in human rights
law. In the digital age, the right to manage one’s public profile and protect
one's reputation has been severely challenged. This is due to the persistence
and availability of misinformation, as well as of accurate but derogatory
information that may remain unduly influential long after the events it describes.
Thus, in the EU, as well as some other nations, there exists a formal or de
facto “right to be forgotten” (RTBF) that allows individuals, especially pri-
vate citizens, to have some control over how widely information about them-
selves is disseminated (Mantelero 2013; Da Baets 2016). It is not clear that
all accurate information should be made perpetually available. Some coun-
tries have laws that prohibit employers from considering certain kinds of
criminal convictions in employment decisions after set time periods. How-
ever, persistent digital information about such convictions can in practice
undermine such legislation.
In the early 2010s, a variety of laws and court cases brought preexisting
reputation rights to the fore. These cases concerned not the direct provision
of information but the role played by search engines in providing links to
information. Such services publicize data
about which an individual may have privacy or reputation rights. Frank
Pasquale provides an anonymized example of the problem:
The German legal system had already determined that Jane Doe had a cause
of action against anyone who spread apparently false information about
her. However, Google’s search engine continued to spread it. The com-
pany, which is neither a publisher nor distributor of data, was not neces-
sarily reachable by the same laws she was able to successfully use against
those who originally posted the information.
Predictably, Google and its many defenders attacked both the judicial
decisions and the EU legislation as gross abrogations of what they called
“human rights.” Although RTBF refers only to the question of linking to
information, not deleting it, opponents continued to portray the law as
an attack on the ability to publish information in the first place. Many
public opponents accepted this misleading framing. Google asserted that
its own “free speech” rights (see chapter 6) trump the explicit, legally estab-
lished rights of privacy and reputation ascribed to private individuals by
the EU and its member states (Powles and Chaparro 2015). Jimmy Wales
and the Wikimedia Foundation both claimed that RTBF would attack
both the real human right to free speech and the important values of
"history" and "truth" (which are not enumerated human rights). They did not note that
RTBF protects specific rights enshrined in law and constitutions (Jones
2016; Powles 2014).
EFF published a position paper opposing the EU’s attempts to make
Google delink certain entries from its search engine because it “threatens
rights in the United States to publish and receive information, including
information about government activities” (Greene et al. 2016, 2). EFF’s jab
at “government activities” displays its typical hostility toward democratic
governance, misleadingly deployed in an inapplicable context to promote
antigovernment conspiracism. EFF also insists that its version of U.S. free
speech absolutism should apply to democratic polities, especially the EU,
that have other ways of protecting speech. Just as predictably, TechDirt has
continued to remind readers of its opposition to RTBF because it “stifles a
free press and free expression” (Masnick 2019). The site publishes on aver-
age ten stories each year attacking the EU and other countries that attempt
to manage individual privacy and reputation rights.
Despite this outsize coverage of the issue, in general the actual effects
of RTBF have been neither disastrous nor widespread. In response to both
the court cases and EU law, in 2014 Google created an advisory council to
take requests from citizens who wanted their search listings delisted. The
body included Google executive chairman Eric Schmidt, Google's chief legal counsel,
and Wikipedia cofounder and RTBF opponent Jimmy Wales (Sullivan
2014). Notably, analysis of the results of the advisory council showed that
“delisting is already contingent on personal data not being in the public
interest” (Powles 2015, quoting van Alsenoy and Koekkoek 2015; emphasis
added). Examples of RTBF requests Google accepted included the follow-
ing: “A victim of physical assault asked for results describing the assault
to be removed for queries against her name” and “Links to the fact that
someone was infected with HIV a decade ago.” Meanwhile, rejected requests
included these examples: “A public official asked to remove a link to a stu-
dent organization’s petition demanding his removal” and “Elected politi-
cian requesting removal of links to news articles about a political scandal
he was associated with” (examples from Powles 2015, quoting van Alsenoy
and Koekkoek 2015).
While reasonable people will disagree about specific cases, it is clear
RTBF was not being used to prevent linking to (let alone publication of )
information in the public interest. Instead, it was being used to realize
rights that are established in EU law. Rather than accepting the fact that
EU states have used their democratic powers to recognize these rights, digi-
tal advocates change the terms of argument, issue misleading statements
about the nexus of rights in the conflict, and use a variety of other dishon-
est tactics to defeat democratic oversight.
To be sure, RTBF is part of a wider rights regime, which includes a
broader “right to erasure” that targets not only search engines but also
“data collectors” such as Facebook, Google, Uber, Apple, data brokers, and
“security” companies such as Palantir. As a recent survey of these efforts
by legal scholar Jef Ausloos documents, the EU’s efforts “are emblematic of
a broader issue in today’s information society: the lack of (meaningful)
control over the collection and use of one’s personal data. This absence of
control becomes particularly evident in an online environment dominated
by a handful of private companies that find themselves at the central nerve
points through which most (personal) data flows online” (2020, 3). Despite
the fact that the extent of their practices is not well understood by the
public (Zuboff 2019), internet companies and their apparently indepen-
dent advocates consistently frame EU attempts to regulate technology—so
as to secure rights to privacy, reputation, and control of one’s own data—as
instead being about the denial of human rights.
in 2020, which took effect in 2020, EFF rarely expressed open enthusiasm.
Instead, it pointed out “room for improvement,” especially around another
cyberlibertarian frame issue, “interoperability” (Doctorow and Schmon
2020). It also expressed concern for a key cyberlibertarian antigovernment
technological value, encrypted messaging systems (see chapter 5), fearing
that legislation like DMA might have “unintended consequences, such as
creating incentives for companies to compromise on the security of users’
communications” (Stoltz, Crocker, and Schmon 2022). As always, EFF frets
about the unintended consequences of legislation and governmental action
while overlooking the same issue with regard to unregulated technological
development.
Mike Masnick and TechDirt, characteristically, went even further, helping
to seed a great deal of online commentary with talking points that echoed
and in some cases surpassed those of the Mercatus Center. They referred to
GDPR as a “ghastly, dumb, paralyzing regulation” that “poorly advances
the important policy goals purportedly prompting it” due to the “enor-
mous costs of compliance,” pointing for support to the work of Daphne
Keller (Gellis 2018). Even though "there's no quarrel that user privacy is
important" and the GDPR has a "noble mission," it is only "some-
thing to praise” in that GDPR encourages “privacy by design,” which is to
say industry and technological self-regulation. Despite the many actions
taken under GDPR, including many large fines and notices to change
practices directed at service providers large and small in the years since
it became law, in 2022 Mike Masnick doubled down on these criticisms
and virtually replicated the Federalist Society talking points from Thierer
four years earlier. “Multiple studies have shown that it hasn’t lived up to
any of its promises,” he writes, citing only one study from 2018 that focused
exclusively on GDPR’s short-term effects on venture capital investment in
technology (Jia, Jin, and Wagman 2018), while claiming that GDPR has
“actually harmed innovation.” “And don’t get me started,” he goes on, “on
how the GDPR has done massive harm to free speech and journalism.”
Focusing solely on a set of real concerns raised by European Data Protection
Supervisor Wojciech Wiewiórowski about enforcement of GDPR not being
strong enough (Manancourt 2022), Masnick concludes that the law has
“failed” and now the EU “wants to make it worse.”
Digital rights and technology advocates have consistently misrepresented
EU legislation and regulatory efforts, typically walking a fine rhetorical line:
they appear to support the goals of regulation while sowing doubts about its
actual effects and implementation. They play fast and
loose with facts and pretend to champion core human rights values while
subtly (and not so subtly) undermining them in the same breath. Even when
they endorse the legislation, they rarely create internet-wide campaigns to
promote it, unlike the campaigns they create to oppose legislation that they
find offensive to their core values, especially innovation and anti-copyright
(see chapter 5). They also rarely reflect on the nationalistic orientation of
their advocacy, which frequently supports the power of U.S.-based tech-
nology giants like Facebook, Google, and Apple, against the economic in-
terests of other countries. Cyberlibertarian dogma is characterized by such
advocates’ ability to speak out of both sides of their mouths, emphasizing
real interests while scolding democratic polities for protecting the enumer-
ated rights of their citizens.
PART II
MYTHS OF CYBERLIBERTARIANISM
CHAPTER 4
Digital Technology and the Printing Press
No sentiment is more universally relied on in cyberlibertarian
discourse than the belief that networked digital technology is
the second coming of the printing revolution. According to
this story, the printing revolution was responsible for producing democ-
racy, science, and other central virtues of modern civilization. The second
printing revolution, then, will produce these again—or perhaps more or
better versions of them, and new virtues besides. Because it will offer such
benefits, any opposition to the development of digital technology must be
rejected out of hand.
The printing revolution metaphor is central to cyberlibertarian dogma,
most frequently used as a demand and warning. We must view digital tech-
nology as having overwhelmingly positive effects on society. Computers
are as world-shattering as the printing revolution was, so we must be cau-
tious about managing technology, lest we interfere with the good that is
coming for us.
In this chapter and the next two, I refer to the rhetorical strategies and
metaphors found in cyberlibertarian dogma as “myths,” much as critical
theorist and activist Per Strömbäck does in his pamphlet 21 Digital Myths:
Reality Distortion Antidote (2016). Some of the myths discussed below are the
same as, or part of, the ones Strömbäck identifies. Myths are generally under-
stood to be false, which is why the term is appropriate. But the general
falsehood of myths does not imply there is nothing accurate about these
strategies and tropes. To the contrary, it is often the case that stories sur-
rounding comparisons, such as the printing press myth, so overwhelm the
relevant facts that it becomes difficult to reason about the topic in question.
mid-1990s (Pettit 2012, 95), claims that “the period from the late Renaissance
to the beginning of the twenty-first century will be seen as dominated and
even defined by the cultural significance of print"; that the printing
revolution "affected not merely the material appearance of information and
knowledge dissemination but also, in the process, the very nature of
cognition"; and that today, in an "analogous but inverse manner," "the book"
is being reduced to “just another option in a wealth of different media modes
and permutations.” The printing revolution is thus framed as “opening” a
historical epoch that is now “closing”: “The opening of the Gutenberg
Parenthesis meant the closing of privileged production and consumption
of textually communicated knowledge, statement and information, the clos-
ing of the Gutenberg Parenthesis symmetrically implies the opening up to
a completely new and so far only partially glimpsed—let alone understood—
cognitive situation” (Sauerberg 2009, 2).
Although Sauerberg is a literature professor, he focuses his remarks on
canned and epochal generalizations rather than more sober and scholarly
descriptions of the phenomena in question. Among the most glaring of these
are his assertions about “the book” being the primary and privileged mode
of information sharing due to the printing press. However, most scholar-
ship, including Eisenstein's, notes that non-book printed materials were at
least as influential as books, if not more so, during much of post-Gutenberg
history. Additionally, there is a scarcely noticed elision of all
other forms of media, so as to claim that the rise of the internet-connected
computer marks a particularly significant change in the media environment
of most ordinary people. Rather, it is one in an extended sequence of changes
that extends as far back in history as one looks. It is hard not to notice that
the “closing” of the Gutenberg Parenthesis seems to imply that some, per-
haps most, of the benefits of the printing revolution will end as well.
Business leaders, too, are among those who repeatedly draw attention
to what was until recently an esoteric academic subject. Venture capitalist
Jennifer Carolan, offering a typical sales pitch for virtual reality, relies on a
chance remark by cognitive psychologist Steven Pinker to assert that “the
first major empathy technology was Guttenberg’s printing press, invented in
1440. With the mass production of books came widespread literacy and the
ability to inhabit the minds of others” (Carolan 2018). From this it follows
that the technologies in which she invests can also increase “empathy.”
Mike Masnick of TechDirt consistently brings up all the cyberlibertarian
Catholic Church. Protestant congregants “truly had ‘skin in the game’ and
were participants in their faith because of the Reformation, rather than
subjects to the rulers of their faith” (Pattillo 2021), describing Protestant
denominations very different from those we see in today’s world. Another
doubles down on revisionism, making the economic base of the analysis
more explicit: what “Johannes Gutenberg and Satoshi Nakamoto gave us
were new methods to permissionlessly transport information across time
and space, outside the purview of the establishment” (Anil 2022).
Finally, the long-standing, Koch-funded libertarian think tank Founda-
tion for Economic Education embeds its encomium to Satoshi and Bitcoin
in a long-form discussion based in the work of Murray Rothbard and Friedrich Hayek.
Commentator Wendy McElroy praises Bitcoin for being “profoundly rev-
olutionary” because it “does not directly confront governments or corrupt
institutions; it sidesteps and obsoletes them” (2017). She writes that “few
things short of Gutenberg’s printing press have offered such freedom and
opportunity to the individual,” before invoking Voltaire to insist that “finan-
cial self-interest” is the core of an “extreme religious toleration” with which
democratic governance is inherently at odds.
As in so much cyberlibertarian discourse, scholarly research is repurposed
and reanalyzed so that the scholars who did the research become disqualified
as experts in their own areas of specialization, in favor of industry apolo-
gists with commercial interests in the specific interpretation of the scholar-
ship that benefits them. Facts and expertise are trusted only insofar as they
support the pro-technology agenda on offer. This simultaneous reliance on
and disparagement of knowledge and inquiry should be a serious cause for
concern for anyone thinking about the fate of inquiry in the digital age.
THE VARIETIES OF MEDIA REVOLUTION
media regulation worked well, and that deregulation weakened the ability to
serve democracy. Deregulation was undertaken as an industry-based effort,
often funded by dark money, to turn media toward the interests of wealth
and power. This leaves the printing press as the one historical event far enough
back in time that it can be used to justify the story that deregulation is the most
democratic approach to media change. However, few people are familiar enough
with that history to see that the story is false even in that case.
The advent of computing is different from more recent media revolu-
tions in that some scholars have described it as epochal. This means it is a
change that causes a rupture or break in human history of a dramatic sort,
so that it makes sense to talk of history before and after the change. The
best-known of these figures is Marshall McLuhan, a wide-ranging synthetic
thinker whose popular persona exceeds his scholarly reputation, especially
in recent years. McLuhan developed many concepts that remain touch-
stones for digital culture, such as the “global village” and the idea that “the
medium is the message.” His reputation in academia has suffered in no small
part because of his propensity for hyperbolic statements. In Understanding
Media ([1964] 1994), the book of his that has maintained the strongest
scholarly reputation in recent years, epochal changes proliferate. He cata-
logs inventions as various as the electric lightbulb, roads, clocks, bicycles,
housing, clothing, and comics as forms of media, and in each case ascribes
world-changing effects to their development. It would be hard to argue
against the general proposition that such developments change us. It is
easier to take issue with McLuhan’s relentless pronouncements of dramatic
historical ruptures to accompany not merely each invention, but the vari-
ety of changes that elements like clothing, housing, and roads have under-
gone. It becomes hard to avoid the sense that dramatic change is a part of
life, and that by emphasizing profound difference McLuhan significantly
downplays the remarkable continuities in human experience.
The book of McLuhan’s that directly preceded Understanding Media
was called The Gutenberg Galaxy: The Making of Typographic Man (1962).
Like Understanding Media, it describes ruptures and epochs wherever it
looks. Unlike the later book, it confines its focus to two historical breaks
caused by a relatively limited number of technological changes. The first is
the "abstracting or opening of closed societies" that resulted from the
development of "the phonetic alphabet, and not of any other form of writing or
technology” (8). But as the title suggests, McLuhan’s main interest is in the
The speed-up of the electronic age is as disrupting for literate, lineal, and
Western man as the Roman paper routes were for tribal villagers. Our speed-
up today is not a slow explosion outward from center to margins but an
instant implosion and an interfusion of space and functions. Our specialist
and fragmented civilization of center–margin structure is suddenly experi-
encing an instantaneous reassembling of all its mechanized bits into an
organic whole. This is the new world of the global village. The village, as
Mumford explains in The City in History, had achieved a social and institu-
tional extension of all human faculties. Speed-up and city aggregates only
served to separate these from one another in more specialist forms. The
electronic age cannot sustain the very low gear of a center–margin structure
such as we associate with the past two thousand years of the Western world.
Nor is this a question of values. If we understood our older media, such
as roads and the written word, and if we valued their human effects suffi-
ciently, we could reduce or even eliminate the electronic factor from our
lives. (92–93)
Even on its own terms, none of this sounds like a recipe for global har-
mony or what most of us understand as peaceful democratic rule.
McLuhan remains critical to those who proclaim the beneficence of the
digital revolution because he is the closest thing to a serious thinker who sees
Eisenstein’s clear critique of McLuhan and praise for the historical scholar-
ship Dane also recommends (Eisenstein 2005, 316).
Two of the major works of print culture studies to have emerged in the wake
of Eisenstein’s book—Michael Warner’s The Letters of the Republic (1990)
and Adrian Johns’s The Nature of the Book (2000)—more directly accuse
Eisenstein of being a technological determinist in the mold of McLuhan.
In both cases, Eisenstein had detailed exchanges with her critics regarding
these allegations, in which she reiterated her rejection of technological de-
terminism (Baron, Lundquist, and Shevlin 2007). Indeed, toward the very
beginning of The Printing Press as an Agent of Change she stresses the use of
the indefinite article in the book’s title: “As an agent of change, printing
altered methods of data collection, storage and retrieval systems and com-
munications networks used by learned communities throughout Europe.
It warrants special attention because it had special effects. In this book I
am trying to describe these effects and to suggest how they may be related to
other concurrent developments. The notion that these other developments
could ever be reduced to nothing but a communications shift strikes me as
absurd” (1979, xvi). The critical fact that emerges from these exchanges for
the study of digital media is that none of these scholars accepts the notion
that technological change leads to specific, identifiable social changes—
even in retrospect. Thus the desire of evangelists like Jarvis and Shirky to
turn Eisenstein's work so that it offers not just predictions but descriptions
of what will inevitably happen thanks to digital technology beggars
the imagination.
Jarvis articulates these themes exclusively through reference to Elizabeth
Eisenstein’s work. In Public Parts (2011d), he repeatedly references the print-
ing revolution, Johannes Gutenberg’s invention of the printing press, and
Eisenstein’s work as the principal underpinnings for his claims about the
beneficial and transformational nature of the digital revolution. The new
form of publicness entailed by digital technology “is at the heart of a reorder-
ing of society and the economy that I believe will prove to be as profound
as the one brought on by Johannes Gutenberg and his press”; “Gutenberg
empowered Martin Luther to smash society apart into atoms, until those
elements re-formed into new societies defined by new religions and shifting
political boundaries” (10); “today, with our new tools for making publics,
the people can again become proprietors of media (see: blogs) and the
public sphere (see: revolutions in the Middle East). We all have our Gutenberg
presses and the privileges they accord" (79). The significant ambiguities in
this last sentence seem strategic.
The reference to the so-called Twitter revolutions suggests that the second
Gutenberg age is celebrated specifically for its political and social effects,
which are understood as directly parallel with effects of the first printing
revolution. Thanks to Gutenberg, “books made it possible to gather, com-
pare, analyze, and spread new information”; this “made the scientific revolu-
tion possible” (83, citing Dewar and Ang 2007); printed books “disturbed
the elite” (83); they “enabled Thomas Jefferson to collect all the laws of
Virginia”; books “took knowledge that had been diffuse and easily lost in a
few handmade copies and made it permanent, consistent, and accessible to
many” (84).
Claims like these have been widely promulgated since the early days of
computing and especially the internet. In a famous 1998 report, RAND
Corporation prognosticator James Dewar wrote, “Changes in the informa-
tion age will be as dramatic as those in the Middle Ages in Europe. The
printing press has been implicated in the Reformation, the Renaissance and
the Scientific Revolution, all of which had profound effects on their eras;
similarly profound changes may already be underway in the information
age” (2). There are at least three problems with assessments like these that
deserve attention. First, they demonstrably misrepresent most scholarship
on the printing revolution and its consequences. Second, they make unjusti-
fied assumptions about the societies into which digital technology is being
introduced. Finally, they make inferences about alterations to our own global
cultures that pretend both that the past has happened (e.g., since the print-
ing revolution “created democracy”) and that it hasn’t (since the digital revo-
lution will “create democracy,” even though we already have democracy).
In turn, the supposed parallel between the printing press and the inter-
net is used to defuse any criticisms of digital technology by those who do
not buy into cyberlibertarian narratives. In keeping with cyberlibertarian
denialism, Jarvis talks about the printing revolution in almost exclusively
positive terms, even though much of print culture scholarship rejects, or at
least resists or complicates, this triumphalist narrative. Those who raised real
questions about the social and political consequences of the printing revolution,
most famously McLuhan's student Walter Ong (1982) and his teacher Harold
Innis (1950, 1951), are not mentioned by Jarvis or other digital utopians.
THE WORLD BEFORE PRINT AND THE WORLD BEFORE DIGITAL TECHNOLOGY
Perhaps the most serious problem with the analogy between the printing
revolution and the development of digital technology is that it requires an
inaccurate parallel between European societies prior to the advent of the
printing press and world culture prior to the advent of the computer. What-
ever they were or were not, the enormous changes wrought by the printing
press took place in part because printing and all other mass media and media
reproduction and distribution technologies did not exist yet. Prior to the
printing press, books were rare and available to a select few. Moreover, their
form and content differed from what we have come to think of as books
(Diringer 1982). The Christian Bible existed, but mainly in Latin, and only
priests and some aristocrats had copies. Scientific and philosophical texts
existed in extremely small numbers, but tended not to have tables, charts,
maps, and graphic images that were identical from copy to copy, as the print-
ing revolution allowed. According to advocates of the printing revolution
as a model for digital technology, the ability of masses of people to access
knowledge that was previously unavailable to them led to a series of explo-
sions in various forms of knowledge and the practices used to produce them.
It is hard to see how we might draw any useful parallel between Europe
in the 1400s, 1500s, or even 1600s and the twentieth century. Despite the
cant of internet promoters, it is hard to look at our world even in the 1950s
and declare credibly that it was suffering from a lack of information. The
availability of knowledge in 1950 or 1990 (depending on where one declares
that the information age begins) is nothing like what it was in Europe in the
1500s. The availability of printed matter, radio, television, films, telephones,
and other technologies has dramatically changed the world since the early
twentieth century. However, the impact of these changes, at least in met-
ropolitan centers, cannot compare to the shattering effect of the printing
revolution, which occurred in a world where almost nobody, even the small
number of wealthy and powerful people who would be able to read books,
had access to media beyond the spoken word. As Eisenstein writes, “Sev-
eral studies . . . have illuminated the difference between mentalities shaped
by reliance on the spoken as opposed to the written word. The gulf that
separates our experience from that of literate elites who relied exclusively
on hand-copied texts is much more difficult to fathom. There is nothing
analogous in our experience or that of any living creature within the Western
world at present” (1979, 9). Technology promoters seem to have forgotten
that almost all the benefits they claim for the printing press depended on
its introduction into a world that did not have anything like it. Less obvi-
ously, they implicitly compare the world before the printing press to our
own time, which removes the printing press from its historical and geo-
graphical context.
Most scholars attribute the rise of the printing press to Martin Luther’s
use of movable type during the Protestant Reformation. This historical
development could only have happened because the Catholic Church uni-
laterally controlled access to information for many of its congregants.
More generally, it depended on the relative unavailability of reading mate-
rial to the ordinary person prior to the printing revolution, which was due
to a complex and interlocking set of rules, technologies, and social prac-
tices. The critical fracture between Catholicism and Protestantism in this
regard is said to be that Protestants, including Luther, wanted everyone to
have access to the text of the Bible. To democratize belief and knowledge,
they used the printing press and translations of the Bible from Latin into
the languages spoken by Europeans in the sixteenth century.
Those who advocate the printing press metaphor as a rule never explain
what it is about our day that resembles that of Europe—let alone the rest
of the world—prior to Gutenberg, so that we can understand just what
dramatic changes we should expect thanks to digitization. This argument,
at its crudest, is based on a crypto-libertarian conspiracy theory. It implies
that our polities are equivalent to monarchical regimes ruled over by a
unitary Catholic Church, and that we have yet to experience democratic
governance or have access to information. Although these counterfactual
assumptions are often only implied by arguments rather than stated explic-
itly, they still influence a great deal of digital advocacy. This advocacy, in
turn licenses the destructive animus that is so pervasive in cyberlibertarian
discourse. This discourse aligns uncomfortably with many aspects of extreme
capitalist and even fascistic politics. After all, it is no exaggeration to say
that the most vocal proponents of the printing–internet parallel argue for
the elimination of many of the most salient features of the printing revolu-
tion even as they try to convince us of print’s unalloyed beneficial effects.
a much more complex picture. That the wide range of historical changes
sometimes attributed to the printing revolution “could ever be reduced to
nothing but a communications shift,” she writes, “strikes me as absurd”
(1979, xvi; emphasis in original). Many promoters of digital technology
make exactly the kind of “absurd” mistake Eisenstein cautions against, and
this despite not having anything like the historical knowledge she did on
which to ground an analysis. Jarvis insists that even before social and cultural
developments parallel to those caused by the printing press occur, we know
that they will. We also know that these changes will happen very soon, even
though the changes we point to unfolded over many hundreds of
years. Worse still, we know that the changes will be largely or even over-
whelmingly positive, despite having no agreement about what positive
change might be. This knowledge of the generally positive nature of change
permits us to ignore or dismiss evidence of negative changes occurring.
While the printing revolution is now understood and praised specifically
because of the roles print played in changes of world-historical proportions,
we need not—must not—wait for such events to occur when understand-
ing and regulating digital technology.
Eisenstein was initially inspired by McLuhan. However, she eventually
criticized him for presenting arguments similar to those offered by today’s
cyberlibertarians: “Developments that have been unfolding over the course
of five hundred years, affecting different regions and penetrating to
different social strata at different intervals, are randomly intermingled and
treated as a single event” (1979, 40). To the contrary, Eisenstein takes it as
proven that the “Italian cultural revival” that we now call the Renaissance
was “already under way” when the printing press was introduced (170);
“not at all did printing make the Petrarchan revival [part of the Northern
Renaissance] ‘possible’; Gutenberg came too late for that” (178); and a
more reasonable thesis is that the “preservative powers” of print may be
“related to the emergence of a so-called permanent Renaissance” (181), that
is, a series of cultural changes that were less transitory than other “medieval
classical revivals” (182) in Europe, of which the “Italian Renaissance” is one
among many. That is, arriving at the tail end of one revival (itself caused
by a host of factors, most of them solidly cultural and economic; 173–74),
printing helped make that one more lasting. But note how remote that is
from the claims of Jarvis (and others) about digital media producing a
“new Renaissance.”
The only change left standing is the Protestant Reformation, itself a com-
plex set of historical events (Cameron 2012). It is routinely portrayed as an
unambiguous good. One particular aspect of the Reformation is typically
highlighted for praise: the idea that the Catholic Church maintained a
“monopoly on knowledge” through the Latin liturgy and its tight control
over the availability of books, especially the Bible, well past the period when
Latin was widely spoken in Europe; and that Martin Luther disrupted this
monopoly by using Gutenberg’s press to distribute copies of the Bible and
other religious texts translated into vernacular languages. This in turn licenses
a metaphor that may be too seductive in its reach: those who publish or
produce information are cast as monopolists like the Catholic Church,
and everyone who uses disruptive information technology is cast on the
side of “democratization of knowledge” and “anti-monopoly.” A little of this
story goes a long way. It may be true in broad outlines, but the details make
the metaphorical lessons much harder to parse.
It is certainly not the case, for example, that the Reformation was a global
change that everyone agrees, in retrospect, was welcome. The Catholic
Church did not go away. In many countries it persists quite strongly to the
present day, and the vast numbers of Catholics then and now may well not
see the Reformation as a positive event at all. Perhaps more pointedly, the
Catholic Church is often viewed as a deeply problematic institution today
due to its role in promoting colonialism and violence, suppressing dissent,
and demanding conformity.
It is truly difficult to look at the history of the various sects of Protes-
tantism and see them as notably different in this regard. Colonial history
is almost equally divided between Protestants and Catholics, with many of
the bloodiest wars fought between powers whose rivalry was fueled in part
by this religious clash. The later history of these two branches of Christian-
ity is not much different, at least in terms of ascribing to one or the other
tendencies toward the worst aspects of human conduct. Both Catholics and
Protestants have colonized and enslaved; both have also protested and re-
sisted each practice. Antidemocratic politics, including Fascism and Nazism,
are found among congregants in both branches. Martin Luther himself
was a notorious anti-Semite and used Gutenberg’s press to distribute a
wide variety of hateful material about Jews (see “Martin Luther and Anti-
semitism”). This pattern might be taken more seriously by those who rec-
ommend the print–digital parallel—Protestant attitudes toward Catholics
the digital age from a revolutionary technology he in the same breath tells us
is outmoded, old-fashioned, and on its last legs, despite the fact that it con-
tinues to do what it has long done. The internet revolution is the antithesis
of the printing revolution but also its apotheosis: “Gutenberg empowered
Martin Luther to smash society apart into atoms, until those elements re-
formed into new societies defined by new religions and shifting political
boundaries. With the Industrial Revolution—of which Gutenberg himself
was a first faint but volatile spark—the atoms flew apart again and re-formed
once more, now in cities, trades, economies, and nations” (2011b, 10).
This is wildly different from the way scholars of print culture, including
Eisenstein, frame the benefits of the printing revolution. In fact it hardly
praises the printing revolution at all, but rather deflects the advent of print
into the kind of “creative destruction” described by Joseph Schumpeter and
the “disruptive innovation” championed by Clayton Christensen. What is
actually being lauded is not anything about human life benefiting from
technological change, but rather the ability of waves of technological changes
to generate profits for their inventors. Typically, this is done by putting
other companies out of business. In Christensen’s most classic cases, the
disruptor can undercut the prices of existing businesses without necessarily
providing equal, let alone better, products and services. Print is not cele-
brated for “smashing apart” what preceded it; the digital revolution now
functions not as an intensified version of the book, but instead as an atom
smasher, tearing apart the social order just because it can.
The internet is frequently celebrated as the second coming of print, even
as the same speakers dismiss the very qualities that make print so cele-
brated by its true admirers. Jarvis repeats the claim that "print is dead," a
phrase widely used today. It is hard not to notice the stark incompatibility of
this phrase with the notion that the internet is terrific because it is like
print. It is not just “print” in the abstract that cyberlibertarians like Jarvis
proclaim as now passé. Books are too long (Shirky 2008b wrongly claims
that “no one reads War and Peace. It’s too long, and not so interesting”; see
also the Juskalian 2008 interview with Shirky), reading and writing in long
form are burdensome, and publishers and libraries exist to deny access to
information.
Jarvis offers an extremely thin explanation for why he has chosen to
write, publish, and sell books if all the infrastructure of print and publish-
ing, which themselves incorporate huge amounts of digital technology, is
bit of the “deep thinking” contained in the book should have already been
made much more widely available than a book ever could. Why did Simon
& Schuster waste so much money on publicizing a useless product, and
pay Jarvis himself to write it? Why did Jarvis write and publish it if books
are useless?
This is not mere hairsplitting. Jarvis relies on Eisenstein more than any
other thinker to describe and define the printing revolution, whose bene-
fits are so notable that we should welcome a second transformation that
somehow intensifies and makes more available those advances. Yet Eisen-
stein is careful in detailing what those benefits were, and many of them are
incompatible with, or outright dismantled by, the digital revolution. One
of her chief breakthroughs was enumerating and examining these changes
at a specific level. It is curious that those who champion computers and the
internet as the heir to print rarely consider Eisenstein’s central achievement
and her principal thesis about the printing revolution.
Some of the changes she attributes to the printing press certainly seem
to be magnified or intensified by aspects of the digital environment. Since
the cultural conditions necessary for large-scale developments like the Re-
naissance, the Reformation, the scientific revolution, democratic revolutions,
and the Enlightenment are complex, ascribing them to discrete technological
factors is difficult even for Eisenstein. Thus, specific qualitative shifts en-
gendered by the press may be more relevant than inevitable historical trans-
formations when assessing the potential effects of digital media. The effects
she describes as most directly attributable to the printing press include
“wide dissemination: increased output and altered intake” (1979, 74); stan-
dardization, the development and reinforcement of more-or-less identical
typefaces, copies of books, images of various kinds, spelling, and many other
cultural phenomena, including entire languages (80); the “rationalizing,
codifying, and cataloging” of data in ways that had not before been possi-
ble (88); a related “new process of data collection” (107) thanks to which
the sequence of “corrupted copies [characteristic of scribal book culture]
was replaced by a sequence of improved editions” (109); the “preservative
power of print,” which Eisenstein declares is possibly its “most important”
feature (113); the “amplification and reinforcement” of “stereotypes and
sociolinguistic divisions,” a consequence in part of standardization, and in
general a more negative than positive effect (126); largely operational and
“unevenly phased” shifts, with “social and psychological” consequences,
In the wake of the Snowden revelations, free software advocate Eben Moglen
developed an especially elaborate and symptomatic version of the printing
press myth:
For the last half-thousand years, ever since there has been a press, the press
has had a tendency to marry itself to power, willingly or otherwise. The
existence of the printing press in Western Europe destroyed the unity of
Christendom, in the intellectual, political and moral revolution we call the
Reformation. But the European states learned as the primary lesson of the
Reformation the necessity of censorship: power controlled the press almost
everywhere for hundreds of years.
In the few places where the European press was not so controlled, it fueled
the intellectual, political and moral revolution we call the Enlightenment
including the very multinationals that exert most dominance over the
internet itself, are not only not going away, but are in fact strengthened by
the proliferation of digital technology, how can we imagine that the “more
democratic” internet will reverse this trend?
Moglen’s sweeping judgments are incoherent, and his specifics are often
demonstrably at odds with the facts. Indeed, they often point in the oppo-
site direction from the one he appears to intend. Moglen tells us that the
printing press "destroyed the unity of Christendom." It is unclear what
political orientation one must adopt to see the replacement or supplementation
of the Catholic Church with a variety of Christian sects, especially as time
went on, as a metaphorically meaningful cultural advance for our own age.
Moglen says that "in the few places where the European
press was not so controlled . . . it fueled the Enlightenment and the French
Revolution.” This is truly a bizarre rereading of history.
Many of the authors whose works we typically consider part of the Enlightenment emerged from England, the specific target of the American Revolution, yet they were not suppressed there by censorship (which was a common practice in that country at the time). If it is merely
the availability of printed material that fuels democratic revolutions, why
is it that Locke’s writings were unable to generate the same revolution in
England they did in the Americas, despite being widely distributed in both
places? France, which Moglen mentions specifically, remained a Catholic
country up to and through the French Revolution. It was somehow vul-
nerable to political enlightenment but not religious reformation. France
continued to enforce as much if not more state censorship than did other
European countries. Press freedom was included in the Declaration of the Rights of Man in 1789, but that proclamation was issued only after the
French Revolution had already begun. The press in France remained cen-
sored until well into the nineteenth century. Sweden was the first European
nation to declare and enact anti-censorship principles in 1766. Although
Sweden remains a monarchy, it experienced a series of relatively peaceful
reforms that led to a constitutional–parliamentary democratic system that
is very similar to the UK system. The monarch became largely a figurehead
in Sweden in 1974.
Moglen omits Gutenberg’s home country, Germany, which had an explo-
sive press from the seventeenth century onward. Germany, like Sweden,
experienced the most obvious effects of the Reformation firsthand, but did
apparent irony) notes that free software “runs Google and Facebook” and
“performs flawlessly every day in every bank, insurance company, engi-
neering firm, supermarket and pretty much everywhere else,” while noting
that “every government and every deep pit of misbegotten wealth on earth
is trying to centralize and thus control the net that is humanity’s future”
(2011). Free software is designed to offer no impediment to commercial
usage of its products and in general gives away both the products and the
labor associated with them to corporations. Companies, including the very
“centralized services for social networking and publication,” that “are oper-
ated by businesses that can be coerced, bought or bribed” are given access
to free software. Why should we expect that free software will at some point
in the future lead away from commercial development or otherwise desta-
bilize concentrated private capital? There is already a great deal of evidence
that this does not work. Capital and concentrated business power are already
using free and open-source software maximally in their for-profit environ-
ments. If it is this bad after only a few decades, what is going to cause a
turnabout, producing the logically and empirically improbable outcome
in which the widespread availability of software anyone can use and inspect
is going to provide resistance to corporate power?
Relying on the same fallacious equation of software with speech that
informs a great deal of the rhetoric of digital enthusiasts (see chapter 6),
Moglen paradoxically tells us that the promise of the printing press can be
realized “by dissolving the press” itself. Dismissing as unimportant the many
parts of the printing apparatus that have to do with promotion, distribu-
tion, and editorial interaction, Moglen repeats the canard familiar from
Shirky and Jarvis that having the “power” to post items to a blog is equiva-
lent to having a major publisher issue a book. He also suggests that major
publishers themselves are equivalent to government censorship, which would
come as a surprise to authors as prominent as Toni Morrison or Stephen
King. Intent on promoting the wonders of the “digital” and particularly its
(apparent) isolation from corporate influence, Moglen offers pronounce-
ments contrary to the facts he cites in the same essay: “Tiny computer sys-
tems will be scattered around every home and office in almost all societies,
ubiquitous as mobile phones, and much more powerful. With free soft-
ware inside, they can become ‘FreedomBoxes’: devices that assure each
individual user, each still-human being, of the right to communicate safely,
freely, without monitoring or control” (2011). Bizarrely, Moglen seems to
forget that he just told us that the manufacturers of the current commercial
versions of these devices rely heavily on free software already. They have
“free software inside,” and from a certain political–economic perspective
they are already “FreedomBoxes,” but that is only if one discounts the very
practices Moglen critiques.
Moglen accurately assesses corporate power in the digital age. However,
he offers only one solution to connect his political goals with the current
policy and legal architecture, and that is a familiar one: government should
not regulate hardware, software, and computer networks. This is coupled with
the now-familiar hatred of the U.S. government couched in the language
of surveillance, the NSA, and Edward Snowden, in which the level of emo-
tion far overwhelms the provision of facts. In more recent work, Moglen’s
antipathy for the U.S. government has only become more pronounced. For
instance, in a 2013 lecture series he frequently refers to the current system
as “totalitarian” while offering no explanations for how the “Freedom-
Boxes” he recommends can exist outside the government’s capability to
enforce the law by surveilling them. Nor does he explain how the freedom
he invokes so often is to be realized when he promotes a thorough distrust
of what the government says it is doing and what the evidence indicates.
Moglen uses the print–digital parallel to advance a revisionist, contradictory, and self-serving interpretation of history. He turns this interpretation
toward the promotion of his own vision of technological change, via his
ideal solution of free software. This promotion entails the widespread dis-
tribution of ever more digital devices, even if that “free software” is inherently open to inspection and therefore to human redirection and misdirection. His own solution thus results in the very problem he claims to solve.
Despite the anti-corporate nature of much of Moglen’s rhetoric, he ends up
holding a position almost identical to that of RAND analyst James Dewar:
“The above factors combine to argue for: a) keeping the Internet unregu-
lated, and b) taking a much more experimental approach to information
policy. Societies who regulated the printing press suffered and continue to
suffer today in comparison with those who didn’t. With the future to be
dominated by unintended consequences and a long time in emerging, a
more experimental approach to policy change (with special attention to un-
intended consequences) is soundest” (1998, 3). Do not regulate. The democratic polity should act only through market forces, and market forces are
the only legitimate means of democratic governance. Do not consider digi-
tal tools and devices as even potentially harmful. Do not consider surveil-
lance an inherent feature of the network, but instead encourage the network
Why worry about the parallel with the printing press? Is it not true that, even
if it does not exactly parallel the printing press, and even if it eliminates
some of the specific benefits the printing press is supposed to have pro-
duced, the “internet” is a revolution in human communication? Which
means it is a revolution in how our most important political and social
institutions function, and for this reason it is unobjectionable to celebrate
it, right?
While there is some truth to these sentiments, they are deeply misguided
and misleading. The most strident claims made about the printing press–
internet analogy insist that our most fundamental practices and institutions
“are being radically transformed.” These claims, like those made for many commercially backed technological transformations since the nineteenth century, emerge prior to significant evidence of their veracity. In a fundamental way, they
put the cart before the horse. Rather than having the Protestant Reforma-
tion occur and then looking back and seeing the role of the printing press
in that event, it’s as if we are staring at the printing press itself and declar-
ing “The Reformation is coming any day now, and it will definitely be a
good thing.”
The view that our global society is in a parallel position to that of Europe
prior to the printing press is not merely contentious but much more likely
wrong than right. Attending to the rhetoric of the parallel and the specific
arguments it licenses is important, as it allows us to be more specific about why we believe that our world’s communication media are insufficient and no longer capable of supporting democracy, among other things. Espe-
cially if we believe that print “produced” democracy in some sense, what
happened to reverse that, and why does the advent of the internet fix what-
ever “broke” print in the first place? What is the character of democratic
cultures worldwide—but especially in those regions where print is already
widespread—such that they no longer experience the benefits of print cul-
ture, or that digital culture will reintroduce these benefits? Why should we
accept this strange “second coming” of the printing press when it often
overtly expresses a desire to destroy many of the features of print that are
deemed politically necessary?
Related to, but conceptually distinct from this concern, is the deep ques-
tion of whether the “printing revolution” should be viewed as a technological
or social change. Of course it is both, but almost all advocates of the print–
digital analogy take it for granted that the most important effects of the
transition to print were technological, thus aligning these interests with the
general technological determinism that informs cyberlibertarian dogma. It
the mid-1600s. Was this the effect of printing, or was it not, since the colo-
nies eventually rebelled against British rule as well? Is that because print
was less effective, less meaningful, and less well distributed in Britain than
in the colonies? Or was there something other than print technology at
work, or at least also at work, in what we now see as one of history’s most
profound instances of democratization? For example, was it in part the re-
moteness of America from the levers of political control available in Great
Britain itself, a kind of remoteness that the “internet” has done a great deal
to eradicate?
Warner notes that despite Eisenstein’s frequent nods toward the multiple causes of major historical events, even for her “politics and human
agency disappear . . . whether the agency be individual or collective, and
culture receives an impact generated outside itself ” (1990, 6). Such a view
“must suppose, therefore, that a technology could come about, already
equipped with its ‘logic,’ before it impinged on human consciousness and
before it became a symbolic action” (6–7). As Warner demonstrates, even
changes that we now understand as technological can be seen as shaped
by inarguably social factors. For example, “although printing was initially
another way of reproducing in quantity books that were already being
reproduced in quantity, at a certain point printing came to be specially
defined as publication, now in opposition to manuscript circulation” (8).
This was less of a technological change and more of a social one. The same
physical objects were now viewed in a different light. This is why the
worldwide distribution of printing does not produce uniform effects, why
“among the Chinese or the Uighur Turks, printing took on different defin-
ing features and had different ‘consequences,’” a fact that militates against
the view of print-revolution historians that “the technology of printing,
once ‘discovered,’ yielded the result of standardized mass production, with
its cultural symptoms” (9). Scholars have long understood that even what
we refer to as “printing” is a collection of disparate technologies and social
forms. The emergence of these technologies in world culture is far more
diffuse and complicated than the Gutenberg story suggests (see esp. Innis
1950, 1951).
Thus the “printing” that played an important role in the development
of the American republic was a social production, resulting from particular
thoughts and cultural attitudes about printing, society, politics, and forms
of government that were both independent of and intertwined with the
Cyberlibertarianism manifests as rhetorical tropes and strate-
gies that shape discussions of digital technology, government,
politics, and social issues. It is also a form of advocacy for a set
of issues. These issues are both less and more than what they seem. They
are less because they often distort the political contexts in which they
are said to be critical. They are more because the political and social work
they do is often obscured by digital advocacy itself. These issues are best
understood in the way “greenwashing” and “astroturfing” are understood:
as carefully crafted strategies that do a great deal of political work for their
advocates, but appear to do something very different to those not in the
know. Greenwashing in the case of plastics recycling, for example, advances
the power of the fossil fuels industry and its associates, while giving the
impression of advocating for environmental protections.
Clear threads emerge when the issues are grouped together. The most
obvious thread is that advocates take for granted the beneficial and pro-
democracy nature of digital technology, as seen in the promotion of Section
230, and work to advance its power. This is primarily in the form of corpo-
rate power but also, in some disturbing cases, outside of corporations, at least
on the surface and for the moment. This embrace of technological power
is underpinned by a discourse about democracy and democratic gover-
nance that is less obvious, but according to which, essentially, democracy
is impossible. The discourse suggests that all government is authoritarian
and all nongovernmental power is more or less welcome. The only excep-
tion is the core neoliberal use of governmental power to write preemptive
law, which appears to restrict governmental power to regulate technology
The terms “open” and “free” are primary marketing labels for cyberliber-
tarianism as a movement. Many in the public take these terms as being
directly connected to core political notions of freedom and democracy,
often almost completely uncritically. Once one side of a debate or a par-
ticular way of looking at an issue has been labeled “open” it becomes almost
impossible to have reasoned debate about the issue at hand. The over-
whelming assumption is that anyone arguing against “open” is therefore
arguing in favor of “closed.” Among all the terms used in cyberlibertarian
discourse, these two are among the most likely to be subject to the con-
stant shifting of meaning described in chapter 2 as a key method used in
strategic digital denial.
Of course these rhetorical moves have long been noted by critics. “Open”
does not have a firm political definition, and the relationship of its rela-
tively clearer technical applications to politics is never obvious. Like many
other cyberlibertarian keywords, it often comes down to nothing more
than “absence of government.” As paradoxical as it may seem, the absence
of democratic oversight is often used as a synonym for “open” (or “democ-
racy” for that matter). Evgeny Morozov has been particularly clear about
The Open Government and Open Data movements make it clear that
they entail political values, specifically that governmental operations should
be transparent or open to inspection by the public. However, they also
almost universally entail the value that governmental resources should be
made available for free, without any sort of remuneration. Often, these
resources are used by private corporations who neither return profit to the
source of the data nor make their own operations open or transparent in
the sense required of governmental programs. The matter is further com-
plicated by the fact that “open source” existed prior to its use in the digital
milieu. It meant information that is not proprietary to military or intelligence services. Typically, it was used within that context to distinguish
information published in public sources (such as newspapers and encyclo-
pedias) from that collected by intelligence agents. Thus some of the most
vociferous advocates of “open source” include writers like Robert David
Steele, whose Open-Source Everything Manifesto takes it as given that “open”
used in each of these instances is the same, so that “Open-Source Everything
is our path to peace, power, and prosperity . . . [it] reconnects us all to the
root power of the cosmic universe, restores our harmony, and unleashes our
inherent gifts of innovation, entrepreneurship, and generosity” (xix).
With his breathless claims of revolution and social transformation and
his exhortations to open source “everything,” Steele’s animus is reserved for
government and his positive doctrine is oriented toward efficient and un-
impeded flows of capital. Like so many others chronicled here, Steele begins
his analysis by invoking the usual cyberlibertarian suspects from both the
moderate political left and political right, and their promotion of what he
calls “collective intelligence”: Alvin Toffler, Matt Drudge (who is praised
for breaking the Monica Lewinsky story “when corporate media was hold-
ing back because of its incestuous relationship with the two-party tyranny
that strives to control what We the People can learn”; xiv), Glenn Reynolds,
Howard Rheingold, Pierre Levy, Lawrence Lessig, James Surowiecki, and
Clay Shirky. Steele sees these writers as all expressing the same point of
view, according to which “top–down governance has become corrupt and
does not work . . . [and] the nanny state attempts to micro-manage what it
does not understand” (xv). As Morozov has written, “‘Open government’—
a term once reserved for discussing accountability—today is used mostly
to describe how easy it is to access, manipulate and ‘remix’ chunks of gov-
ernment information. ‘Openness’ here doesn’t measure whether such data
increase accountability, only how many apps can be built on top of it, even
if those apps pursue trivial goals” (2013c). Even more pointedly, in the same
piece he rightly notes that “a victory for ‘openness’ might also signify defeat
for democratic politics, ambitious policy reform and much else.” We must
add: when “open” defeats democratic politics, we are told that this defeat is itself a victory for democracy.
Part of my brief here is to show how often this cluster of ideas is taken up
by figures on the right, even if that is not what their authors intend. While
authors are not liable for controlling what others do with their ideas, they
do have a responsibility to note how their ideas are being received, espe-
cially in the broad strokes. The few left-leaning writers mentioned above
seem to be either unaware or uninterested in the degree to which their
work is continually referred to as authoritative by those whose politics they
disagree with. It is possible to make one’s political views clear and demon-
strate that one’s analysis should not lead to results that are strongly favored
by one’s political opponents, especially when those opponents are as viru-
lent and powerful as today’s political right. These figures are unwilling to
change their views because they are conceptually unable to do so. The theory
they promote was designed to advance the cause of the political right in
general and neoliberalism in particular.
Nathaniel Tkacz (2012) provides a trenchant critique of the use of “open”
in digital rhetoric, demonstrating how its lineage can be traced from right-
wing thinkers to today’s digital theorists. He shows that the intellectual
frameworks applied by these writers are remarkably consistent, tracing the
preeminence of “open” in today’s discourse to the work of Karl Popper.
Noting that “Popper was not the first to write about the concept of open-
ness, nor even of the open society,” Tkacz observes that it was nevertheless
Popper’s two-volume The Open Society and Its Enemies (1945) that drew popular
attention to these concepts. The achievement of these volumes, authored
while Popper was in exile during World War II, is to rewrite the history of
Western philosophy:
fate of a nation and its people, or alternatively the class inequalities produced
by capitalism, are no longer the primary concern. The question is no longer
about identity, race or class, but whether or not a social programme, that
is, a set of knowledges and related practices, is able to change. Social pro-
grammes based on unchallengeable truths—the so-called laws of history or of
destiny—emerge as the fundamental enemy, and what might be considered
radically different political programmes in a different frame of analysis—
communism and fascism—are made equivalent. The positive side of this
political equation, the open society, is one where totalising knowledge is
necessarily impossible. Openness is necessary because nobody can know for
certain what the best course for society might be from the outset, and at the
same time it is assumed that openness provides the best possible conditions
for producing knowledge and, therefore, making better decisions. (Tkacz
2012, 389)
Popper himself at least glimpsed that his youthful exaltation of tolerance for
unlimited criticism was unavailing in many circumstances that resembled
those the MPS was constructed to counter. For instance, in a long footnote
in Open Society he grants the plausibility of paradoxes of tolerance (“Unlim-
ited tolerance must lead to the disappearance of tolerance”) and democracy
(“the majority may decide that a tyrant should rule”), but had little to offer
concerning how those paradoxes should be defanged. Yet around the same
time, Popper was already flirting with the Hayekian “solution”: membership
in the Open Society had to be prescreened to conform to a “minimum phi-
losophy”: but the principles of selection for that philosophy were never
made as explicit as they were by Hayek in practice. (71)
Popper was not fully on board with what later became the esoteric and
exoteric doctrines of the NTC. However, the theory and practice of those
closest to the theory’s development contain the paradoxical question of
“open.” Additionally, there are close conceptual connections between “open”
as what Tkacz rightly calls a “master category” and Hayek’s rewriting of
world political and intellectual history. In Hayek’s view, any hint of collectivist values is assigned automatically to a totalitarian system. Therefore, only
unregulated capitalism and concentrations of wealth and power can be con-
strued as freedom. Even the relative powers of these are irrelevant, much
like Aristotelian democracy is indistinguishable from the antidemocratic
political systems recommended by Plato (at least in some of his dialogues).
Mirowski draws attention to the curious symbiosis between “openness”
and the most rapacious commercial interests of our time. In Never Let a
Serious Crisis Go to Waste Mirowski discusses the concept of “murketing,” a
term he uses to point to techniques used by advertisers and other “modern
hidden persuaders [who] have gladly nurtured the conviction of the aver-
age person that he is more clever than those who seek to manipulate him
in order to render him all the more open to that manipulation” (2013, 140).
“One of the most fascinating technologies of faux rebellion in the modern
neoliberal murketing toolkit,” he writes, is “the construction of situations
in which the mark is led to believe she has opted out of the market system
altogether” (141). He mentions as exemplary “a new breed of ad agency
recruited unpaid volunteers to talk up products with which they were
unfamiliar among their friends and acquaintances,” noting that “it helps
if the initiating guerrilla cadres sport an edgy character, mime disdain for
their clients, and wax ironic about their faux rebelliousness, with names
like BzzAgent, the Ministry of Information, Bold Mouth, and Girls Intel-
ligence Agency” (142).
In Mirowski’s sense, murketing is almost a term for how individuals iden-
tify cyberlibertarianism as a political, countercultural movement, while in
Few conversations are more frustrating than trying to get such open-source
social media advocates to explain what kinds of protections their imagined
utopian platform would have against the hundreds of data brokers that
Pasquale (2014, 2016; Citron and Pasquale 2014), Angwin (2014), and others
point to in their work. Instead, demonstrating how profoundly cyberliber-
tarian dogma emerges from reactionary principles, these conversations de-
generate into vituperation, “aggrieved entitlement,” and repetition of the
term “open source” as if it were a utopia-generating magic word.
In many ways, even more than “open”—and fitting with the vaguely lib-
ertarian commitments out of which it is born—“freedom” is the concept
most directly at issue in cyberlibertarianism. Cyberlibertarianism’s most
profound and disturbing effect is to promote a highly specialized concep-
tion of freedom that has little in common with most people’s understand-
ing of the term. This is not exclusive to cyberlibertarianism, but is part
of both the libertarian and neoliberal programs at many levels. The most
influential central figures within the innermost “shells” of this movement
seem to be aware that promoting alternative notions of freedom is one
of the most effective ways to gain widespread popular support for views
that benefit only a small minority. The advent of digital technology has
given these ideologues essential new tools. However, these tools are ideo-
logical rather than technological. They provide metaphors, narratives, and
ways of speaking that inspire significant numbers of people to take a wide
range of actions. Unfortunately, this support is then used in ways that run almost directly counter to their interests.
In the hands of the far right, “freedom” is often treated as nearly syn-
onymous with “liberty.” When the right wing uses it, this word is set apart
from its more general meanings and attached to holistic bodies of politi-
cal thought, so that it can sound reasonable to oppose Social Security or
Medicare on the grounds that they offend liberty, despite the lack of main-
stream political thought that would make such assessments coherent. Mark
Levin, a right-wing ideologue and talk show host, titled one of his best-
selling books Liberty and Tyranny: A Conservative Manifesto (2009). He
misleadingly claims that social programs enacted under the New Deal (6–7)
are a threat; in another discussion, he claims that “the Founders under-
stood that the greatest threat to liberty is an all-powerful central govern-
ment” (4). The right uses the term “liberty” to mobilize populist energy
against the very same democratically enacted structures and programs that
fact that the term “free software” in modern usage sounds a lot more like it
means free beer than it does free speech. While “free speech” now refers
exclusively to libre-style freedom, “speech” and “software” are not analo-
gous nouns, since “speech” refers to an activity and “software” to a product.
“Free” is a keyword in many cyberlibertarian formulations, not just “free
software” but also “freedom of speech,” “free culture,” and “internet free-
dom.” It is also, of course, part of our ordinary political discourse, where
not just freedom across the board but freedom of speech, freedom of reli-
gion, and free markets are highly contested concepts. Today, “freedom” is
often used to refer to “a relative absence of law and regulation,” as in Berlin’s
concept of negative freedom. Even the core political concept of “freedom
of speech” is easily accommodated to “free as in free markets.” Stallman’s
free software is curiously silent about markets, despite his own expressed
hostility toward markets as a solution to social problems. This is especially
odd given that FOSS has turned out, perhaps paradoxically on first glance,
to be one of the greatest boons to concentrated financial and political
power in recent history.
The term “open source” has one of the oddest and most telling patterns of
development and usage of any in the cyberlibertarian lexicon. It has a spe-
cific, pro-capitalist, pro-business history: it was designed by far-right com-
puter advocates as a way to ensure that certain forms of apparently selfless
behavior by software developers could be harnessed for the extraction of
profit. Despite this, “open source” is often described as a kind of “commu-
nist” movement. However, the genealogy of this claim is difficult, perhaps
impossible, to ascertain. Additionally, its general outline is hard, maybe im-
possible, to reconcile with left-leaning political or economic imperatives.
One of the first writers to make this claim, oddly, is the same Richard
Barbrook whose “Californian Ideology” offers one of the most trenchant
critiques of cyberlibertarianism. In a paean published in 2000 to American
pragmatism as opposed to European “theoretical obsessions,” Barbrook de-
clares that “the same right-wing Americans” who “still virulently oppose
the public provision of welfare services considered indispensable in other
developed countries” nevertheless “are happily participating in the construction of cyber-communism” (2000, 31). The basis for this assessment is the
when so many pundits believed that the Net had almost magical powers”
(2005), he insists that it remains “hardly controversial” that “the sharing of
information over the Net disproved the neo-liberal fantasies of Wired. The
leading role of capitalist businesses within the open source movement was
incompatible with the anarcho-communist utopia.” In the late 1990s, “the
open source movement was the iconic example of non-commercial pro-
duction over the Net.”
Writing in a slightly more skeptical vein, literature and education scholar
Christopher Newfield shared that he “began to worry about open source
when the corporate world stopped worrying and learned to love open source”
(2013, 6). Newfield’s worry was triggered by the observation that Microsoft,
along with other major corporations, was shifting its attitude toward open
source, “acknowledging real overlap between the open and proprietary forms
of intellectual property that had been seen as polar opposites throughout
the 1980s and most of the 1990s” (7). Newfield describes a “coexistence and
stand-off between open source and corporate open source, between my right
to create and use, and your right to own and charge through the platform”
(11). He notes that in contrast to what he calls “corporate open source,”
“much of the current interest in open source borrows from Marxian and
autonomist traditions in Italy, France, and elsewhere that have been work-
ing for years on post-industrial labour systems, focusing in particular on
cognitive capitalism and immaterial labour” (10).
It is possible that “autonomous Marxism” is the part of the contemporary
left most interested in open source and most likely to see the movement
as compatible with its left (or progressive, or in Barbrook’s terminology
“communist”) goals. But this is a strange association that requires us to over-
look or reinterpret the words, beliefs, and practices of overtly right-wing
figures to make them compatible with left politics. Furthermore, the assess-
ment requires us to toss out the explicit, public justification for the creation of open source. It offers a novel interpretation of left economics of
labor whose justification within that economic system is hard, perhaps im-
possible, to provide.
Barbrook, Newfield, and many others seem to have confused the work
of Richard Stallman’s Free Software Foundation (Stallman 2002) with open
source, which has led to their mistaken understanding of history. FS, as
discussed below, sounds like an effort to take software outside the capitalist
Copyright may well be the signal cyberlibertarian issue. Mentioned in all the founding documents and hallowed in slogans and battle cries, copyright has rallied universal outrage longer than almost any other cause. However, its ubiquity
also points backward, as many of the characteristic rhetorical and political
moves occur here in great abundance. Almost everyone who identifies with
digital technology thinks that copyright is an unadorned evil. This is so
much the case that, with very few and only very recent exceptions, surveys
of the topic that are supposed to be neutral do not examine it from all sides.
Copyright defenders are often dismissed as self-interested, while attacks on
copyright are understood as democratic and selfless. Attacks on copyright
are “beyond left and right,” and attempts to situate them politically are
pushed aside.
Meanwhile, both the left and right demonize copyright without consid-
ering how they might agree on the legal doctrine surrounding intellectual
property. This occurs without any need for further elaboration or reflection
on the doctrine’s esoteric nature. Even worse, consideration of the cui bono
question regarding copyright is rarely found. Copyright is aligned with en-
tertainment companies and the “copyright lobby” (Falkvinge 2015; Khanna
2014; Masnick 2013a; Shapiro 2011), and this somehow prevents people from asking whether there might not be some kind of “lobby” that benefits from anti-copyright agitation. However, it has long been clear that many
of the most important technology companies benefit from intellectual
property being available to them without needing to pay creators for it.
Much as with free and open-source software, the anti-copyright discourse
can be read as a sustained discussion of the value of labor, with the bottom
line that labor should be entirely unpaid.
Much anti-copyright discourse positions itself as labor advocacy, even
when it comes from right-wing thinkers. This is similar to other rightist
propaganda campaigns, not least the decision to call legislation that makes
forming labor unions nearly impossible “right to work.” In hindsight, it
is strange that works by authors who are ostensibly left-leaning—such as
Siva Vaidhyanathan (2004), Cory Doctorow (2015), James Boyle (2008), and
Lawrence Lessig (2004, 2008), whose politics are so thoroughly drenched in
cyberlibertarianism as to be difficult to place on any conventional left–right
axis (see Mayer-Schönberger 2008; Mirowski 2017)—could present nearly
per file has been reduced to such a low amount that it is unlikely to be
noticed. However, there are still a variety of “sunk” costs involved with
having the basic equipment required to copy, store, and use these files.
Masnick and others like him argue that “scarcity doesn’t exist. Instead, you
have abundance. You can have as much content as you need—and in that
world, it makes perfect sense that there’s no costs, because without scarcity
there need not be a cost. Supply is infinite, and price is zero” (Masnick
2006). The entertainment industry is the persistent target of this analysis.
What riles Masnick and his supporters, extending up to the Pirate Parties
who often cross-cite Masnick and TechDirt, is that they see copies of elec-
tronic files that they could download and copy themselves for free, but are
prevented from doing so by the content creators.
In every forum, we repeatedly read the complaint that “copying isn’t
stealing because the original thing still remains.” The implication is that
if anyone has created a “copyable” product, they have no right to charge
for it and must expect and tolerate that it will be widely copied without
charge. One thing they hate with particular fury is digital rights manage-
ment (DRM) technologies, technical means for ensuring that the only
copies made are authorized by creators. In this one area, cyberlibertarians
oppose a technical fix to a legal and social problem, which is interesting
because they usually champion such solutions. As always, the dominant
principle is me-first: I can protect my own machine by encrypting my
communications, so nobody should stop me from doing it; but I can see
that new game or music file and can’t use it due to DRM, so DRM should
be outlawed. (DRM creates an interesting conceptual conundrum for the
vague first principle of freedom in Stallman’s description of free software,
since it’s unclear why the use of such software would not be a freedom that
developers should enjoy even though it ends up violating some of the other
freedoms.)
TechDirt and the Pirate Parties are fascinating because they receive little
to no support from the industries they claim to help. Masnick argues that
giving content away for free will provide more income to creators. He
believes that the availability of “free” content itself requires content provid-
ers to give away their own content for free, since “saying you can’t compete
with free is saying you can’t compete period” (Masnick 2007a). Masnick
also believes that if content creators knew what was good for them, they
would be giving away their content at no cost. TechDirt has long targeted
the hit HBO show Game of Thrones, which was widely regarded as the
most-pirated property during 2012 and 2013. The site promotes the bizarre
view that “HBO-style shows owe a lot to piracy for their cultural domi-
nance, because, if they were actually as exclusive as HBO wants to pretend
they are, they would have had a much harder time gathering fans” (Beadon
2012). Site writers even claim that “piracy is at least partially responsible for
the success of the show” or even “helps [the] show survive” (Geigner 2013).
One wonders how it is that any movies or television programs were “cul-
turally dominant” or “gathered fans” prior to the internet, when piracy at
scale was virtually impossible.
In fact HBO had wildly successful shows (The Wire, The Sopranos) before
online piracy became as easy as it is today. Huge amounts of content are
available freely online both legitimately and illegitimately and do not achieve
the level of popularity that Game of Thrones has. And perhaps most criti-
cally, despite the condescending tone TechDirt’s writers take regarding HBO,
the company has a fiduciary duty to maximize its profits, legal and con-
tractual requirements to pay the many workers who create the show, and a
host of web-savvy developers who understand distribution technology. The
free content advocates assume that HBO does not know its own business
and fails to explore every avenue for maximizing its profits, which seems
absurd given HBO’s success. It is counterintuitive that an HBO executive
would suggest abandoning its subscription model and making the entire
show available for free. However, from the consumer-first, anti-labor per-
spective associated with political libertarians, such a decision would make
a strange kind of sense.
There is no doubt that some downloads have publicity value and that
illegal pirating has led to people signing up for HBO subscriptions. This is
because the show itself has been deemed of significant enough entertain-
ment value that many people want to watch it, and in far greater numbers
than other shows. The only legitimate cause for the show’s success is that it
is a great product. The notion that it would not succeed without piracy or
that it would be more successful if it were even more freely available stands
in stark contrast to common sense, business practice, and the evidence of
real experience.
In this sense, HBO can be said already to be implementing the policy
Masnick claims to promote: allowing some people to get copies of the mate-
rial for free but requiring those who want the entire package to pay. If HBO
could earn more money by giving the show away more freely, it would be
required by its shareholders and fiduciary obligations to do so. The idea
that it is refusing to do so, knowing that it could increase profits from this
new strategy, solely to hold onto the power of “copyright monopoly” and
to demonstrate its arrogance over the fans who just want to appreciate the
show, challenges reason. It also verges on conspiracy theory of a particu-
larly ugly sort, since it accuses “media elites” of hoarding for themselves
something of culture or economic value just because they can. At the same
time it does not challenge the self-oriented, me-first, adolescent attitude
of cyberlibertarianism: I see it, it’s mine, not yours, no matter who made it
or how hard they worked on it.
It is remarkable to anyone thinking about the problem that the constituencies not represented in significant numbers among the pirate and free
content communities, including TechDirt, are the producers and distribu-
tors of content. TechDirt writers and commentators often speculate on the
arrogance of industrial and personal power they believe comes together in
the content industry’s refusal to give away its wares. They often see these
industries as working together in some conspiratorial fashion, significantly
or even primarily to prevent people who deserve (free) access to that con-
tent from getting it, even knowing that it would (in unspecified ways) ulti-
mately improve their profits. The apparent lack of knowledge displayed (or feigned, given TechDirt’s generally sophisticated engagement with business practices) about the competitiveness of corporate practices is startling. The idea that there is a hugely profitable business model out there that nobody is willing to try, even though refusing to try it makes illegal downloaders angry, is absurd. The suggestion is that upsetting the illegal downloaders is more important to these businesses than making money.
In the United States, if a wildly successful business model exists in an estab-
lished industry that nobody has yet tried, one can rest assured that some-
body will try it, and it will work. The fact that this has not happened is prima facie proof that the free content supporters are wrong in their basic assertions.
Covid-19 provided a particularly telling illustration of how cyberliber-
tarian anticopyright agitation works. In early 2020, the Internet Archive
(IA) announced the creation of a program under which it would “suspend
waitlists for the 1.4 million (and growing) books in our lending library by
creating a National Emergency Library (NEL) to serve the nation’s dis-
placed learners” (Freeland 2020). The IA did not mention that many works
in its lending library are still under copyright. The licenses under which it
and other libraries allow access to those works were negotiated with pub-
lishers and authors who deserve compensation for the use of those materi-
als. Rather than approaching any of those publishers and authors about
permission to use those works in this manner, the IA simply asserted that
the emergency allowed it to distribute unlimited numbers of the works
free of charge and free of restriction. As is typical of cyberlibertarian activ-
ism, the stripping away of ordinary rights (which largely accrue to indi-
vidual authors, not megacorporations like Disney) was presented as a social
good. Brewster Kahle, founder of the IA, stated that “the library system,
because of our national emergency, is coming to aid those that are forced
to learn at home. This was our dream for the original Internet coming to
life: the Library at everyone’s fingertips.” The project claimed support “from
over 100 individuals, libraries and universities across the world, including
the Massachusetts Institute of Technology.”
Kahle’s decision to create the resource without first consulting publish-
ers and authors speaks directly to the real investments of cyberlibertarian
anticopyright activism. The user comes first; the worker comes last, if at
all. The free provision of content benefits major corporations, especially
Google, which has itself been the target of litigation surrounding its distribution of unauthorized copyrighted material, especially on YouTube (Kafka 2014); that litigation caused it to change its practices and provide mechanisms for copyright holders to request the removal of material to which they have
rights. Even librarians (see, e.g., Madigan 2016) and others whose job would
seem to be in part the preservation and support of the producers of cul-
tural materials can become the opponents of that very work. They claim in
truly counterfactual fashion that not just authors and publishers but librar-
ies are “gatekeepers” interested not in providing but in preventing access to
those materials.
Publishers and authors reached out to Kahle, offering several sugges-
tions for how the NEL might move forward while still respecting copyright
interests. Rather than working with them, Kahle rebuffed their attempts and
doubled down on what most authors and publishers considered a novel
and “invented” (so far as statutory or case law is concerned) theory of
copyright, according to which a library “scans a print copy of a book they
have legally acquired, then makes the scan available to be borrowed in lieu
of the print book” (Albanese 2020). The faults in this “theory” are obvious.
Libraries could photocopy endless copies of paper books and then distrib-
ute them without paying the publisher or author, rather than purchasing
as many copies as they would like to have in their collections. Libraries
constitute a long-standing and critical feature of book distribution, allow-
ing hundreds or thousands of people to gain access to books, typically for
free in most public libraries. Authors and publishers are compensated with only a small fraction of the income they would receive if a copy were sold to every reader. Libraries’ role in the current publishing ecosystem reflects real
generosity and openness, which is also seen in newer policies that allow a
limited number of readers to access digital books based on the number
of licenses the library purchases. Both the long-standing and newer digital
policies evolved from close and largely cooperative connections among cre-
ators, publishers, libraries, and readers.
Kahle had to assert that these authors and publishers did not under-
stand their own interests in their own work. When authors such as Colson
Whitehead, Neil Gaiman, Chuck Wendig, and Alexander Chee, whose
works were included in the NEL, along with organizations like the Authors
Guild objected to the project, Kahle accused them of just retweeting “what
they see on social media” (Dwyer 2020). Kahle refused to back down or
modify his position, resulting in a lawsuit filed by Hachette Book Group,
HarperCollins, John Wiley & Sons, and Penguin Random House, in co-
ordination with the Association of American Publishers, who called the
NEL an “opportunistic attack on the rights of authors and publishers”
(Albanese 2020). As copyright activist David Newhoff wrote, “What is
most galling about the IA in this regard is the pretense to public service
and largesse against the backdrop of a real emergency. One cannot be ‘gen-
erous’ with the labor and property of others, particularly those who are,
themselves, vulnerable to the economic hardship caused by crisis” (2020).
Yet, not only do agitators like Kahle somehow manage to make this “gen-
erosity” with other people’s labor seem reasonable, they also make it appear
as if it benefits the very people being harmed.
In his 2016 novel I Hate the Internet, Jarett Kobek claims that Jack Kirby
is “the central personage of” his book “despite never appearing as a character within its pages” (21). Kirby is the comic book writer and artist who
created most of the characters now world famous due to the blockbuster
Marvel films. Kirby was famously denied participation in the profits on
account of the work-for-hire agreements he was made to accept in his early
A few years ago, the phrase crypto anarchy was coined to suggest the impend-
ing arrival of a Brave New World in which governments, as we know them,
have crumbled, disappeared, and been replaced by virtual communities of
Denning’s essay was posted to the Cypherpunks mailing list and reprinted
in Ludlow’s volume of essays, Crypto Anarchy, Cyberstates, and Pirate Utopias
(2001). However, despite (or perhaps because of ) Denning’s role as a pro-
fessional in the computer science and information security communities,
her voice has been relatively silenced in ongoing discussions of encryption.
Although the views of ideologues like Tim May and Eric Hughes have been
taken as authoritative, they are separated from the political architecture
that underlies their ideas.
In this, encryption resembles the debates around the politics of crypto-
currency (Golumbia 2016). Cryptocurrency uses encryption in the block-
chain software, but its name and origins in the cypherpunk community
explain why it has become the default meaning of the stand-alone term
“crypto,” which used to refer to encryption. Both were developed by far-
right thinkers as means to bypass or invalidate democracy. However, promoters of both cryptocurrency and encryption who claim not to be part of the extreme right assert that these technologies can be used to promote the
political values of groups who are often diametrically opposed to those of
cypherpunks. It is a mark of the strength of cyberlibertarianism that most
of these non-far-right actors seem unaware of the political origins of these
technologies.
It is not hard to understand why encryption and antidemocratic politics
are so hospitable to each other. In 1997, encryption advocates described the
importance of encryption technology as follows: “Encryption is an essen-
tial tool in providing security in the information age. Encryption is based
on the use of mathematical procedures to scramble data so that it is ex-
tremely difficult—if not virtually impossible—for anyone other than authorized recipients to recover the original ‘plaintext.’ Properly implemented
goes that because John Jay, James Madison, and Alexander Hamilton chose
to publish their commentaries on the U.S. Constitution under assumed
names, everyone in a democracy has an absolute right to act with some level
of significant anonymity in an arbitrarily large number of social contexts.
This is an example of “completely different and exactly the same” (Golum-
bia 2013), where the manifest differences between digital technology and
earlier forms of media are at once embraced and denied.
Anonymity and encryption come together in the shape of Tor, the well-
known tool originally developed by the U.S. Naval Research Laboratory.
While the Tor system makes use of encryption technologies, it is primar-
ily a system for anonymization. Here that means something more robust
than making up a fake name for a social media account. As the Tor Project
explains the technology:
In the 1990s, the lack of security on the internet and its ability to be used for
tracking and surveillance was becoming clear, and in 1995, David Goldschlag,
Mike Reed, and Paul Syverson at the U.S. Naval Research Lab (NRL) asked
themselves if there was a way to create internet connections that don’t reveal
who is talking to whom, even to someone monitoring the network. Their
answer was to create and deploy the first research designs and prototypes of
onion routing.
The goal of onion routing was to have a way to use the internet with as
much privacy as possible, and the idea was to route traffic through multiple
servers and encrypt it each step of the way. This is still a simple explanation
for how Tor works today. (“About Tor” n.d.)
Note that what the developers refer to as “privacy” here, as in most digital
discourse (see chapter 6), goes well beyond privacy as that right has been
defined in most human rights and legal regimes.
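To make concrete the layered encryption the quoted passage describes, the following is a minimal, purely illustrative sketch and not the Tor Project's implementation; the three-relay circuit and the use of the Python cryptography library's Fernet primitive are assumptions chosen only for illustration.

    # Illustrative sketch of layered ("onion") encryption; NOT the Tor implementation.
    # The three relay keys and the Fernet primitive are assumptions for illustration only.
    from cryptography.fernet import Fernet

    relay_keys = [Fernet.generate_key() for _ in range(3)]  # entry, middle, exit

    def wrap(message: bytes, keys) -> bytes:
        # Encrypt once per relay: the exit relay's layer is applied first (innermost),
        # the entry relay's layer last (outermost).
        for key in reversed(keys):
            message = Fernet(key).encrypt(message)
        return message

    def unwrap(onion: bytes, keys) -> bytes:
        # Each relay peels exactly one layer, in circuit order.
        for key in keys:
            onion = Fernet(key).decrypt(onion)
        return onion

    packet = wrap(b"GET https://example.com", relay_keys)
    assert unwrap(packet, relay_keys) == b"GET https://example.com"

In the actual protocol, each relay can remove only its own layer, so no single relay sees both where traffic originates and where it is ultimately going; this is the sense in which the network traffic, rather than the user, is anonymized, a distinction discussed below.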
The Tor network is often misunderstood because it has at least two
faces. This is obscured by the fact that in recent years, most users down-
load a single package to use Tor, which enables both functions. The first
function is the Tor network itself, the feature that hides the origins of net-
work traffic. The second function, “Hidden Services” (which technically
requires the first function), refers to websites hosted through the Tor net-
work, so that the actual host and hosting location are not accessible on the
open web. Although estimates vary, it is claimed that Hidden Services
make up only a small part of Tor network traffic. To be clear: you can use
Tor without using Hidden Services, but you can’t use Hidden Services
without using Tor.
Most people who use digital technology have heard of Tor because it
provides access to the Dark Web, best-known for black markets in com-
modities like drugs and weapons. These Dark Web sites use Tor Hidden
Services. The most famous of them was the original Silk Road, run by
now-convicted drug trafficker Ross Ulbricht. The Silk Road’s URL was
something like silkroad7rn2puhj.onion, an address that did not use the
ordinary Domain Name System used by regular websites and therefore
would only resolve if a user were running Tor.
Most of the troubling activity on the Tor network takes place on the
Hidden Services, at least according to critics, including Tor’s ex-director
Andrew Lewman (O’Neill 2017). Yet when advocates describe the benefits
of Tor, they almost always fail to mention Hidden Services. The Tor Proj-
ect describes itself as “an effective censorship circumvention tool, allowing
its users to reach otherwise blocked destinations or content. Tor can also
be used as a building block for software developers to create new communi
cation tools with built-in privacy features,” that “individuals use Tor to
keep websites from tracking them and their family members, or to connect
to news sites, instant messaging services, or the like when these are blocked
by their local Internet providers,” “for socially sensitive communication:
chat rooms and web forums for rape and abuse survivors, or people with
illnesses,” and that “journalists use Tor to communicate more safely with
whistleblowers and dissidents. Non-governmental organizations (NGOs)
use Tor to allow their workers to connect to their home website while
they’re in a foreign country, without notifying everybody nearby that
they’re working with that organization” (“Tor Project”).
All these uses of Tor involve the Tor network, but not Hidden Services.
Although these methods anonymize network traffic, they do not inher-
ently anonymize the user. If I use an encrypted messaging app over Tor,
both the sender and receiver of the message know who I am, regardless
of whether my identity is made clear in the technological infrastructure.
When a user accesses a newspaper website that is censored in their country, anonymity refers to preventing the government from determining that the access came from within its borders rather than to concealing who the reader is. It is less about the identity of the person taking
The Tor Project’s developers, promoters, and supporters insist on the need
for and acceptability of the technology by explicitly referencing human
rights discourse. In response to a wave of negative press reports in 2014,
Tor Project lead developer Roger Dingledine echoed the general paranoia
of the Tor community in a piece called “Possible Upcoming Attempts to
Disable the Tor Network.” Dingledine uses the typical cyberlibertarian strat-
egy of describing the most sympathetic, albeit hypothetical, user to legiti-
mize the technology’s existence: “The Tor network provides a safe haven
from surveillance, censorship, and computer network exploitation for mil-
lions of people who live in repressive regimes, including human rights activ-
ists in countries such as Iran, Syria, and Russia. People use the Tor network
every day to conduct their daily business without fear that their online
activities and speech (Facebook posts, email, Twitter feeds) will be tracked
and used against them later.” This is expert rhetoric, but it is far less
skilled at providing hard data, conforming to logic, or exhibiting clear
thinking. Criminal and harmful uses of Tor are at least as evident as those
by “human rights activists.” Even easier to overlook is the
fact that Dingledine positions the Tor organization, itself closely tied to the
U.S. intelligence and diplomatic apparatus, as an entity that can accrue
to itself the power to intervene in the affairs of sovereign governments as
it sees fit. If we replace the ideologically loaded framing Dingledine uses
with others less favorable—“Tor enables fascist networks to coordinate
attacks on democratic governments without fear of detection”—which is
no less logically plausible than Dingledine’s version—we see the problems
with private actors deciding for themselves when and if laws apply. This
is no exaggeration: as is detailed in chapter 7, far-right terror networks are
among the most persistent and innovative users of encrypted and anony-
mizing tools, specifically to organize antidemocratic violence.
Yet Dingledine and other Tor advocates insist that Tor must exist spe-
cifically because of core democratic values:
Every person has the right to privacy. This right is a foundation of a demo-
cratic society. For example, if Members of the British Parliament or US
Congress cannot share ideas and opinions free of government spying, then
they cannot remain independent from other branches of government. If
journalists are unable to keep their sources confidential, then the ability of
the press to check the power of the government is compromised. If human
This high-minded rhetoric sounds good, but it is turning the practice and
theory of democracy on their heads. “Privacy” as Dingledine describes it
has almost nothing in common with the right as it is understood in demo-
cratic polities. We need look no further than the attempted insurrection
against the U.S. government on January 6, 2021, to see how the British
Parliament and U.S. Congress cannot operate in the face of systems that
make it impossible for them to track threats. If what Dingledine means by “govern-
ment spying” is that Members of Parliament or members
of Congress should be using Tor for their ordinary communications among
themselves and their constituents, they would in many cases be violating
open records laws that exist precisely because democracy is based not on
“privacy” of governmental operations, but on their openness to examina-
tion by citizens.
I was involved in some of the press coverage of Tor at this time and
asked Dingledine on the Tor Project blog what he meant by “privacy” in
the above quote. I noted that despite his reference to its fundamental im-
portance to democracy, “the ‘right to privacy’ does not mean what you
assert it means here, at all, even in those jurisdictions that (unlike the US)
have that right enshrined in law or constitution.” Rather than answering
the question directly, Dingledine replied:
Live in the world you want to live in. (Think of it as a corollary to “be the
change you want to see in the world.”)
We’re not talking about any particular legal regime here. We’re talking
about basic human rights that humans worldwide have, regardless of par-
ticular laws or interpretations of laws.
I guess other people can say that it isn’t true—that privacy isn’t a universal
human right—but we’re going to keep saying that it is. (Dingledine 2014)
Thus, in a single exchange, Dingledine goes from saying that the very
existence of a democracy depends on the “privacy” Tor provides, to stating
that he does not care about or intend to refer to the legal infrastructure of
the United States or any other country. Instead, he refers to a right outside
the law, whose exercise can and does vitiate the law, and so much the worse
for the law (and democracy) if it does. This is not democratic theory: it is
“natural rights” political philosophy, one of the most common elements of
far-right antidemocratic agitation, especially in our online age. While it is
no surprise to see such politics cloak themselves in the language of democ-
racy, they remain wholly antidemocratic.
NET NEUTRALITY
Routinely depicted as one of the most important civil rights issues of our
age, “network neutrality” is an obvious site for cyberlibertarian activism in
the United States. To call it bizarre would be a radical understatement.
While some of the principles advanced by net neutrality activists may make
sense and be preferable to alternatives, the fact remains that the vision
articulated by those activists is almost entirely at odds with the facts on the
ground. Net neutrality activism is best understood as tilting at windmills,
inventing fictitious enemies and nightmare scenarios that have little to do
with anything we can observe or that is even likely to happen. Further, the
basic principle advanced by net neutrality activists is difficult to reconcile
with the actual operation of digital technology. Like many cyberlibertarian
causes, such as SOPA and PIPA, net neutrality posits a clear and morally
righteous principle against a range of nefarious actors who want to “destroy
freedom.” However, the substance of the debate is hard to pin down; to the extent that it
can be pinned down, it is out of step with obvious facts. As early as 2011, the digital stud-
ies scholar Kevin Driscoll was already documenting the wide range of topics
various actors described as NN (net neutrality), yet this remarkable poly-
semy has rarely been noted by the concept’s advocates. The energy expended
on net neutrality is disproportionate to any legible cause its advocates
claim to pursue. Therefore, one of its main effects is to divert political
energy away from far clearer causes. It brings together a wide range of
political actors as a kind of “digital bloc” whose outrage is all too easily
channeled in other directions.
The term was coined by legal scholar and digital technology expert
Tim Wu in two papers, “A Proposal for Network Neutrality” (2002) and
“Network Neutrality, Broadband Discrimination” (2003). Wu’s original
concern, which has remained constant across twenty years of activism, was
that home broadband operators—providers of physical internet access, typi-
cally referred to as internet service providers (ISPs)—might “restrict the
use of their broadband networks in ways that distort the market for inter-
net applications, home networking equipment and other markets of public
value” (2002, 1). Written in conversation with others in the libertarian-
leaning world of internet law at the time, including Lawrence Lessig and
Mark Lemley (see Wu 2003, 141), both of Wu’s early papers are explicit that
their major concern is business competition: “the danger of harm to new
application markets” (2002, 1) and “the public’s interest in a competitive
innovation environment centered on the Internet” (2003, 141); “the argu-
ment for network neutrality must be understood as a concrete expression
of a system of belief about innovation” (2003, 145). Despite the rapid adop-
tion of net neutrality as a principle to protect freedom of expression and
democracy, Wu argues that the doctrine’s primary purpose was to protect
businesses. This highlights the strange political bedfellows characteristic of
cyberlibertarian activism.
Wu’s papers from the early 2000s largely distinguish among three catego-
ries of participants in digital networks: content providers (such as the New
York Times, CNN, or individual bloggers), ISPs (such as Comcast or Veri-
zon), and users (typically understood to be individuals). Wu proposes a
“rule” that would “forbid broadband operators, absent a showing of harm,
from restricting what users do with their Internet connection, while giv-
ing the operator general freedom to manage bandwidth consumption and
other matters of local concern” (2003, 168). He introduces the term “dis-
crimination” to describe how ISPs might manage content they provide
to users. This arguably salts the discussion with a social justice flavor that
Wu himself admits might be misleading: “discrimination among Internet
applications is a different context” from discrimination according to crite-
ria “such as race, sex, or national origin” (152).
While Wu may have had innocent reasons for introducing this termi-
nology, it is hard not to see it as having had massively deforming effects
on the entire discussion. It is not the case that choosing to provide better
service to one content provider over another is a “different context” from
in the first place’” (2019, 335). Yet providing enough bandwidth for video-
heavy services clearly falls under the forms of “discrimination” that even the
2015 FCC Open Internet rules, to say nothing of 2010 UK Ofcom rules,
explicitly allow if justified by usage considerations. Indeed, it is difficult to
find a civil or human rights issue in the question of whether Netflix, ISPs,
or consumers should pay for a data-heavy service, especially since the costs
will likely be passed on to consumers regardless.
In a typical piece of NN agitation, the Wall Street Journal argued against
the rollback of the 2015 Open Internet rules by noting that in the early
2010s, cell phone providers routinely offered tiered data service plans that
sometimes included “zero rating” provisions for content from their own ser-
vices: for example, “AT&T Inc. gave paying customers unlimited usage
of its own online video service DirecTV Now” (McKinnon and Knutson
2017). (The relationship of NN to internet provided over cellular networks
is already complicated, and it has not always been clear that NN rules
designed for ISPs also apply to cell service.) Plans like this “drew criticism
from some net-neutrality advocates,” and the “practice became so wide-
spread that Obama administration regulators, in one of their final actions,
sought to declare several plans illegal, including those offered by AT&T and
Verizon.” Yet the proposed regulation did not take hold, and such plans
dwindled in popularity regardless. The reasoning is not hard to grasp: most
people prefer plans that treat data equally, and the marketing language
and facts about the plans became well-known. Provision of data over these
networks is of course closely monitored by users and advocacy groups, and
yet we hear few complaints about the kinds of internet-ending service-
blocking NN advocates routinely scream about.
Sometimes, critics who are more familiar with the history of NN make
claims that are closer to the facts in terms of what NN has actually meant
as government regulation. In so doing, they have perhaps unintentionally
demonstrated how distant these concerns are from what we typically under-
stand as civil rights. For instance, New York Times reporter Keith Collins
writes, “Many consumer advocates argued that once the rules were scrapped,
broadband providers would begin selling the internet in bundles, not un-
like cable television packages” (2018). But he fails to note that whether
consumers like the way cable companies package their services is hardly
an important civil rights cause. The American Library Association offers
a similar assessment: “In a worst-case scenario, the internet could degrade
into something similar to cable TV where you get priority access via the
more expensive ‘gold’ subscriber plan vs. a lower level of access from the
less expensive ‘bronze’ plan” (Bocher 2018). Again, such plans are the norm
across many kinds of products and services and are rarely understood as
matters of civil rights.
Activists across the country became incensed by the Trump administra-
tion FCC chairman Ajit Pai’s reversal of the 2015 FCC Open Internet
order, which they knew would have a negative impact. Consider this sam-
ple of the many “news” and opinion pieces that explained exactly what
would happen. A piece in TechCrunch declares that NN rules contain the
“hunger these companies [ISPs] have had for the destruction of your right
to a neutral internet”; “America may not have a dictatorship, but it now,
if net neutrality is repealed, will have an oligopoly,” explicitly confusing
political sovereignty with corporate size and market sector dominance;
“the future is darker than you think” because we may pay “$20 more for
‘unlimited’ access,” which will lead to the United States becoming “less
attractive to talent” from abroad (Gorodyansky 2017). Similarly, New York
attorney general Eric T. Schneiderman said in a statement:
The FCC’s vote to rip apart net neutrality is a blow to New York consumers,
and to everyone who cares about a free and open internet. The FCC just
gave Big Telecom an early Christmas present, by giving internet service pro-
viders yet another way to put corporate profits over consumers. Today’s roll-
back will give ISPs new ways to control what we see, what we do, and what
we say online. That’s a threat to the free exchange of ideas that’s made the
Internet a valuable asset in our democratic process.
Today’s new rule would enable ISPs to charge consumers more to access
sites like Facebook and Twitter and give them the leverage to degrade high
quality of video streaming until and unless somebody pays them more
money. Even worse, today’s vote would enable ISPs to favor certain view-
points over others. (2017)
In an email to a New York Times reporter written after the formal repeal
of the Open Internet rules, Democratic FCC member Jessica Rosenworcel
stated that “internet service providers now have the power to block web-
sites, throttle services and censor online content. They will have the right to
discriminate and favor the internet traffic of those companies with whom
they have pay-for-play arrangements and the right to consign all others
to a slow and bumpy road” (Collins 2018). Two years later but still project-
ing into the future, a piece in the tech publication OneZero argued that
“with no net neutrality rules, ISPs can also use their power as internet
access gatekeepers to disadvantage companies they compete with. AT&T,
for example, only hits users with costly and unnecessary bandwidth usage
caps and overage fees if they use a competing service like Netflix—but not
if they use AT&T’s own streaming services” (Bode 2019).
All these statements use the language of civil rights in a general way, but
then refer to specific economic concerns about costs for tiers of service that
are not civil rights concerns. These concerns exist in many other sectors
of media and communications, where they are rarely, if ever, framed as
having anything to do with civil rights. Again, this is not to suggest that
the repeal of the 2015 Open Internet order was correct. It is to note only
the surprising disconnect between the language used to rally support for
NN versus the concrete impact of the policies themselves.
Despite the enormous number of statements to this effect, follow-up
coverage of what would seem an incredibly important question was sur-
prisingly thin. Surprising, that is, unless one is attuned to
this typical pattern in cyberlibertarian rhetoric and press coverage, where
claims of doom and destruction are routinely issued yet retrospective fact-
checking is virtually anathema. One telling exception was a piece by Zachary
Mack (2019) in the tech magazine The Verge, in which he interviewed for-
mer FCC commissioner, law professor, and Public Knowledge cofounder
Gigi Sohn, a reliable repeater of cyberlibertarian dogma.
Mack notes that even the conservative National Review stated that “the
internet apocalypse didn’t happen” but still asserts that “there’s a lot of lit-
tle things that did happen that aren’t great.” Sohn disagrees: “I don’t think
these are little things. I actually think they’re very big things.” She then offers
three examples of changes due to the repeal of the FCC’s Open Internet
order: “Verizon throttling the Santa Clara Fire Department’s broadband”;
“T-Mobile, Sprint and AT&T were found to have sold the precise geoloca-
tion data of their customers”; and “a customer bought his own router for
two hundred dollars and Frontier kept charging him ten dollars a month to
rent it.” Neither Mack nor Sohn notices that none of these things has any-
thing to do with what the public understands NN to be, even though Sohn
somehow attributes the selling of geolocation data to NN. The power to
regulate such practices may well be present in the FCC’s Open Internet
order, but it isn’t NN as typically defined. Not only that, these reflect broad
industry practices that should be monitored or even abolished for other
reasons. Worse, if Sohn’s point is that the FCC regulates ISPs while nobody
regulates the many platforms and data brokers that traffic in personal data,
she is contributing to an interindustry battle between ISPs and platforms
that has little to do with civil rights.
Nevertheless, claims about NN as a vital civil rights cause turn up again
and again. In July 2021, the ACLU praised President Biden’s request that
the FCC restore the 2015 Open Internet order, but then insisted that “net neu-
trality can’t wait” because “the internet’s anti-corporate censorship rules
known as net neutrality” ensure that ISPs “cannot block or slow down users’
access to internet content it disapproves of for business or political reasons”
(Marlow 2021). The insertion of “political reasons” is especially telling.
The ACLU claims that “powerful ISPs like Verizon and Comcast have been free
to do just that—and they have,” but “do just that” refers not to restrictions
of access to content for political reasons. Rather, the typically threadbare
examples they offer refer to favoring or disfavoring content from Microsoft,
Netflix, YouTube (aka Google), and the ISPs themselves. Like the other
few post-hoc analyses, many of their examples are practices that the Open
Internet order does not cover. None involves blocking access to politi-
cal information at all. The remaining examples come down to the same
issue we have seen repeatedly: consumers may have to pay a little extra for
unlimited access to video streaming platforms or certain audiovisual real-
time communications services. As always, the strange implication is that
by setting up shop as an ISP, a company should be compelled to provide
all its customers with unlimited bandwidth at the highest available speed
for an identical price.
While this may well be one reasonable approach to service pricing, it is
not obviously more or less in line with civil rights than many other pricing
plans, nor are such pricing plans in other media such as cable and specific
platform offerings typically discussed as civil rights issues. Further, the
bottom line always implies that some very large corporations, especially
Google, Facebook, and Netflix, should be able to demand any amount of
bandwidth from ISPs and pass along the costs of that usage to either ISPs or
consumers. While consumers will surely pay that price in the end, it is odd
to look at this complex network diagram and decide that civil or human
As Newman notes, the rules we currently refer to as net neutrality are for-
mally called Open Internet rules by the FCC (see FCC 2015).
None of them takes on the more problematic original move: how is it
that “net neutrality,” a term coined to increase “innovation and competi-
tion” and whose entire ethical–political charge might at best be put on the
same order as antitrust actions, can have captured the popular imagination
to mean something entirely different, to which the public and activists
cling as if something incredibly important hangs on it, and indeed where
vast numbers of agitators can be harnessed to “save the internet” from a
danger it does not face, in the name of a policy that will not be mooted, let
alone realized?
Net neutrality is an especially interesting issue in digital politics. In con-
trast to many other questions addressed in this book, opponents of NN tend
to come from the political right (and ISPs) and its supporters come from
the progressive or centrist left and major technology corporations. But NN
does not provide an exception to cyberlibertarian principles. Instead, it
shows the especially troubling “liberal” face, in which not just the language
but the passion and energy of people genuinely devoted to civil rights are
turned toward advocating for something that, in the words of two espe-
cially thoughtful journalists, “isn’t necessarily even about neutrality, at least
not in the way it sometimes seems. Instead, we’re sussing out where inno-
vation begins and what government should do to encourage it” (Madrigal
and LaFrance 2014). That is, NN becomes a kind of deregulatory regulation
whose main purpose is to ensure “innovation,” which is not at all what well-
meaning civil rights activists appear to think it’s about, but is exactly the re-
construction of civil rights promoted by right-wing free market ideologues.
CHAPTER 6
Political Myths
Few sites are more obvious for observing cyberlibertarianism in
action than the political sphere, broadly defined. This includes
the nature and form of government, as well as popular partici-
pation in government. Cyberlibertarian discourse is characterized by the
remarkable certainty of digital utopians that digital technology not only
has political effects, but also radically reforms existing governmental struc-
tures in democratic directions and realizes the vision of those existing struc-
tures. Digital democracy is said to seek the same goals as our democratic
systems, but in ways both unavailable to and unforeseeable by prior gen-
erations. When reading the most strident of these writers, it is difficult to
understand how democracy ever emerged as a political form without the
liberating power of the technology we have today.
Politics is one area where the interestedness of digital utopians is most
evident. The loudest voices who advocate for the beneficial effects of digi-
tization on politics often speak passionately, directly, and stridently about
matters in which they have never before expressed any interest. When Clay
Shirky, Jeff Jarvis, and others first wrote about the role social media played in
political uprisings in Iran (then called the “Twitter Revolution”) and later
in the Arab Spring (see Guesmi 2021 for a retrospective view of these claims),
it appeared to be the first time they had ever written anywhere about the
Middle East. The area became interesting because there was now some-
thing to say about the “digital” in it. After the Twitter Revolution failed to
result in a change in the political leadership and structure of the state, Iran
ceased to be of interest to digital utopians. As a result, these writers have
rarely, if ever, written about it again. Nevertheless, among the digerati,
DEMOCRATIZATION
The fundamental political claim made for digital technology is that it democ-
ratizes. Few truths seem so unassailable. Yet as with so much else in cyber-
libertarian discourse, few slogans are harder to translate into sensible ideas
that can withstand ordinary levels of scrutiny. In pre-internet usage, espe-
cially in the mid-twentieth century, the word “democratization” had a fairly
specific meaning: the movement of political regimes toward more demo-
cratic forms (pace the obvious nuance needed to define “democracy” as a
political system to begin with). The word was especially prominent in de-
velopment discourse; it is easy to find documents from the United Nations
and global development bodies discussing the desirability of “democrati-
zation” of states in Africa and South America, for example (Whitehead
2002; “Democratization”). These documents looked at fundamental issues
of political organization, such as the availability and fairness of voting pro-
cedures, the stability of institutions, access to education, and various civil
rights. Such discourses have been the subject of much criticism and analysis,
across the political spectrum. This is due to the complexities of develop-
ment and the question of what democracy means, particularly in countries
that do not have long traditions of discussion and debate on the subject.
As used in digital discourse, though, “democratization” has taken on an
entirely different set of meanings, whose relationship with this earlier mean-
ing is both tenuous and problematic. We are told with little hesitation that
it would be desirable and socially productive to “democratize” almost every
institution and civil society sector we might name, including information,
knowledge, finance, the professions, money, energy, and commerce of all
sorts. The meaning of “democratize” in these discourses may seem rela-
tively clear, and by stipulation not overtly political. The idea is that the
benefits of these parts of society have exclusively been enjoyed by segments
of the population limited by geography or access to political or economic
power. This group is often disparaged as an “elite,” a blurry amalgam of un-
earned social status, such as legacy admissions to universities, and directly
earned accomplishments, such as medical or legal licensure. So the inter-
net, and especially the web, is said to “democratize” knowledge. Providers
like Google and Wikipedia make information easily available to people who
would have otherwise had to seek it out in piecemeal and often haphazard
fashion, if they could access it at all.
The other reason I see is perhaps more subtle: we should make AI accessible
for the sake of social and economic stability. In the near future, ML and AI
will automate many jobs, and much of the value created by AI will accrue
to those who develop it (though overall the dynamics of the free market
will ensure that everyone will end up benefiting to some extent). One way
to counter-balance this is to make value creation through AI as broadly
available as possible, thus making economic control more distributed and
preventing a potentially dangerous centralization of power. If everyone can
use AI to solve the problems that they have, then AI becomes a tool that
empowers individuals. If using AI requires contracting a specialized com-
pany (that will most likely own your data), then AI becomes a tool for the
centralization and consolidation of power.
many smartphone apps, and beyond. While it is true that consumers may
not be able to manipulate these algorithms directly, it is unclear what ben-
efit there would be in doing so to “empower” anyone, especially if major
technology players like Google have the economic power to gather any real
loci of power that threaten their own. It is important to note that Chollet
is by no means a right-wing actor. He seems genuinely to believe that
“democratizing AI” is a good political thing. Yet he falls into the typical pat-
tern of deploying boilerplate cyberlibertarian rhetoric, in the nominal inter-
ests of individual freedom, to support the interest of one of the world’s most
powerful companies. As Dorothea Baur, an AI tech ethicist, writes, any “link
between ‘making AI accessible to as many people as possible’ and democ-
racy” simply “doesn’t exist” (2020).
A slightly narrower instance of “democratization” is found in “democ-
ratizing journalism”: the idea of “a radically ‘democratized’ media, decen-
tralized, participative, and personally emancipating” (Carr 2018); or put
differently, a “decentralized, multimedia communication network that would
encourage the development of a ‘democratic personality’” (Carr 2018,
quoting Turner 2018). Given the long-standing tie between journalism and
democratic theory, the stakes of “democratizing” discourse should be clearer.
Surely we would only want to “democratize journalism” if the result were no worse
than what it replaces or augments. Indeed it is hard to imagine a reason for
“democratizing journalism” other than improving democracy and journal-
ism’s well-understood functions within it. Again from Carr: “The founders
of companies like Google and Facebook, Twitter and Reddit, promoted
their networks as tools for overthrowing mass-media ‘gatekeepers’ and giv-
ing individuals control over the exchange of information. They promised,
as Turner writes, that social media would ‘allow us to present our authentic
selves to one another’ and connect those diverse selves into a more harmo-
nious, pluralistic, and democratic society” (2018, quoting Turner 2018).
Yet, Carr continues, “the democratization of media produced not har-
mony and pluralism but fractiousness and propaganda, and the political
energies it unleashed felt more autocratic than democratic.” That is, in a
familiar pattern, the promotion of technological “democratization” not only
has, as Baur (2020) rightly says, “nothing to do with equality in a democratic
sense”—that is, democracy—but also uses the language of democracy to
oppose democracy. The assumption—far too closely echoing the senti-
ments of right-wing propagandists like Alex Jones—that the “mainstream
media” is a “gatekeeper” that wants to keep the truth from citizens, while
participatory “journalism” is somehow free from self-interest and bias, turns
hundreds of years of institution-building on their heads.
It is no accident that the internet is typically held responsible for some
of the most destructive practical effects on journalism worldwide. This is
partly due to conspiratorial disparagement of institutions of journalism as
“elites” and “gatekeepers,” which in part distracts from considering tech-
nology providers themselves as massively powerful “gatekeepers” (Hindman
2009, 38–57). The internet has also diverted the revenue streams that once
made newspapers and magazines sound businesses in the United States
(Stoller 2019; United Nations 2022; Weber 2014). During this series of
crushing blows to journalism, digital promoters—frequently with signifi-
cant self-interest, but just as often full of their own ignorance and ego—
told us that burning journalism to the ground would produce better and
more “democratic” journalism. Those who pushed back on this unlikely and
fact-resistant narrative were dismissed with the kind of bullying tactics famil-
iar across cyberlibertarian promotion. As Carr writes, again echoing Turner,
“Just as we failed to see that democratization could subvert democracy,
we may have overlooked the strengths of the mass-media news organiza-
tion in protecting democracy. Professional gatekeepers have their flaws—
they can narrow the range of views presented to the public, and they can
stifle voices that should be heard—yet through the exercise of their profes-
sionalism they also temper the uglier tendencies of human nature.” Yet
Carr also observes that Turner’s is one of the few voices in the 2018 collec-
tion he is reviewing, Trump and the Media (Boczkowski and Papacharissi
2018), to derive the obvious lesson: “What democracy needs first and fore-
most is not more personalized modes of mediated expression [but rather]
a renewed engagement with the rule of law and with the institutions that
embody it” (Turner 2018).
As a rule, “democratization” appears to mean tearing apart institutions,
regardless of their nominal functions, including institutions whose pur-
pose is to promote or even embody democracy. Thus the “democratization
of knowledge” slips readily into questioning the idea of knowledge and the
institutions established to develop and promote it, particularly the idea that
one needs to study and work in a given area for a sustained period of time to
speak about it with authority. It is hard to see how democracy is promoted
when Wikipedia, which is rapidly becoming the world’s most important
and professional accreditation, but rather that such expertise is the prob-
lem itself.
When we turn to cryptocurrency and blockchain, the absurdity of democ-
ratization reaches its apotheosis and, along the way, reveals its deepest com-
mitments. Blockchain discourse is replete with claims of “democratization.”
This is odd given the remarkable hostility toward democratic governance
that features prominently in that discourse, often from the same people
(Golumbia 2016). Cryptocurrency and Bitcoin promoters insist that their
products will “democratize money,” “democratize finance,” and “democ-
ratize capital.” These ideas are ludicrous when examined for coherence
(Golumbia 2020a, 2020b), with “democratize” implying exemption from
government regulations, especially those of democratic governments.
It should be no surprise that a common trope in blockchain discourse is
that the technology “democratizes democracy”—sometimes through block-
chain voting technology (yet another of blockchain’s purported use cases that
fails conceptually and has never been made to work in practice; see Gerard
2017; Madore 2017), and sometimes for “government transparency” that is
guaranteed by the “trust” proponents say will be created by blockchain appli-
cations (Ogundeji 2017). Sometimes the claims become even more florid, as
when the lawyer and self-described “cryptocurrency millionaire” Jeffrey Berns,
CEO of Blockchains, “a company with no history,” purchased “an enor-
mous plot of land in the Nevada desert—bigger than nearby Reno . . . for
$170 million in cash” (Popper 2018). “Drawn to Nevada by its tax benefits,
including its lack of income tax,” Berns claims he bought the land in Storey
County to build “what he calls a ‘distributed collective entity’” that would
“operate on a blockchain where everyone’s ownership rights and voting
powers will be recorded on a digital wallet”—though he “acknowledged
that all this is way beyond what blockchains have actually accomplished.”
Replete with artists’ renderings of futuristic-looking cities, the plan depends
on the Nevada governor to help create “innovation zones” where, despite
Berns’s declaration that he is “not anti-government,” he aims to “democratize
democracy” (Nevett 2021). In these zones, “residents would manage most
of their affairs on blockchain applications,” although “many of the applica-
tions needed for the city have not been developed.” Even in business-
friendly Nevada, “the elected leaders of Storey County voted to oppose the
formation of a ‘separatist’ government on land owned by Blockchains.” That
is, on the ground, members of a democratic government saw technology
DECENTRALIZATION
is especially true since virtually all the gains made by progressive actors
in the past two centuries, along axes like “civil rights” and “human rights,”
have been those implemented and, more important, made long-standing
by democratically elected governments. Thus, the anarchist insistence that
government be entirely or largely abolished cannot help but at least raise the
question of how ending Social Security, Medicare, and Medicaid in the
United States or the National Health Service in Great Britain might better
realize the goals of those programs than leaving them intact would.
It should be no surprise that decentralization has been taken up as a
keyword that realizes the goals of antigovernment activists on the right and
far right. Many invocations of the neutral value of “open” in the context of
digital technology hark back to the writings of both Hayek (see chapter 2),
for whom “centralized planning” referred to communism, socialism, and
even fascism; and Hayek’s contemporary and in some ways rival Karl Pop-
per, whose promulgation of an “open society” mirrors Hayek’s disdain for
“centralization” (Tkacz 2015). Although Winner is correct to note the prev-
alence of decentralization in left political discourse, the pervasive influence
of neoliberal theory, especially through its rightist exemplars, requires par-
ticular attention. (For more on the history of right-wing neoliberalism, see
Mirowski 2013; Mirowski and Plehwe 2009. For more on the function of
neoliberalism in right and left politics, see Golumbia 2016.)
As Nathaniel Tkacz explains in detail, much of the foundational theory
of digital technology is derived from a relatively decontextualized reading
of Hayek. For example, Wikipedia founder Jimmy Wales relied on Hayek’s
“Use of Knowledge in Society” (1945) in his plans for the open encyclope-
dia. While Hayek’s explicit targets in The Road to Serfdom are the totali-
tarianisms of communism and fascism, nascent in that work is the idea
that government itself is “centralized” and that “any attempt at centralized
planning (i.e., socialism, communism, fascism) which is founded on exactly
the assumption that what is best for all society is directly knowable, is
likely to produce bad decisions that only satisfy a small group. For Hayek,
giving one group the ability to make decisions for the whole results in the
overall reduction of liberty and the advent of totalitarianism” (Tkacz 2012,
389–90). Tkacz rightly quotes from Hayek’s Road to Serfdom: “It is only as
the factors which have to be taken into account become so numerous that it
is impossible to gain a synoptic view of them, that decentralisation becomes
imperative. . . . Decentralisation has become necessary because nobody
physical nodes that can be seen and diagrammed. Once they have been
diagrammed, their contribution to the political whole is clear. Political power
shared among people, though, is not easily described in this sense, as it is
affected by shifting alliances, votes for policy and representation, and so
on. We might look at a network and say that it is more or less decentral-
ized, but it is much more difficult to look at a polity and determine that it
is structurally decentralized. It is not clear what such a description might
mean, especially when it is compared to democracy itself.
The idea that the “network” is decentralized is more metaphorical and
observer-dependent than the description suggests. If the point is that what
we see as “hierarchical control”—in political terms, totalitarian, dictatorial,
or monarchical authority—is impossible in a decentralized technical sys-
tem, we have to do a lot of work to determine what we mean by “control.”
One of the foundational texts in digital studies, Alexander Galloway’s 2004
Protocol: How Control Exists after Decentralization, makes this problem clear:
despite the fact that digital networks appear resistant to what we think we
recognize as “control,” which here means both political and technical power,
the internet “has a different, more horizontal system of command and con-
trol” (69). Galloway describes protocol itself—including “technical stan-
dards (such as the OSI Reference Model), network technologies (HTTP),
[and] institutional histories (IEEE)”—as “a technology able to establish real-
world control when it lacks certain fundamental tools such as hierarchy,
centralization, and violence,” a “massive control apparatus that guides dis-
tributed networks, creates cultural objects, and engenders life forms” (243).
This is abstract, yet there are concrete aspects to the critique as well.
Galloway and others suggest that a distributed network pushes more of its
processing functions out to nodes, especially toward nodes controlled by
end users. This is in contrast to Baran’s packet-switching model, which
depends more on some nodes with special privileges and powers. These
nodes can be compared to cloud services and routing systems such as DNS.
In a “decentralized” network, power is widely shared, but not all nodes are
equal. In a distributed network, the nodes are more or less equal. But due
to the context-dependent nature of what is being measured when we ask
whether a system is decentralized, it is not clear which criteria should be
used to determine what to call it.
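The point can be made concrete with a toy calculation—my own illustration, not Galloway’s or the author’s—using the Python networkx library. By one formal criterion, the spread of node degrees, a ring of equal nodes looks thoroughly decentralized and a star thoroughly centralized; yet the measurement says nothing about who operates the nodes, who writes the software, or who pays for the links.

import networkx as nx

# Two toy topologies with ten nodes each: a star (one hub, nine leaves)
# and a ring (every node connected to exactly two neighbors).
star = nx.star_graph(9)
ring = nx.cycle_graph(10)

for name, g in (("star", star), ("ring", ring)):
    degrees = list(dict(g.degree()).values())
    # A topology can be "equal" by one measure and unequal by another; here
    # we report only degree spread and average path length, which say nothing
    # about ownership, governance, or funding of the nodes.
    print(name,
          "degree spread:", max(degrees) - min(degrees),
          "average shortest path:", round(nx.average_shortest_path_length(g), 2))

Whether either topology deserves to be called “decentralized” in a political sense is exactly the question the formal measurement cannot answer.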
Consider Facebook. The social network is a singular corporate entity with
remarkable power and a remarkably powerful platform. Thus in casual
but many other core notions, such as “peer production” and “open.” Along
the way they cite the work of thinkers and projects that emerge directly
from right-wing thought without noting, let alone reflecting on, their re-
purposing of these concepts for left-wing purposes. Indeed it is hard to
read the article without concluding that the authors are unaware of the polit-
ical histories of the concepts they deploy. Their overall reliance on decen-
tralization as an inherently better value extends from “ownership of the
platform (understood as a means of production)” (2016, 194) to “technical
infrastructure” to “governance model” to “copyright regimes” to the “value
generated by peer production” (195). Although they rightly note that “gov-
ernance in peer production may actually, and counter-intuitively, tend
towards concentration (‘oligarchy’) as projects grow”—citing the work of
Shaw (2012) and Shaw and Hill (2014), and stating that they “do not wish
to suggest that there is a mix of ‘ideal’ levels of decentralisation that these
five features can achieve in order to build, in turn, an ‘ideal’ peer-production
platform” (203)—they conclude by arguing that the typology they develop
serves the purpose of “breaking down and exposing the political benefits of
decentralisation for specific peer production dynamics.”
Yet decentralization remains almost entirely a metaphor throughout their
typology, without any attempt to drill down into the specifics either politi-
cal scientists or network scientists have applied to understand whether sys-
tems are decentralized and in what respect. Decentralization, as analyzed
by the authors, follows the typical pattern of cyberlibertarian discourse,
and roughly corresponds to “good.” The authors’ interpretation of Marxist
or socialist desiderata aligns with a notion of decentralization, but they have
not shown why the reader should accept these attestations. Furthermore,
the authors have not explained how to square their endorsement of values
for their political purposes with the frequent endorsement of the same
formal values for exactly the opposite political purposes when deployed by
their opponents.
In recent years, decentralization has gained attention due to its critical
importance to Bitcoin and the blockchain software that other cryptocur-
rencies are built on. Even more than the Bitcoin blockchain, the Ethereum
blockchain, created to host applications other than cryptocurrencies, fore-
grounds decentralization as a core virtue. In fact, one of the first efforts
to realize the advertised potential of the Ethereum blockchain, so-called
network are in some way equal. Indeed that is the connotation of “peer”
in some of its most widespread digital uses, such as “peer production” and
“peer-to-peer collaboration.”
Like other terms of cyberlibertarian art, “peer” admits of multiple defi-
nitions whose simultaneous usage serves to obscure the nature of the claims
made for it. Consider the opening of the Wikipedia entry for “Peer-to-Peer”
(2021):
In the first paragraph, an assertion is made about the abstract nature of the
network, according to which peers are “equally privileged” and “equipotent.”
This is then metaphorized to “egalitarian social networking” in “social con-
texts,” with the heavy implication being that peers in society are also equally
privileged and equally potent. Much of the internet-driven social theory
surrounding peer-to-peer takes for granted this fundamental equivalence,
as if it is imposed on system participants by virtue of participation itself.
While some network topologies insist that nodes contribute only a spe-
cific amount to a network, and participate only if they can contribute that
amount (of processing power, disk storage, etc.), in practice this is rare.
Although we can see some angles from which every X account is equally
privileged and potent, there are many others from which that is not the
case. Even though each account starts with zero followers, once time has
passed and some have grown to include hundreds of thousands or millions
of followers, the egalitarianism they started from is less clear. At first, it
might have seemed that only individuals would run X accounts, but corpo-
rations, nonprofits, government agencies, and other institutions also run
them. Many considerations outside those confined to the network come into
play, including the celebrity of the user, their appearance in other media
outlets, or their possible roles in business or politics. So we might look at
X and call it a “peer-to-peer” network, because each account has the theo-
retical ability to “speak to” any other account. But over time, massive power
imbalances come to exist. Even if we think that “peer-to-peer” names a
virtue in social interactions, we make a mistake in looking at a system’s
initial state—in which there is a theoretical description of “nodes” or “peers”
as fundamentally equal—and assume that this carries through to its later
history.
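A simple simulation—offered as an illustration of the general point, not as anything from the text—shows how quickly the nominal equality of “peers” gives way to extreme concentration once attention compounds on itself.

import random

random.seed(0)
followers = [1] * 1000             # 1,000 accounts begin as exactly equal "peers"
for _ in range(100_000):           # each new follow goes to an account with
    pick = random.choices(          # probability proportional to the followers
        range(len(followers)),      # it already has: a rich-get-richer process
        weights=followers)[0]
    followers[pick] += 1

followers.sort(reverse=True)
top_share = sum(followers[:10]) / sum(followers)
print(f"The top 10 of 1,000 accounts now hold {top_share:.0%} of all follows")

Nothing about the network’s formal description changes during the run; the inequality emerges entirely from its history.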
Claims that “decentralized social media” might address the manifest prob-
lems miss this point entirely. Although X, to take the most obvious exam-
ple, is certainly a centralized provider, once we are inside the ecosystem there
are many angles from which it appears to be highly decentralized. Save
for the use of features like blocking and muting, anyone can speak to any-
one else. Even if someone has zero followers, there is a theoretical chance
of their tweet being read by anyone. However, the MIT Media Lab report
on decentralized social media by Barabas, Narula, and Zuckerman found
that despite overall support for decentralization, there are still limitations:
“Protecting the future of speech online involves not only these ambitious
experiments in decentralization, but the cultivation of an ecosystem of com-
peting publishing platforms, diverse in governance strategies, interopera-
ble and connected by a diversity of federated clients. We hope that those
most concerned with the potential of the network public sphere will sup-
port not only experiments with decentralization, but the legal, normative
and technical work necessary for these types of projects to thrive” (2017,
112). Throughout the report it’s evident that the crucial factor isn’t the
platform’s decentralization but rather the presence of laws, regulations, and
norms determining its aggregate effects on democracy as a whole and indi-
vidual citizens.
This mistaking of an abstract or initial state for a later set of facts is
particularly pronounced in blockchain and cryptocurrency. Theoretically,
in Nakamoto’s vision, full nodes (those that process all Bitcoin transactions
and occasionally receive cryptocurrency tokens as a reward) are
described abstractly, as if they are “equally privileged” and “equipotent” par-
ticipants in the blockchain network. They might well be described that way.
However, because any computer can run the Bitcoin software, the contri-
bution of any one node is affected by a number of factors, especially pro-
cessing power. Therefore, the more powerful the processor is, the more
tokens it can generate. As the Bitcoin and Ethereum networks have grown,
have done most of the work on Bitcoin Core, and that for the reference
client of Ethereum, development is even more concentrated, with two
developers doing the lion’s share of commits” (Srinivasan and Lee 2017).
While many have commented on the irony that the development of
Bitcoin and other blockchain software is highly centralized, and that even
governance decisions about that software appear to use mechanisms few
would consider decentralized, these do not point to the main operations
of blockchain or cryptocurrency, namely the mining of tokens itself. Over-
hanging the question of decentralized token mining is the implication that
everyone can get rich by mining their own tokens, although we have seen
how this flies in the face of the capital investment mining requires. The
rhetoric skips over facts, so that the cryptocurrency press is flooded with
claims that owning or trading cryptocurrency tokens is decentralized in
some fundamental sense, and that this decentralization of finance is wel-
come for the ordinary investor. Leaving aside the serious concerns anyone
should have about what it means to “invest” in cryptocurrency, the veil of
decentralization hides the massive and troubling ways in which cryptocur-
rency markets are at least as “centralized” as well-established trading venues
such as stock and bond markets.
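The mining arithmetic alone makes the point. In a proof-of-work system, a participant’s expected share of newly minted tokens is roughly proportional to its share of total hash power, so nominally “equipotent” nodes with wildly unequal hardware are anything but. The figures below are hypothetical, chosen only to illustrate the proportionality.

# Hypothetical hash rates in arbitrary units; only the ratios matter.
hash_rates = {
    "hobbyist laptop": 1,
    "dedicated mining rig": 5_000,
    "industrial mining farm": 5_000_000,
}
total = sum(hash_rates.values())
for miner, rate in hash_rates.items():
    # Expected share of block rewards is approximately the share of hash power.
    print(f"{miner}: expected share of rewards ~ {rate / total:.5%}")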
Reports vary depending on many factors, including the number of
Bitcoin that remain in circulation and are available for trading, the corre-
spondence of Bitcoin wallet addresses with individuals and institutions,
and more. However, a commonly cited statistic is that “about 40 percent
of Bitcoin is held by perhaps 1,000 users” (Kharif 2017). Contrast this with
statistics developed by NYU economist Edward Wolff that “as of 2013, the
top 1 percent of households by wealth owned nearly 38 percent of all stock
shares,” and “the 10 percent of households with the highest wealth owned
more than 80 percent of all stock shares” (Kurtzleben 2017). While there is
room to quibble over specifics, including the comparison of global hold-
ings of the single token Bitcoin with the entirety of the U.S. stock market,
it is worth noting that according to the Census Bureau, there were over 125
million U.S. households as of 2023 (“Quick Facts” n.d.), of which 10 per-
cent is approximately 12.5 million, and 1 percent is 1.25 million. Despite
the massive inequality these numbers disclose, it would be next to impos-
sible to use them to argue that cryptocurrency ownership is less centralized
or concentrated than stock ownership: a thousand “households” world-
wide own 40 percent of Bitcoin, whereas even in one of the world’s richest
wording in the Bill of Rights and the Fourteenth Amendment. Privacy has
been formally enshrined in many constitutions written more recently than
the late eighteenth century, as well as in Article 12 of the 1948 Universal Dec-
laration of Human Rights.
Legal scholar Ronald J. Krotoszynski Jr. examines the “highly protean
nature of the concept of ‘privacy’” (2016, 1), a concept that “seems to mean
everything—and nothing—at the same time” (2). He suggests that rather
than resolving this inherent polysemy, we “learn to live with the privacy
hydra” (3) while seeking greater definitional clarity. Privacy issues cross trans-
national borders and thus cannot be adjudicated solely in national terms,
even if “local culture strongly informs and shapes the articulation and pro-
tection of privacy interests within particular legal systems” (4). Krotoszyn-
ski offers three rough conceptual glosses on privacy: (1) “nondisclosure” or
“informational privacy,” “the ability to control the gathering and dissemi-
nation of personal information, whether by the government or other per-
sons” (191n24); (2) “dignity,” according to which “everyone has inherent
dignity and the right to have their dignity respected and protected” (xiii,
quoting the Constitution of South Africa); and (3) “autonomy,” “the right
of the individual to be self-regulating in matters of central importance to
happiness and identity” (6).
Privacy, even more than freedom of speech, is of signal interest to nearly
everyone concerned with the intersection of human and civil rights with
digital technology. While independent thinkers, activists, scholars, and tech-
nology users tend to embrace thick conceptions of privacy that relate in a
variety of ways to the polysemy of the word “privacy” that Krotoszynski
notes, digital rights activists and technology promoters tend to use the word
in a different way. They consider privacy to mean absolute opacity to demo-
cratic regulation, regardless of whether democracies have legitimate interests
and have used legitimate means to exercise their oversight responsibilities.
In the previous chapter we explored much of this when discussing encryp-
tion and anonymization technologies. The advocates of these technologies
believe that democracy is a sham and that all governments are inherently
authoritarian. Therefore, the reasoning goes, ordinary people must take advantage of all avail-
able technology to block everyone from observing everything they do.
Many of the most prominent digital rights organizations have seized on
privacy as a value on which they can capitalize to garner public support.
This is in part because freedom of speech is more readily recognized now
as a value that has been distorted so fully that its current form seems to serve
those who want to eradicate democracy more than those who support it.
As legal scholar Woodrow Hartzog puts it, “It turns out that a broad and
singular conceptualization of privacy is unhelpful for legal purposes. It
guides lawmakers toward vague, overinclusive, and underinclusive rules. It
allows industry to appear to serve a limited notion of privacy while leaving
people vulnerable when companies and people threaten notions of privacy
that fall outside the narrow definition. And it often causes people who dis-
cuss privacy in social and political settings to talk past each other because
they don’t share the same notion of privacy” (2021, 1679).
As with net neutrality, what internet freedom activists and mainstream
civil rights activists (including lawyers and legal scholars in both camps)
mean by privacy is almost entirely different. Krotoszynski and other legal
scholars’ subtle shadings of meaning attached to privacy do not survive in
cyberlibertarian discourse. Instead, government is put on one side and pri-
vacy is put on the other, effectively turning privacy into anti-government
dogma, as in the “Cyphernomicon” (introduced in chapter 2). The initial
item in the opening section of the “Cyphernomicon” is labeled “The Basic
Issues,” and the first of these is “Great Divide: privacy vs. compliance with
laws” (May 1994), which could not make clearer how digital rights advo-
cates construe the issue. It is worth noting that none of the meanings of
privacy found in world constitutions, as detailed by Krotoszynski and others, casts the right in
these terms. Privacy and surveillance are critical issues for all users of digital
technology, especially for members of racial, ethnic, and other minorities
(Benjamin 2019; Browne 2015). But what these words mean in the hands of
critics and activists authentically concerned with civil rights is very differ-
ent from what they mean in cyberlibertarian discourse.
One of the most troubling ways cyberlibertarianism influences discus-
sions of privacy is that most activism around privacy is directed at govern-
ment surveillance. Reading the position papers of EFF, Fight for the Future,
and all the others, one would be forgiven for thinking surveillance is a
priori an activity that only governments perform. For institutions that pri-
oritize “innovation,” the solution to privacy problems is to give more control
over to corporations and technology, while disempowering governments.
Nowhere is this clearer than in public discussions of facial recognition and
other forms of biometric surveillance. Fight for the Future sponsors a proj-
ect called “Ban Facial Recognition,” which rightly describes the problem
are seeking to create the impression that this innovative technology is some-
how ill-suited to meeting the tax-reporting requirements that apply to tradi-
tional banks and brokerages” (2021). Goldstein’s analysis is exactly correct,
not least in her pointing out that what Congress set out to do is to hold
cryptocurrency traders and trading platforms responsible for paying taxes,
the same way they are when trading more established instruments, and
using the same methods for doing so. Goldstein is also right that the only
reason to object to a leveling of the playing field is that cryptocurrency is
taking advantage of gaps in existing regulation to profit off what is essen-
tially cheating on taxes.
Even more, she is right that industry used a “combination of jargon and
threats” to stave off the regulations. At the core of these threats was the lan-
guage of privacy and surveillance. One of the most extreme examples could
be found in the crypto industry publication CoinDesk, which published an
opinion piece by Marta Belcher as part of its “Privacy Week” that declared
the tax provisions “unconstitutional” (2022). She called the bill “warrant-
less surveillance of sensitive financial information” that “violates the Fourth
Amendment of the US Constitution” while declaring that “financial pri-
vacy is not bad or illegal. To the contrary, it is essential for civil liberties.”
She also suggested that the U.S. Supreme Court needs to reexamine its 1974
Burrows v. Superior Court decision about bank secrecy—without mentioning
that, were the Court to do as she suggests, all reporting of financial trans-
actions to the IRS would be declared unconstitutional, a result that would
fulfill the decades-old dreams of right-wing anti-tax conspiracy theorists.
Similarly, Fight for the Future responded to the infrastructure bill with
a social media campaign and a website called DontKillCrypto.com. The
group declared that crypto exists “to create alternatives to Big Tech” and
the infrastructure bill would “undermine human rights and free expres-
sion, and create harmful surveillance requirements for artists & creators”
(Fight for the Future 2021). The bill “mandates mass surveillance of the
crypto-economy in the name of reducing tax avoidance. This puts many
fundamental cryptocurrency participants in an impossible position,” the
group claims. Therefore, resisting the bill is about “the United States’ abil-
ity to participate in cryptocurrencies and a decentralized future that puts
the rights of people above the exploitative and manipulative business mod-
els of Big Tech.” It is scarcely possible to measure the dishonesty in these
statements, which defend the practices of one of the most exploitative and
One of the main rallying cries for both digital technology promoters and
digital technology companies is “free speech,” even though these interests
are sometimes portrayed as different from each other. The idea that digital
technology does not merely promote free speech, but is free speech, remains
fundamental to the cyberlibertarian construction of technology as libera-
tory. This is despite the fact that free speech and “censorship” must be re-
framed so that they mean something quite different from what the major
thinkers who put freedom of speech at the center of democracy had in mind.
This framing, as with so many other matters of cyberlibertarian ideology,
does most of the argumentative work before the discussion even begins. As
one of the legal scholars who has worked most closely on this issue, Mary
Anne Franks, puts it in a 2019 essay, “In a 2019 survey conducted by the
First Amendment Center of the Freedom Forum Institute, 65 percent of
respondents agreed with the statement that ‘[s]ocial media companies violate
users’ First Amendment rights when they ban users based on the content of
their posts.’ That is, a majority of respondents believe that the First Amend-
ment applies to private actors. This belief animates the multiple lawsuits that
have been filed against companies such as Twitter and Google for alleged
free speech violations and fuels the increasingly common claim that social
media platforms are censoring conservative viewpoints.” She goes on:
But as the text of the First Amendment itself makes clear, this belief is wrong
as a matter of law. The First Amendment states that “Congress” shall make
no law abridging the freedom of speech: the right to free speech, like all
rights guaranteed in the Bill of Rights, is a right against the government.
While the reference to Congress is now understood to include a wide range
of government action, the “state action” doctrine maintains the fundamental
distinction between governmental and private actors. Private actors are not
subject to the restraints of the First Amendment except in the rare cases
when they perform a “traditional, exclusive public function.” (2019b)
This is not some arcane legal point with which only scholars should be
concerned. Rather, the First Amendment is a remarkably “fundamental”
statement of principle (see also Franks 2019a) around which democracy
is said to be organized, despite the way that many of its most fervent pro-
moters dismiss democratic governance as an important value. The First
Amendment exerts a “gravitational pull” or “cultural magnetism” that even
outside of digital technology, in legal scholar Daniel Greenwood’s words,
“threatens to swallow up all politics” (1999).
Ordinary citizens who accept this argument on its face may unwittingly
grant even more power to nongovernmental actors, regardless of how much
power those actors already possess. It is no accident that as early as 2012,
Twitter described itself as “the free speech wing of the free speech party”
(Halliday 2012), despite its product’s tendentious relationship to freedom
of speech as a legal concept. If tech companies’ claims were taken seriously,
newspapers would be unable to operate at all. This is because the simple
decision to edit the paper, to print the work of some reporters and not
others, to print some letters and not others, would be a violation of “free-
dom of speech.” The only allowable speech platforms would be those that
permitted everyone to speak, without regard for the means they were able
to bring privately toward amplifying that message.
One of the fascinating fractures between “digital rights” promoters and
the companies they generally represent is over freedom of speech. In a 2014
document that seems quite bizarre a decade later, EFF chastised Twitter for
complying with court orders to take down content in countries “where they
do not have significant assets or employees” (Galperin 2014). Twitter, EFF
claimed, was “stepp[ing] down from the Free Speech Party.” EFF’s legal
reasoning is quite strange, but its consequences are even stranger: Twitter,
in EFF’s opinion, must display any content its users choose to post, every-
where, regardless of government policies.
This policy can be understood in a different fashion when viewed from
the perspective of popular sovereignty. EFF’s view is that even if a govern-
ment is fully democratic in form, its citizens should have no power whatso-
ever to regulate how Twitter operates. Further, unless Twitter has an office in
that country, the country must accept Twitter’s product and all communica-
tions that occur on it. Even though EFF is criticizing Twitter, it is difficult
to miss the absolutist deregulatory impulse at work. EFF is using framing
and pressure to prevent any company from complying with democratic laws.
The same piece calls “on users to remain vigilant, using resources such as The
Chilling Effects Project to keep Twitter honest, specifically citing the need
to watch out for Twitter caving to government pressure to censor overtly
political speech.” The EFF nods at the central concern of free speech juris-
prudence, “overtly political speech,” to temper the antidemocratic thrust
of its position.
As Franks argues, while a general orientation toward less rather than
more interference with user speech may be a welcome principle in content
moderation, platforms are under no obligation to “allow” any specific per-
son to use or not use their product. Indeed, this inversion of democratic
principle—under which we now learn that it is implicit in the Constitu-
tion that only platforms that allow unmoderated speech should be fully
legal—is so absurd as to rarely be said out loud (not least because of the extreme
amounts of content moderation required to keep child abuse imagery and
other fully illegal materials from swamping any such platform; see Gillespie
2018; Roberts 2019). Yet it remains the default conceptual argument that
digital rights organizations claim to stand by.
The free speech framing is critical because it takes off the table the fun-
damental question of who decides whether social media platforms like X
are legitimate businesses to operate and/or legal technologies that can oper-
ate without a license. Democratic governments highly regulate many forms
of communications technology, and in fact, those with the most reach,
such as television and radio, have often been the most subject to regulation
and licensing. Given the tremendous power of social media, it is unclear
whether it fits into a democratic polity, especially without oversight. Despite
what advocates frequently say, First Amendment jurisprudence in the United
States is not absolute, even though it is the closest to “absolute” free speech
protection in the world. The exceptions include not just those concerning private spaces,
as Franks and others have noted. Although in general what we call “public
spaces” are required to allow any legal speech across the board, govern-
ments are permitted to, and frequently do, impose what are called content-
neutral time, place, and manner restrictions on speech even in public forums.
This is so central to First Amendment jurisprudence that the general prin-
ciple is considered to be solid and generally untouchable law.
A report prepared for the U.S. Congress in 2014 made clear these and
other principles of exception to the First Amendment. Kathleen Anne Ruane
explains that “some public spaces, such as streets and parks, are known as
‘traditional public forums,’ which means that they generally are open to all
people to express themselves by speech-making, demonstrating, or leaflet-
ing, and the like” (2014, 7). In these forums, she goes on, “the government
may regulate speech as to its time, place, and manner of expression, so
that, for example, two demonstrations do not occur at the same time and
place” (8). Further:
Even speech that enjoys the most extensive First Amendment protection
may be subject to “regulations of the time, place, and manner of expression
which are content-neutral, are narrowly tailored to serve a significant gov-
ernment interest, and leave open ample alternative channels of communica-
tion.” In the case in which this language appears, the Supreme Court allowed
a city ordinance that banned picketing “before or about” any residence to
be enforced to prevent picketing outside the residence of a doctor who per-
formed abortions, even though the picketing occurred on a public street.
The Court noted that “[t]he First Amendment permits the government to
prohibit offensive speech as intrusive when the ‘captive’ audience cannot
avoid the objectionable speech.” (9)
These rarely considered aspects of free speech jurisprudence are highly rele-
vant to the debate about free speech and digital technology. Even though
social media platforms like X and Facebook are wholly owned by private
companies, the “spaces” they offer are taken by digital advocates to be func-
tionally the same as “public forums” as defined by the Supreme Court. But
neither case law nor statute supports this contention.
Further, even in the unlikely case that the noncommercial parts of the
internet were to be declared “public forums,” the question of content-
neutral time, place, and manner restrictions should come into play. Digital
rights promoters have tried to keep this line of reasoning off the table, but
it deserves closer examination. U.S. public forum jurisprudence depends on
the notion of physical property. In general, to be a public forum, the people
must own a specific place or building, and it is the public ownership of the
property that requires speech protections. The implicit cyberlibertarian doc-
trine is that there is an infinite supply of such space, that ownership does
not matter, and that regardless of who sets up the space, who owns it, and
so on, it must be kept free of (speech) regulation. Buried within this logic
is the repudiation of the content-neutral time, place, and manner doctrine,
due to the nonphysical, unowned nature of the “property” involved.
Suppose, prior to the advent of the internet, someone were to propose the
creation of a massive, person-to-person, group-to-group public square in
which all speech were to be allowed (although even here the specific ex-
ceptions to free speech would have to be respected). It would be up to the
democracy to decide whether to create that space, how to create it, and when
it would and would not operate, and so on. We see this all the time in more
limited ways: local governments create “free speech walls,” for example, but
may choose to paint them over periodically, to add or remove space from
them, and so on. This is up to the people to decide through their demo-
cratically elected representatives.
“Digital rights” advocates subtly turn this upside down. Their view is
that anyone may erect “free speech walls” that float free of property own-
ership, and governments have no say over this, nor do they have any say
over the speech conditions on these walls. In other words, governments
have no control over the creation and designation of “public forums.” This
means that governments do not have the power to manage public spaces,
which is one of the core functions of democratic governance. This is, at its
heart, a resurgence of the anarcho-capitalist doctrine of “permissionless
innovation” (see chapter 2).
Now imagine an alternative scenario. Suppose that, as with physical space,
governments retained the power to decide whether particular online spaces
are “public.” In the case of X, Facebook, and the like, this scenario raises a
profound question: why must governments allow these companies to run
what are thought of as public forums, despite not having the power we
associate with governmental ownership and oversight of such forums?
Another way of asking this question is as follows: Are purportedly public
forums like X and Facebook legal to begin with? And even if they are de
facto legal because nobody thought to ask this question when the compa-
nies were started, should they remain legal?
Given the destructive and antidemocratic power of social media, it seems
appropriate to take seriously Zuboff’s (2019, 2021) suggestion that digital
technology and social media constitute a form of behavioral management
that democracies should consider outlawing.
An alternative approach would be to consider licensing (Pasquale 2021),
which was done to great effect in the United States with regard to broad-
casting. This seems to be the intent behind the support for net neutrality
as described in the last chapter, although it has little to do with the specific
policies that go by that name.
It is no accident that agitation over Section 230, “free speech,” and “cen-
sorship” online keeps off the table the question of the legitimacy and legal-
ity of purportedly “public” speech forums, whatever their negative effects
might be and no matter how fully they are documented. The free speech
argument for digital technology is a fundamental backstop that makes it
nearly impossible for democracies to consider the central question: To what
extent must democratic polities allow the proliferation of technologies whose
function is to make democratic sovereignty difficult or even impossible?
It is also no accident that the most vociferous advocates for “free speech
online” like EFF and the technology arm of ACLU refuse to reflect on re-
cent U.S. Supreme Court doctrine that increasingly equates money and
corporate action with speech. The ACLU has sided with corporations in
this regard, which has raised profound questions for many longtime ACLU
supporters about the organization’s commitment to democratic values. For
decades, Daniel Greenwood has noted the expansion of free speech jurispru-
dence in the United States, arguing that “the First Amendment principle of
abstention has expanded beyond a program of making politics safe to be-
come a primary vehicle in a post–New Deal attempt to reduce the scope of
conscious collective control over the market” (1999, 661). “The First Amend-
ment, understood in this way as a fundamental limitation on the scope
of government, has become the locus of a new Lochnerism,” Greenwood
writes, “or rather, a revival of the old Lochnerism under a new doctrinal
label.” Indeed, “First Amendment Lochnerism” has entered the vocabulary
of legal scholars, including some who work on digital technology issues, in-
cluding Tim Wu: “Scholars and dissenting judges have critiqued parts of
the Supreme Court’s First Amendment jurisprudence as ‘First Amend-
ment Lochnerism.’ The critique suggests that the protection of commer-
cial speech has become a means for the judiciary to strike down economic
regulation at will, creating a contemporary equivalent to the substantive due
process theories relied upon by the Court in Lochner v. New York” (Wu 2019).
No less a mainstream conservative than Supreme Court Justice William
Rehnquist, when he was still an Associate Justice, referred to Lochnerism
in a famous 1980 dissent in Central Hudson v. Public Service Commission as
a “bygone era . . . in which it was common practice for this Court to strike
down economic regulations adopted by a State based on the Court’s own
notions of the most appropriate means for the State to implement its con-
sidered policies.” He also called it a “discredited doctrine” that labels “eco-
nomic regulation of business conduct as a restraint on ‘free speech.’”
Note that what is at issue in the idea of Lochnerism is using free speech
as a way of describing economic activity, and then using that description
to prevent regulation of economic activity. (Somewhat suggestively for the
politics of cyberlibertarianism, Central Hudson was decided 8–1 by the gen-
erally liberal Court, who determined that restrictions on some commercial
speech violated the First Amendment. Rehnquist’s dissent is the only opin-
ion to mention Lochner—perhaps because the case had long been a talk-
ing point in right-wing legal circles but was considered beyond the pale for
most mainstream legal scholars.) This goes far beyond the famed “money
is speech” doctrine that many feel the Supreme Court made into case law
with its Citizens United decision, by entirely blurring the line between speech
and action. Such a distinction is fundamental to free speech law. It is only
by dint of being describable as speech that something gains First Amend-
ment protections, which seriously restricts the way democratic governments
can regulate it. Even though in a philosophical sense, the line between
speech and action can be hard to formalize, it is only because the two can
be distinguished in a rough sense that free speech doctrine exists at all. If
all speech were action and all action were speech, democratic governance
would be impossible.
Unfortunately, the power of digital technology has made it a vanguard site
for reinterpreting action as speech. The most obvious place to see this is in
the doctrine that “code is speech.” This holds that because computer pro-
grams are made of code that looks something like human language, every-
thing done with computer code deserves First Amendment protections—
even though the whole point of computer programs is to do things (i.e., to
take actions). EFF and other digital advocates routinely suggest that “code
is speech” is an obvious and well-established legal principle. Apple made
this claim in court filings in 2016, when it said it had a First Amendment
right not to provide the FBI with a way of unlocking, under legal warrant,
the iPhone of a suspect in the 2015 San Bernardino terror attack.
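The distinction at stake can be made concrete with a small illustration of my own (it appears nowhere in the book, the court filings, or EFF's advocacy, and the file name is invented for the example): the same handful of characters can be quoted and discussed, which looks like expression, or executed, at which point they perform an action in the world.

# A minimal, hypothetical illustration: the same characters treated first as
# expression (quoted and printed) and then as conduct (executed).
program_text = 'open("visitor_log.txt", "a").write("door unlocked\\n")'

print(program_text)   # quoting the code: plainly expressive activity
exec(program_text)    # running the code: appends a line to a real file on disk

It is the capacity of the executed line to change the state of the world, not its resemblance to prose, that the "code is speech" doctrine asks us to set aside.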
“Code is speech” is a remarkable and disingenuous interpretation of legal
philosophy. EFF, arguably its most vibrant defender, claims that the case
math would agree with. The more one thinks about it, the more curious it
becomes that there is “no meaningful difference” between these phenomena.
Things that are not meaningfully different can also be called “the same.” If
music, mathematical equations, and computer programming are the same
thing as human languages like German and French, why do they occupy
such different roles in our societies? German and French are functionally
equivalent to English, Greek, Latin, Swahili, and Japanese, as they serve
much the same roles and are taught and learned in very similar ways. Yet
there are no human societies that speak music, mathematical equations, or
C++ as their primary means of communication.
The ruling is worth dwelling on because it is the one that EFF and other
advocates insist has “established” that “code is speech.” Yet it does not even
approximate a thoughtful or learned opinion on the complex relationships—
especially in the sphere of law—between human languages and program-
ming code. If that question were ever to be adjudicated in a court of law,
unlike in the courts that heard Bernstein and the cases that followed it, it
would be necessary to include a wide range of experts in hearings—not
just the computer advocates who saw in Bernstein an opportunity to press
for the anarchic anti-government politics to which many of them subscribed.
Consider this from the more limited appeals ruling by Betty Fletcher:
“Cryptographers use source code to express their scientific ideas in much
the same way that mathematicians use equations or economists use graphs.
Of course, both mathematical equations and graphs are used in other fields
for many purposes, not all of which are expressive. But mathematicians and
economists have adopted these modes of expression in order to facilitate the
precise and rigorous expression of complex scientific ideas. Similarly, the
undisputed record here makes it clear that cryptographers utilize source code
in the same fashion.” Thomas Nelson’s dissent is even stronger. He writes
that he is “inevitably led to conclude that encryption source code is more
like conduct than speech. Encryption source code is a building tool. Aca-
demics and computer programmers can convey this source code to each
other in order to reveal the encryption machine they have built. But, the
ultimate purpose of encryption code is, as its name suggests, to perform
the function of encrypting messages. Thus, while encryption source code
may occasionally be used in an expressive manner, it is inherently a func-
tional device.”
In its original form, it meant that the Usenet software (which moves mes-
sages around in discussion newsgroups) was resistant to censorship because,
if a node drops certain messages because it doesn’t like their subject, the
messages find their way past that node anyway by some other route. This is
also a reference to the packet-routing protocols that the Internet uses to direct
packets around any broken wires or fiber connections or routers. (They don’t
redirect around selective censorship, but they do recover if an entire node is
shut down to censor it.)
The meaning of the phrase has grown through the years. Internet users
have proven it time after time, by personally and publicly replicating informa-
tion that is threatened with destruction or censorship. If you now consider
the Net to be not only the wires and machines, but the people and their social
structures who use the machines, it is more true than ever. (Gilmore 2013)
One must read carefully to detect the subtle shift in meaning of “censorship”
that happens here. Gilmore is discussing Usenet groups, which he helped
create by inventing the widely used alt.* hierarchy. He is characterizing as
censorship a wide range of activities that are nongovernmental in origin.
These activities may be legitimate restrictions of speech, such as blocking
child abuse imagery, which was widely available on earlier internet-based
systems like Usenet but has exploded in volume in recent years. Other activi-
ties have nothing to do with speech per se, but instead with blocking the
operation of software—which is to say, with blocking actions.
This elaboration of “censorship” into the opposite of its usual meaning
becomes most evident when Gilmore talks about packet-routing proto-
cols that direct traffic around broken connections. This is an inbuilt design
feature of internet technology, which is typically attributed to the need
for a communication network that could survive a nuclear attack. But the
physical removal of nodes in a communication network that provides a
software platform (or even an entire protocol, as Usenet was) has little to
do with the legal concepts of free speech and censorship. Rather, it is better
understood as a technical means of forestalling government regulation of
technology by labeling such regulation “censorship.”
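The property Gilmore invokes is, at bottom, ordinary path-finding over a network with redundant links, and a toy sketch (mine, with invented node names, not drawn from Gilmore or from any real routing protocol) shows how little it has to do with speech: traffic simply reaches its destination by another route when one relay disappears.

# A toy sketch of "routing around damage": breadth-first search over a small
# mesh, ignoring any node that has been removed (failed or shut down).
from collections import deque

def find_path(links, src, dst, removed=frozenset()):
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == dst:
            return path
        for nxt in links.get(node, ()):
            if nxt not in seen and nxt not in removed:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no route survives

links = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"], "D": ["B", "C"]}
print(find_path(links, "A", "D"))                 # ['A', 'B', 'D']
print(find_path(links, "A", "D", removed={"B"}))  # reroutes: ['A', 'C', 'D']

Nothing in this behavior distinguishes a node that fails from one that is lawfully taken down; calling the reroute "censorship resistance" is a political description of a purely mechanical property.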
Recently, the claim has been amplified through its association with crypto-
currencies and blockchain technology. Public, permissionless blockchains
are theoretically decentralized, so that the software can be run by anyone
with sufficient computing power and network connectivity. Therefore, there
is not necessarily one person or group responsible for running the soft-
ware in a specific area. At first glance, it is not clear how a governmental
authority might restrict its operation. Regulating or prohibiting block-
chain software is difficult in practice, much like the protocols on which
the internet and web operate. Digital evangelists have named this property
“censorship resistance.” They believe that any attempt to manage how the
software runs would be censorship, regardless of whether it involves any-
thing recognizable as what ordinary people and ordinary law call “speech.”
This in turn licenses typical antigovernment agitation and digital excep-
tionalism. Despite the avowed intention of many in the cryptocurrency
and blockchain communities to bypass or even eliminate democratic gov-
ernance, any attempt by democracies to limit technological and economic
power can be cast as doing the one thing democracies are said to be
prohibited from doing—censoring “free speech.”
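What enthusiasts mean by the term is narrow and technical, and a deliberately crude sketch (my own, not any actual blockchain implementation; the operator names and transaction are invented) captures most of it: an append-only record is fully replicated across independently operated nodes, so removing any one operator does not remove the record.

# A deliberately crude sketch of replicated, append-only record-keeping.
import hashlib

def block_hash(prev_hash, payload):
    return hashlib.sha256((prev_hash + payload).encode()).hexdigest()

class Node:
    """One independently operated copy of the ledger."""
    def __init__(self):
        self.chain = [("GENESIS", block_hash("", "GENESIS"))]
    def append(self, payload):
        self.chain.append((payload, block_hash(self.chain[-1][1], payload)))

nodes = {name: Node() for name in ("operator_a", "operator_b", "operator_c")}
for node in nodes.values():
    node.append("transfer: alice -> bob, 5 units")

del nodes["operator_a"]  # one operator is shut down
surviving = [n for n in nodes.values() if n.chain[-1][0].startswith("transfer")]
print(len(surviving), "full copies of the record remain")

What survives here is a record of funds changing hands, not anything an ordinary person or ordinary law would recognize as speech; the leap from this replication property to the vocabulary of censorship is exactly the rhetorical move at issue.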
The notion of blockchain as “censorship resistant” is widespread among
enthusiasts, and is also occasionally mentioned by more apparently neutral
commentators. As early as 2011, EFF issued one of its typical cryptocurrency
explainers that asserted Bitcoin was “a step toward censorship-resistant digi-
tal currency” (Reitman 2011), though it made no effort to explain why cen-
sorship or resistance to it is an appropriate vantage point from which to view
financial transactions. In recent years, blockchain syncretists have become
bolder, as seen in a 2021 piece in CoinDesk called “Bitcoin’s Censorship-
Resistance Was a Step Change in History” (Dale 2021). Drawing on Gil-
more’s chestnut and central theorists of free speech such as John Stuart Mill
and John Milton, the wildly revisionist piece quotes early Bitcoin entrepre-
neur Adam Ludwin’s statement that “nothing can stop me from sending
bitcoin to anyone I please.” It then explains that “censorship resistance is a
jargony way of saying speech, or any other activity, that [sic] can’t be vetoed
or stopped. . . . Censorship resistance is also a step change in the history of
political philosophy.”
Dale’s insertion of the phrase “or any other activity” into an otherwise
reasonable definition of censorship might be overlooked by readers, mak-
ing it easier to miss the fact that what is being shielded from “censorship” here
is the transmission of funds from one person to another. Thus the core
action involved in economic activity is somehow assimilated to the core
activity in political discourse (as in Citizens United), and the desirability of
speech for the latter is used to prevent political speech itself from affecting
On this point I stand strong: censorship is not a legal term, nor is it the sole
domain of government actors or synonymous with the First Amendment.
Throughout history, censorship has been enacted by royals, the Church, the
postal service, the Inquisition, publishers, the state, and yes, corporations.
Though the details differ, censorship exists in some form in every locale
throughout the world. Throughout history, censorship has most often served
those at the top, allowing them to consolidate power while silencing the
voices of anyone who might engage in protest. But the struggle for freedom
of expression is as old as the history of censorship, and it isn’t over yet.
Haggart and Tusikov noted that “in a liberal-democratic society, there’s little room for
debate once you’ve pulled the ‘censorship’ pin on the free-speech grenade.
It’s a conversation-ender in the same way calling someone a Communist
was during the Cold War” (Haggart and Tusikov 2021). They describe a
spring 2021 debate about a bill that “would have allowed, among other
things, the government’s arm’s-length telecommunications regulator to re-
quire certain social media platforms to prioritize Canadian content posted
to their platforms” (which it should be noted is already a licensing condi-
tion placed on other Canadian media companies):
What could have been a nuanced argument over whether these specific reg-
ulations were appropriate, or how to amend the provision, quickly devolved
into free-speech total war.
The digital rights group Open Media called it a “dangerous censorship
bill.” The Internet Society, whose corporate members include Google and
Facebook, published an open letter signed by some of Canada’s leading inter-
net scholars calling on Prime Minister Justin Trudeau “to stop harming the
Internet, the freedoms and aspirations of every individual in this country,
and our knowledge economy through overreaching regulatory policies that
will have significant, yet unintended consequences for the free and open
Internet in Canada.”
This idea of a free and open internet in which free speech is the guiding
principle is evident in social media companies’ self-portrayal. They sell
themselves as mere technical, passive “intermediaries” facilitating interactions
among users, thereby downplaying the extent to which they themselves
create a heavily structured and content-curated environment, in pursuit of
profit. Even though they are only companies that use the network of the
internet—they’re not the internet itself—and even though their algorithms,
by definition, order and present content in a way that’s just as “unnatural”
as anything a government could propose, they’ve co-opted this ideology to
the extent that regulation of their activities is seen as an attack on the inter-
net itself.
And of course, the language of attacks on the “internet itself” is itself
disingenuous and flexible. Any regulation at all will “break” the inter-
net, despite the inability of EFF and others to explain exactly what these
harms will be, let alone why they trump the principles and powers of demo-
cratic sovereignty to decide what is best for their own citizens.
In July 2012 a group of internet activists drafted what they called the
“Declaration of Internet Freedom.” Like many other similar efforts, this
declaration models itself on political forms of speech. Indeed, the Declara-
tion was deliberately issued to coincide with the July 4 U.S. Independence
Day holiday:
We believe that a free and open Internet can bring about a better world.
To keep the Internet free and open, we call on communities, industries
and countries to recognize these principles. We believe that they will help to
bring about more creativity, more innovation and more open societies.
We are joining an international movement to defend our freedoms be-
cause we believe that they are worth fighting for.
Let’s discuss these principles—agree or disagree with them, debate them,
translate them, make them your own and broaden the discussion with your
community—as only the Internet can make possible.
Join us in keeping the Internet free and open.
DECLARATION
We stand for a free and open Internet.
We support transparent and participatory processes for making Internet
policy and the establishment of five basic principles:
would give up its vigorous protection of its legal intellectual property rights
worldwide. The belief that signing such a document supports human free-
doms, like much computer utopian action and rhetoric, might ultimately
do more harm than good, except to corporations intent on using such
rhetoric for their own profit.
Shawn Powers and Michael Jablonski have written the most thorough
analysis of the discourse of internet freedom available so far, The Real Cyber
War: The Political Economy of Internet Freedom (2015). As they explain,
“Efforts to create a singular, universal internet built upon Western legal,
political, and social preferences alongside the ‘freedom to connect’ is [sic]
driven primarily by economic and geopolitical motivations rather than the
humanitarian and democratic ideals that typically accompany related policy
discourse” (3). They note that at the height of the Obama administration—
which was nearly a partnership with Google (as exemplified in The New
Digital Age, the 2013 volume cowritten by former Google CEO Eric Schmidt
and former Google executive and State Department adviser Jared Cohen)—
Secretary of State Hillary Clinton maintained an “evolving doctrine of
internet freedom.” Despite being “veiled in ideological language,” this was
in fact “the realization of a broader strategy promoting a particular concep-
tion of networked communication that depends on American companies
(for example, Amazon, AT&T, Facebook, Google, and Level 3), supports
Western norms (such as copyright, advertising-based consumerism, and
the like), and promotes Western products” (6). Clinton even proposed an
“expanded interpretation of the UDHR” (7), which includes that vaguely
defined “freedom to connect.”
While the intersection between political sovereignty and this “proposed
new freedom” remained vague, the Obama internet freedom agenda as real-
ized by Clinton “also focused on the economic logic of allowing for greater
transnational flows of information” (8). The authors argue that “Clinton’s
articulation of the benefits of free and open communication on interna-
tional peace, espousing the democratizing power of the internet and the
economic benefits of being online—‘A connection to global information
networks is like an onramp to modernity’—obfuscates geopolitical moti-
vations driving trends toward global connectivity” (9). This nod toward
“modernization theory” carries overtones of white-man’s-burden paternal-
ism toward non-Western countries and realpolitik toward Western democ-
racies. It follows in the tradition of neoliberal economics so ably analyzed
cyberlibertarian myths and causes such as Section 230, net neutrality, and
“censorship” (of U.S.-based technology companies) rather than being con-
cerned about obvious and crucial threats to democratic rule. It also exhib-
its a troubling strain of America First propaganda. Unsurprisingly, Google
and the Internet Society, a digital rights organization, funded the work.
Rather than promoting democracy or civil rights in any legible fashion,
this leading report on “internet freedom” turns out to be a “lobbying tool”
notable “for the absence of anything that would strengthen the authority
to defend the rule of law, or hold platforms to a higher level of account-
ability for their conduct.” Yet again, the apparent promotion of democracy
turns out to be anything but.
“CODE IS LAW”
Lawrence Lessig introduced the concept that “code is law” in the 1999
book Code: And Other Laws of Cyberspace (subsequently revised and reissued
in 2006), one of the earliest scholarly works to argue that the modern tech-
nical environment poses significant challenges for constitutional and rep-
resentative governance. Lessig is widely known as one of the academic and
legal world’s internet advocates who is not overtly a member of the right
wing. He frequently opines on issues of intellectual property, copyright, and
politics in ways that many on the left consider hospitable to their politics.
In many ways they are correct.
Yet Lessig is an odd candidate for the position of non-rightist proponent
of rights in the digital era. Evgeny Morozov describes the model Lessig ad-
vances in Code: “Lessig’s model assumes four forces—market, norms, laws,
and code—and, to many, it looks innocent and objective enough” (2013d).
But Morozov goes on:
To make full sense of this model, one needs to know where Lessig comes
from intellectually. His framework packs many assumptions about human
behavior, regulation, knowledge, and political economy. That Lessig matured
at the University of Chicago Law School, that he was profoundly influenced
by the legal theorist and judge Richard Posner (Lessig clerked for him), that
the code framework is rooted in the law and economics tradition of legal
theory—a tradition that is very friendly to neoliberalism—all of this matters.
Just like there’s nothing natural about the discourse of law and economics,
In this case, although we may know that human beings in real markets are
not purely rational agents, they still pursue what most right-wing thinkers
term “maximal utility.” This refers to the pursuit of the greatest good for
oneself (or, as we might put it, the greatest wealth for oneself), following more or less in
the footsteps of John Stuart Mill and Jeremy Bentham.
Lessig’s claim that “code is law” is built on his belief, mentioned by
Morozov, that there are four modalities for regulating human behavior:
norms, markets, laws, and architecture. As the legal scholar Viktor Mayer-
Schönberger writes in a thoughtful critical analysis of this claim, “Lessig is
less interested in norms, laws, and markets, and more in what he sees as an
overlooked fourth mode of regulation: architecture. Following in the foot-
steps of a long line of theorists, he suggests that the tools we use to interact
constrain us” (2008, 716–17). Among these tools, digital media (aka “cyber-
space”) is special: “Because cyberspace is plastic—a space that we can shape
like no other place—and it constrains human behavior, designing cyber-
space is a very powerful regulating activity. It produces what Lessig calls
‘West Coast Code,’ software code that regulates human behavior” (717).
Lessig contrasts “West Coast Code” with “East Coast Code,” which is to say
“laws,” which, in Mayer-Schönberger’s words,
In many ways this analysis is hard to disagree with. It comes down to the
view that engineers are creating de facto regulatory architectures that
constrain and even determine human behavior outside the channels we as
a society have decided are appropriate for this activity: that is, governments.
(Lessig’s emphasis on computer code notwithstanding, many corporate
practices serve as de facto regulations and laws in many spheres of conduct,
even in democracies.)
Cyberspace presents something new for those who think about regulation
and freedom. It demands a new understanding of how regulation works and
of what regulates life there. It compels us to look beyond the traditional
lawyer’s scope—beyond laws, regulations, and norms. It requires an account
of a newly salient regulator.
That regulator is the obscurity in the book’s title—Code. In real space we
recognize how laws regulate—through constitutions, statutes, and other legal
codes. In cyberspace we must understand how code regulates—how the soft-
ware and hardware that make cyberspace what it is regulate cyberspace as it
is. As William Mitchell puts it, this code is cyberspace’s “law.” Code is law. (6)
There is a trivial sense in which Lessig’s first paragraph is certainly true, but
no truer than it has been since the Constitution was ratified. Law must
adapt to changing circumstances, whether by metaphorical application of
existing law, enacting of new law, or both. (See Wallace and Green 1997 for
an account of this process with specific reference to digital media.) Beyond
that, there is an obvious way that Lessig’s point is refuted by widely known
facts. It is bizarre to suggest that an activity’s taking place on the inter-
net has much, if any, bearing on its relationship to law, as existing laws and
regulations already cover activities that take place on the internet. This is not
mere technicality. Laws and regulations against false advertising apply to
internet advertisements the same way they do to any other advertisements.
The same is true for laws against libel and slander. Communications used
in furtherance of criminal acts or criminal conspiracies are searchable by
warrant just like any other communications. Buying something illegal over
the internet is exactly as illegal as it is in person. Tax laws apply to corpo-
rations regardless of the modality in which they conduct business. So the
boldest strokes of Lessig’s rhetoric are seriously misleading: “the software
and hardware that make cyberspace what it is” do not “regulate” the inter-
net to the exclusion of the same laws that regulate all our other activities.
In the 2006 version of Code, this critical passage is replaced with a more
nuanced discussion:
Lawyers and legal theorists get bothered, however, when I echo this slogan
[that “code is law”]. There are differences, they insist, between the regulatory
effects produced by code and the regulatory effects produced by law, not
the least of which is the difference in the “internal perspective” that runs
with each kind of regulation. We understand the internal perspective of
legal regulation—for example, that the restrictions the law might impose on
a company’s freedom to pollute are a product of self-conscious regulation,
reflecting values of the society imposing that regulation. That perspective is
harder to recognize with code. It could be there, but it need not. And no doubt
this is just one of many important differences between “code” and “law.”
I don’t deny these differences. I only assert that we learn something useful
from ignoring them for a bit. Justice Holmes famously focused the regulator
on the “bad man.” He offered a theory of regulation that assumed that “bad
man” at its core. His point was not that everyone was a “bad man”; the point
instead was about how we could best construct systems of regulation.
My point is the same. I suggest we learn something if we think about the
“bot man” theory of regulation—one focused on the regulation of code. We
will learn something important, in other words, if we imagine the target of
regulation as a maximizing entity, and consider the range of tools the regula-
tor has to control that machine. (5–6)
governments will move from directly constraining behavior with East Coast
Code to indirectly constraining behavior with laws that regulate West Coast
Code. Such indirect regulation is much less transparent and thus less likely
to face the stiff public opposition that has kept the government within
our society’s system of checks and balances. Lessig is also worried that the
plasticity of software allows governments to constrain behavior more easily
and to a greater extent than they could through law alone. Corporations will
work with government to change the architecture of cyberspace because
they, too, profit from a more controllable space. Intellectual property rights
can thus be better enforced, advertisements more precisely targeted, and
some of the harsh wind of competition can be more easily avoided through
a more regulable space. (718)
lost: “Lessig’s central fear is that this coalition of producers of East Coast
Code and producers of West Coast Code will replace the values embedded
in the original Internet with ones that reflect their own—values that may
not comport with the preferences of the citizens. Lessig uses intellectual
property, privacy, and free speech as three examples of this potential shift
in values” (718–19). Instead of being concerned with the internet as an
attack on the system of constitutional and representative democracy itself,
he worries that this attack might be muted or constrained by government.
This concern is bolstered by the typical cyberlibertarian idea that there are
“values embedded in the original internet” that we are on the verge of los-
ing. Such a view requires an extremely narrow and self-interested reading
of both social and technological history. If there is such a thing as the “origi-
nal internet,” the only value that can be ascribed to it with some degree
of confidence is “partial survivability after world nuclear war,” a value not
really touched by any of the considerations Lessig raises.
Not surprisingly, Lessig offers a familiar remedy for this problem: “There
may be an antidote, Lessig suggests, in the form of ‘open code’: West Coast
Code that is open and thus not controllable by corporate coders. Such
open code may be less vulnerable to indirect regulation through laws, and
it is certainly less susceptible to corporate desires for control” (719). Yet
nothing of the sort is true. Open source code is widely used by corpora-
tions in the service of control. The fact that it is available for inspection
does not tell most of us—perhaps any of us—much about how it is being
used or give us the power to do much about it.
Underneath this analysis is one grounded firmly in the law and econom-
ics tradition and even the specific analyses of the Neoliberal Thought Col-
lective, even if Lessig does not always frame things this way: “Lessig wants
users to choose. The choice he envisions, however, is a specific one. It is
the choice of consumers selecting goods in the marketplace. Lessig does
not hide this preference; his argument often reflects a strong presumption
for the market. Choice for him is the ability to select from two or more
options. As long as there are options for users, there is competition. Com-
petitive markets ensure that users remain empowered. Choice is the first
foundational value of Lessig’s theory” (721). This is the fundamental precept
of Chicago School neoclassical economics, the law and economics move-
ment, and contemporary neoliberalism. In the guise of providing “choice”
and equating “choice” with “freedom,” many social spaces not previously
CHAPTER 7

CYBERFASCISM

The inchoate and syncretic politics of cyberlibertarianism make
it a potent vector through which right-wing thought spreads.
The rhetorical devices and strategies it uses solicit the support
of many who do not actively identify with the right. Understanding this
ability to garner political assent from beyond its nominally proper base
is one of the main reasons for developing the cyberlibertarian analytical
framework. Cyberlibertarianism has been one of the primary forces help-
ing to shift global politics to the right, though it would be a mistake to see
it as the only force propelling that shift. Analysis of cyberlibertarianism has
so far left unexamined a highly pointed question: what role does cyber-
libertarianism play in encouraging overt fascism and Nazism?
There are important historical and philosophical ties between nominally lib-
ertarian politics and the far right. Commentators from both the moderate
right and across the left have pointed repeatedly to what one writer called
the “libertarian-to-fascism pipeline” (Dougherty 2017; see also Anderson
2011; Fenwick 2019; Slobodian 2019). Some have gone so far as to see polit-
ical libertarianism as the “friendly” or public face of fascism, attracting those
who may not be consciously ready to embrace the hateful core of fascism
(UnKoch My Campus n.d.). Others draw attention to the ways that almost
all anarchist and libertarian philosophies cannot help but create openings
for the far right—what Ross (2017) calls the “fascist creep.”
Even granting these general principles, digital technology (and its promo-
tion) has specific and notable effects in the promotion of far-right politics.
As I discuss in The Cultural Logic of Computation (2009), there are “natu-
ral” affinities between digital technology and the foundations of political
reaction: the conviction that the
dominant salient political factor in the world is how much power a person
or group can accrue to itself, that any action is licensed in the pursuit of
that power and its maintenance once achieved, and ultimately that “might
makes right.” The widespread belief that technological empowerment is in-
herently positive is almost indistinguishable in social terms from the belief
that technological change is inherently politically progressive, or at least
politically welcome. Although this empowerment is said to be attached to
minority or otherwise vulnerable populations, it is not always easy to find it
credibly embedded within larger politics that resist the might-makes-right
perspective.
The Cultural Logic of Computation focused more narrowly on some-
thing like political psychology, the underlying ideas about self and society
that tend to be endorsed by those who strongly identify with computers
and the digital revolution. The connections of this analysis to general ques-
tions of political philosophy are to some extent obvious. A political phi-
losophy that is based solely on the empowerment of the self or the group
with which the self identifies does not align with the values associated with
core democratic politics. These values include sovereignty that is widely
dispersed across all people. The psychological appeal of digital technology
to the least rational parts of our brains and bodies has pushed populations
toward the promotion of more or less authoritarian power and away from
democratic and dissensus-based political theories. Here again the irony must
be noted of the claim that digital technology “democratizes,” despite the
manifest ways it presses so firmly against core democratic values.
It is not surprising that at the rightward extreme of digital technology
proponents there is an overrepresentation of far-right actors, whether they
overtly identify with the far right or simply express far-right ideas without
declaiming their political identities (or, in some important cases, explicitly
disavowing them). For many, perhaps most, digital evangelists, this creates
a political paradox with which they strenuously avoid engaging: if the dig-
ital promotes democracy, how is it that the spread of digital technology has
been nearly coterminous with, and in most cases directly implicated in, the
not disprove their utility for fascism. This can even double down on the
original problem by failing to explain how and why tools intended to con-
nect everyone and magnify their voices will not differentially magnify the
voices of those who already have the most power and the most ability to
manipulate these tools and networks. (See Schradie 2019 on the usefulness
of digital tools for the right and far right; and Eubanks 2011 on the rela-
tive impotence of digital activism for progressive causes.)
They also deflect attention from digital tools by pointing out that other
forms of media, especially right-wing talk radio and cable networks as well
as evangelical broadcasting, all play roles in the spread of fascist propaganda
(Neiwert 2009, 2018). There is no doubt this is true, but that does not absolve
digital media of its role in the growth of worldwide fascism. Indeed, most of
these other media forms predate the World Wide Web, so in raw historical
terms it is the rise of digital media that occurs simultaneously with the move
toward the right. Further, these same lines of denialist argument are often
advanced by those who elsewhere celebrate the power of digital media to
connect, coordinate, and do political work. It is only when critics point out
how useful these tools are for the far right that denialists downplay their
influence. Finally, and perhaps most damning of all, most of these activists
and organizations steadfastly resist every call to examine how their policy
positions, themes, ideas, and issues—all those we have examined so far—
magnify the power of the far right.
As we have seen in previous chapters, all libertarian pseudo-philosophies
are largely inchoate groups of apologies for concentrated power. They leave
significant openings for authoritarian politics and have often been described
as a kind of human face for those politics. In our world, most but not all
authoritarian politics emerge from the far right, and much more so since
the rise of digital technology than prior to it. This is in line with what early
critics of digital technology, such as Mumford and Ellul, anticipated. The
explosion of libertarian pseudo-philosophy is directly complicit with the
rise of fascism.
Murray Rothbard, Ayn Rand, Ron Paul, and other leading anarcho-
capitalists, whose works are often cited for support by cypherpunks and
digital activists, have been accused of being sympathetic to fascist politics
by some of their fellow travelers and critics. This became especially appar-
ent during and after the 2016 election of Donald Trump and with the rise
The paleo-libertarian seed that Ron Paul, Murray Rothbard, and Lew Rock-
well planted in the 1990s has come to bear some really ugly fruit in the last
couple of years as elements of the alt-right have made appearances in various
libertarian organizations and venues. Back in February, alt-right hero Richard
Spencer stirred up a fuss at the International Students for Liberty Confer-
ence in DC after being invited to hang out by a group of students calling
themselves the “Hoppe Caucus.” Hans-Hermann Hoppe, long associated
with the Ludwig von Mises Institute as well as a panoply of racists and anti-
Semites, is perhaps the most popular gateway drug for the alt-right incur-
sion into libertarianism.
all land is privately owned, including all streets, rivers, airports, harbors, and
so on. With respect to some pieces of land, the property title may be un-
restricted; that is, the owner is permitted to do with his property whatever
he pleases as long as he does not physically damage the property owned by
others. With respect to other territories, the property title may be more or less
severely restricted. As is currently the case in some housing developments,
the owner may be bound by contractual limitations on what he can do with
his property (voluntary zoning), which might include residential versus com-
mercial use, no buildings more than four stories high, no sale or rent to Jews,
Germans, Catholics, homosexuals, Haitians, families with or without chil-
dren, or smokers, for example.
Hoppe justifies his remarks by noting that discriminatory property title poli-
cies, including all those he mentions, are currently found “in some housing
developments.” If such notices were merely occasional they might be over-
looked. Instead, throughout the book, Hoppe consistently makes remarks
that seem to target LGBTQ people, Black people, and other historically
discriminated minorities as unfit to be included in his ideal communities.
He also relies on the same highly questionable assertions about race, eth-
nicity, and “intelligence” that are found frequently among far-right extrem-
ists. “Civilization and culture do have a genetic (biological) basis” (184), he
writes. “However, as the result of statism—of forced integration, egalitari-
anism, welfare policies, and family destruction—the genetic quality of the
population has most certainly declined. Indeed, how could it not when
success is systematically punished and failure rewarded? Whether intended
or not, the welfare state promotes the proliferation of intellectually and
morally inferior people and the results would be even worse were it not
for the fact that crime rates are particularly high among these people, and
that they tend to eliminate each other more frequently” (184–85). Defend-
ers of Hoppe justify this kind of remark as truth-telling that is free of bias.
However, thinkers more attuned to the patterns and practices of fascism
and white supremacy will disagree.
Hoppe’s advocates (e.g., Kinsella 2010) argue that he is not personally
biased, racist, or homophobic. His demands for fascist political programs
are deflected as a matter of personal feeling, and we are expected to ignore
the obvious contexts of those political programs in the face of purportedly
good-faith attestations of personal belief. Of course, many fascists insist
that they are “good people” who love their neighbors (Baker 2016), but even
this misses the forest for the trees. Fascism is not just about constructing
and oppressing hated “others.” It is also about replacing democratic sover-
eignty, equal rights, and universal enfranchisement with the economic phi-
losophy of absolutely free markets. This is the heart of fascist philosophy
for both Mussolini’s Italy and Hitler’s Germany, as articulated by Landa
(2010). On these points Hoppe is unambiguous.
This makes “digital fascism a more fluid and ambivalent movement [com-
pared to its non-digital forms], which cannot be fully grasped with actor-
or ideology-centered approaches.” They argue that the most useful frame
for understanding digital fascism is Paxton’s (2004, 218) view of fascism as
characterized by “obsessive preoccupation with community decline, humili-
ation, or victimhood and by compensatory cults of unity, energy, and purity.”
Of course the conceptual resemblance of Paxton’s formulation to Griffin’s
should not be overlooked, nor should their resemblances in practice. “Nar-
ratives of victimhood and imperilment are key to understanding” (Fielitz
and Marcks 2019, 9) fascism in both its digital and non-digital manifesta-
tions. Yet digital media, and social media in particular, “does not simply
offer opportunities for far-right actors to spread their worldviews, but offers
opportunity structures that are particularly beneficial for far-right agency.
Moreover, social media itself (re-)produces orders of perception that are
prone to the fascist rationale” (14).
It is no accident that the most obvious instances of fascism in digital
media appear organized around such narratives of victimhood and imperil-
ment. GamerGate, QAnon, the pickup artists and men’s rights movements,
and much of the alt-right in general (Lyons 2017; Neiwert 2018) all begin
from a position of aggrievement, a posture that has long worried researchers
concerned about the imminence of resurgent fascism in digital contexts. Further,
the manifest ways in which social media is designed (both deliberately and
organically) to engage our "hottest" affective centers and coordinately to
suppress the cooler parts of our minds make it an especially potent vector for
both the development and dissemination of this new kind of fascism. It is not
surprising to find that incidents of violence in the physical world are increas-
ingly tied to inspiration from materials disseminated in social media. Digital
media forms are used to stoke those fires in ways that are hard to replicate
with other forms of media. Older forms of media technology, such as radio
and film, may have played a role in shaping historical forms of fascism,
particularly in the rise of the Nazis and the Italian fascists. Yet broadcast
(radio) and one-to-many distribution (film) could be largely contained within
specific geographical environments, which in turn confined the spread of
fascism's inchoate lust for violence largely within national borders.
Cyberfascism only secondarily recognizes many of historical fascism’s
traditional categories. Although there is no shortage of hate and “othering”
directed at familiar categories of race, gender, and ethnicity, some of the
primary vectors of cyberfascism fall along new lines. Consider one of the
No less troubling is the fact that no criteria are offered for determining
who is a member of each group. This is not to say that true membership
in a fascistic nationality like Fascist Italy or Nazi Germany was a stable
and incontestable quality. As we have seen by watching Donald Trump’s
constantly shifting alliances, fascist nationalism is characterized by chang-
ing definitions of consanguinity that ultimately have more to do with per-
sonal loyalty than actual ancestry. But in “cyberspace,” nativism has an
even more ephemeral and free-floating nature. Rather than primarily being
characterized by kinship, cyberspace nativism depends on an ephemeral
quality of belonging, a group membership that does not even have the bare
supports in material reality one might expect. Becoming a “member” of the
digital “us” seems more a matter of affect and personality than anything
else, although that membership may be qualified in important ways by
other ephemeral qualities, especially one’s ability to code.
In its most anodyne forms, “learn to code” is a common slogan of the
digital age that encourages people to acquire a certain set of skills. But it
morphs quickly into a less savory judgment: that proficiency in coding is
superior to other knowledge sets. Part of the reason for this claimed superiority
is the belief that coding is a natural or necessary skill for dealing with digital
technology, so that "knowing how to code" supposedly provides a better way of
engaging digital media than is available to those of us who are mere users.
Like all protofascist ideological formations, this one is blurry and mobile.
What constitutes "knowing how to code" varies widely from context to
context; proof of one's knowledge is almost always subject to challenge.
Further, those challenges are frequent tools in social contests over the right
to belong or the right to participate in digital spaces, and therefore (for
those most identified with the digital) in society at all. Claims that one's
interlocutor does not know how to code, lacks the requisite "hacking" skills,
or is dishonest about their coding accomplishments are among the first means
used by the participants in protofascist digital spaces to deauthorize any-
one with whom they disagree. In such forums, arguments often arise over
whose coding skills are truly sufficient to allow debaters to speak with author-
ity or to belong at all in digital space.
This substitution of coding skill for blood belonging is not as big a leap
as it may seem. One of the minor themes in contemporary U.S. fascism is
so-called producerism, which is “the idea that the real Americans are hard-
working people who create goods and wealth while fighting against para-
sites at the top and bottom of society” (Berlet 2009, 26; see also Berlet and
Lyons 2000; Mudde and Rovira Kaltwasser 2017). These “parasites at the
top and bottom of society” (typically referring to Jewish people, though by
no means exclusively to them) occupy a structural or role-based type that
appears only secondarily to entail race. Yet as Berlet argues, producerist
narratives almost inevitably assume that "proper citizenship is defined by white males" (Berlet
2009, 26). Thus the apparently deracialized distinction between “producer”
and “person who does not produce” is loaded with the same affective energy
usually reserved for the racial “us” and the racial “them.” The fact that
specific, concrete racial identities seem to be overlooked in this formation
belies the fictive nature of race itself. This is especially true as it occurs in
fascist and protofascist discourse. As a result, nonracial categories can easily
be transferred back into racial ones when necessary or useful. They provide
a kind of “cover” for the protofascist, allowing him to claim he never had
race or ethnicity in mind at all. Instead, he only had the innocent question
of whether a person has the “skill” required to contribute to a discussion.
In other words, whether that person truly is “one of us.”
Producerism is not just one theme among many in the digital fascist bag
of rhetorical tricks. Instead, along with the putative “placelessness” of cyber-
space, it serves as a cornerstone for establishing “digital media” as a place
that is dominated by and organic only to a certain class of people, those
who take their place in digital space by dint of their special, insider status,
which can only be granted by the same people who use producerism as a
criterion for carving up the world into “us” and “them.” One of the most
obvious points of connection between cyberlibertarian dogma and proto-
fascism is found in the ways technology promoters, frequently allied with
the most concerted centers of money and power in digital technology, insist
that journalists, researchers, and ordinary citizens have no right to speak
about the effects of digital technology unless they meet some arbitrary level
of technical facility.
Andreessen Horowitz is a well-known venture capital firm in the tech-
nology world. Both its founders, Marc Andreessen and Ben Horowitz, have
concerning histories with the far right. The firm's best-known
dip into right-wing waters, though, comes from former general partner
Balaji Srinivasan. He frequently takes up tropes from right-wing thought
while, in a manner familiar across much of the contemporary right, simul-
taneously disavows its right-wing content. Srinivasan, along with other
right-adjacent figures like Andreessen himself, former Facebook executive
Antonio García-Martínez, Y Combinator founder Paul Graham and its
president Sam Altman, is among the Silicon Valley voices who most fre-
quently criticize individuals who speak about technology without actu-
ally “making anything.” So when the generally tech-friendly New York
Times journalist Kevin Roose criticized venture capitalists including Marc
Andreessen for their promotion of “critical infrastructure” despite their gen-
eral lack of interest in and knowledge about the topic, Srinivasan (2020a)
tweeted in response “guy who has built nothing thinks he can critique guy
who invented the web browser.” When Roose responded with a barbed
comment about Srinivasan’s crypto startup, Srinivasan (2020b) replied, “You
don’t even understand the industry you think you’re qualified to cover.
Quick, Kevin: what’s the difference between merge sort and quicksort?”
The buried implication that Andreessen’s having been the leader of the
team that developed the Mosaic web browser at the National Center for
Supercomputing Applications in 1992–93 constitutes “building something,”
whereas writing articles for the New York Times is not “building some-
thing,” shows the in-built prejudice and pronounced anti-intellectualism
of much pro-digital discourse. It also fits directly into the main producerist
construction, in which no matter how much “work” people may do, some
forms of work, especially those associated with finance or ideas, are “para-
sitic,” while other work, always characterized as “building,” is “productive.”
Like all protofascist formations, what seems a clear enough distinction on
the surface becomes much vaguer when examined closely. Srinivasan’s taunt
of Roose regarding his knowledge of the difference between two different
algorithmic methods for sorting data shows his commitment to a funda-
mental distinction between the deserving “us” and the undeserving and
parasitic “them,” which is characteristic of protofascism.
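For readers who do not know the reference, the two algorithms Srinivasan names are standard textbook methods for putting a list in order. The following minimal Python sketch is offered only as an illustration of what the taunt invokes; it is not drawn from any source discussed here, and the function names are mine.

    # Illustrative textbook sketches of the two sorting methods Srinivasan invokes.
    # Nothing here comes from the sources under discussion.

    def merge_sort(items):
        """Split the list in half, sort each half, then merge the sorted halves."""
        if len(items) <= 1:
            return items
        mid = len(items) // 2
        left, right = merge_sort(items[:mid]), merge_sort(items[mid:])
        merged, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                merged.append(left[i])
                i += 1
            else:
                merged.append(right[j])
                j += 1
        return merged + left[i:] + right[j:]

    def quicksort(items):
        """Pick a pivot, partition the remaining items around it, sort each side."""
        if len(items) <= 1:
            return items
        pivot, rest = items[0], items[1:]
        smaller = [x for x in rest if x <= pivot]
        larger = [x for x in rest if x > pivot]
        return quicksort(smaller) + [pivot] + quicksort(larger)

    print(merge_sort([5, 2, 9, 1]))   # [1, 2, 5, 9]
    print(quicksort([5, 2, 9, 1]))    # [1, 2, 5, 9]

Both methods produce the same ordering; the distinction Srinivasan gestures at (divide-and-merge versus pivot-and-partition) is the kind of detail covered in an introductory programming course, which is precisely what makes its deployment as a test of who may speak so revealing.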
Some works that advocates of digital politics refer to as articulations of
their principles are, in fact, invitations to fascism. The Sovereign Individual
(Davidson and Rees-Mogg 1997)—which in many ways is indistinguish-
able from the business-focused technology promotion of figures like Alex
and Don Tapscott, George Gilder, and Kevin Kelly—stands out for persistently
invoking far-right rhetoric, not least its contempt for and dismissal
of democracy as "the fraternal twin of communism" (328); its endorsement of
public choice theory (332), which Nancy MacLean (2017) puts at the heart
of the radical right’s attack on democratic governance; its continual nods
to the power of “machinery” as superior to democratic sovereignty; its nods
to “exit”; its description of democracy as the “nanny state” (95–126); and its
description of our era as similar to what the authors call “the last days of
politics” in the fifteenth century. The book reads much like it was written
by a modern-day Carl Schmitt, the Nazi jurist whose antipolitical theories
of raw power have surprisingly experienced a resurgence in the digital age.
This seems not at all coincidental. Although (or even because) Rees-Mogg
is reported not to have authored much of the book, which was largely writ-
ten by “American investment guru and conservative propagandist” James
Dale Davidson (Beckett 2018), he serves as a remarkable embodiment of the
protofascist ideology that informs much technology promotion. “Born in
Bristol into a family of Somerset gentry—Moggs have lived in the county
for centuries” (Bates 2012) reads an obituary. Despite being widely known
now by the title “Lord,” Rees-Mogg spent much of his life working as an
editor at The Times of London, where “he was regularly derided as ‘Mystic
Mogg’—a parody of a tabloid astrologer—for his occasionally perverse or
wrong-headed assumptions, but none could deny that his columns were
serious, if often pompous, or—a term he would have relished—influential
in circles that mattered.”
Both Davidson and Rees-Mogg fit snugly into the characterization of fas-
cism advanced by Landa (2019) and other major thinkers on the topic: a
movement directed by those with the most wealth and power, who are
expert at developing support among the citizenry with whom they have only
an exploitative relationship. In keeping with many other fascist and proto-
fascist statements of position, The Sovereign Individual seems to argue that
democracy is an impossible and ludicrous form of political organization, but
does so exclusively via dismissive adjectives. Democracy is repeatedly char-
acterized as a “civic myth,” and existing democracies characterized as “nanny
states.” The work follows the typical shape of anarcho-capitalist propaganda,
focusing on two particular issues as being all that one needs to know of
politics. The first is the canard that nation-states have a "monopoly on vio-
lence,” a bastardized reinterpretation of thinking from the sociologist Max
Weber, and that this “protection” is all a democracy has to offer, making
democratic states functionally the same thing as organized crime. “Gov-
ernment is not only a protection service; it is also a protection racket,” they
write. “Government often operates like organized crime, extracting resources
from people within its sphere of operations as tribute or plunder” (130).
The second prong of the authors’ anarcho-capitalism is the familiar far-
right complaint that democratic regulation of economic activity is inherently
illegitimate and the most important restriction on “freedom” imaginable.
liberty agreed on, at least, was that they were ‘the right,’ or the ‘right wing,’
and against ‘the left’ and anything ‘left wing’” (51).
It is unsurprising that Davidson and Rees-Mogg have incorporated these
lines of thought into their work, while disguising their political valence at
least rhetorically, and instead framing them as inevitable consequences of
the shift to digital technology. In the book’s penultimate chapter, “The
Twilight of Democracy,” the authors declare democracy to be “the fraternal
twin of communism” (1997, 328), before declaring that “analysis by Public
Choice economists” leads to the inexorable conclusion that digital tech-
nology enables “the more creative participants in the new economy [to be]
geographically distributed. Therefore, they are unlikely to form a sufficient
concentration to gain the attention of legislators” (332). The attentive reader
notes that Davidson and Rees-Mogg have earlier cited John C. Calhoun—
the influential congressional defender of chattel slavery and a signal archi-
tect of some of its most influential evasions such as “states’ rights”—as an
ur–public choice theorist, who “shrewdly sketched the arithmetic of modern
politics. Calhoun’s formula divides the entire population of the nation-state
into two classes: taxpayers, who contribute more to the cost of government
services than they consume; and tax consumers, who receive benefits from
government in excess of their contribution to the cost” (266). In this arche-
typal and quasi-Schmittian revisionist interpretation of the democratic
polity, Davidson and Rees-Mogg follow nearly to the letter the arguments
made by other right-wing advocates of “economic liberty,” including the
clutch of economists at wildly conservative institutions like the University
of Chicago and George Mason University.
MacLean shows that despite their claims to be interested only in economics,
a field at once deeply political and supposedly apolitical, these thinkers
repeatedly dip into the most noxious strain of American politics: the tradition
that developed pro-slavery politics prior to the Civil War, whose partisans
tried to articulate a coherent position in terms of a Constitution that
accommodated the slaveholding South but also contained much that
pointed toward its eventual dissolution. "Calhoun had no rival," she writes,
in developing methods “to construct the operations of democratic govern-
ment” (2017, 3). Calhoun’s “ideas about government broke sharply from the
vision of the nation’s founders and the Constitution’s drafters, and even from
that of his own party. He wanted one class—his own class of plantation
owners—to overpower the others, despite its obvious numerical minority.”
Writers like Davidson and Rees-Mogg, and some of their more recent
epigones like George Mason economists Tyler Cowen and Alex Tabarrok,
refer to Calhoun's economic theories as precursors of their own; Cowen and
Tabarrok specifically describe Calhoun as a public choice theorist. However,
they also acknowledge Calhoun's advocacy for slavery as revealing a "lack
of ethical foundations which continues to hurt his reputation and draw
attention from his more valid and interesting contributions” (Tabarrok and
Cowen 1992, 671–72). These writers lack a sophisticated understanding
of race and racial politics, and as such they are uninterested in or outright
dismissive of cultural analysis that seeks to identify tropes, narratives, and
connections between the various expressions of politics, economics, and
culture. A cultural critic may well wonder why a purely economic theory
would choose to ground itself in the thought of the most prominent pro-slavery
politician in U.S. history, and how that theory could emerge in the wake of
arguably the most significant pro-civil rights decision in Supreme Court
history while claiming to have nothing to do with race; few thinkers of
non-right-wing political orientations will find the claim that race is
irrelevant here persuasive. This is particularly true since the nod toward Calhoun is unnecessary
in theoretical terms.
The Sovereign Individual follows the shape of protofascist theorizing as
articulated by Landa and others. It claims to be a theory of political freedom,
but it rapidly reinterprets freedom to mean exclusively economic liberty.
This interpretation excludes and outright rejects the democratic values
that motivated the theory of political freedom to begin with. The theory
of economic liberty is not a story about how everyone can achieve greater
freedom or rights. Instead, it is a theory explicitly directed at those with
extreme wealth. The book makes clear that the “sovereign individual” of
the title is only a person who has the economic power to buy escape from
the reach of the democratic polity, which will still exist but be deprived
of the resources once provided by the very wealthy. Much like the “theory”
of ur-cyberlibertarian George Gilder, to which Davidson and Rees-Mogg
refer throughout their own book, The Sovereign Individual was explicitly,
if somewhat quietly, an advertisement for the investment newsletter the
authors published, Strategic Investment (Davidson and Rees-Mogg 1997,
401–3), along with “an investment club for accredited individuals, Strategic
Opportunities” (402), whose $995 annual subscription price clearly identi-
fies who is and is not accredited. The authors recommend that individuals
must be strong” (381). The first example of a “strong social morality,” fit-
tingly enough with the Nietzschean tinges of the sovereign individual idea
itself, is Hitler, who “had a strong morality for survival, but its destructive
quality nearly destroyed his own society." Even granting the claim that
Nazism promoted the "survival" of some group (Aryans, presumably),
describing that as "morality" of any kind is to misunderstand fascism.
In just a few pages scattered throughout the book, the authors come
close to admitting the real shape of their political program. Reflecting for
a moment on political liberty—that is, democracy—as it was understood
by its major theorists, the authors note that
a shared morality in a tolerant society was the ideal of John Locke and of early
philosophers of liberty. They did not at all believe that a society, of any kind,
can be maintained without rules, but they thought that the rules ought to be
subject to the best of reason, and that people should be coerced to accept
only the essential rules. They did recognize that coercion was inevitable in
social morality, particularly in the protection of life or of property, because
they considered that no society can survive if there is no security. They
applied an almost absolute tolerance to variations in personal choices that
did not affect the welfare of others. (382)
Yet the authors quickly back away from this apparent endorsement of demo-
cratic theory. “The original phrase of John Locke had it right” (383), they
write: “Everyone has a right to life, liberty, and estate.” Thomas Jefferson’s
alteration to “life, liberty and the pursuit of happiness” was a mistake,
despite being a “very fine phrase”: “Society depends absolutely on the right
to life and the right to property.” Thus as Landa’s (2010) analysis shows, the
language of liberalism is twisted against itself, and freedom for economic
activity replaces the rest of the values theorists like Locke, whatever their
faults, actually recommended.
Despite this sometimes obvious, sometimes occluded protofascism, especially
the authors' full-throated support for unregulated markets as the only proper
forces for justice and rights, Davidson and Rees-Mogg's work is frequently
mentioned in contemporary digital advocacy. As Wendy Chun has written,
the book’s “vision has fueled and still fuels the development of seasteading,
cryptocurrencies and other plans for escape that dominate today. That it
gets many things wrong is no comfort, however, for closing the distance
between its predictions and reality drives many Silicon Valley business
plans” (2021, 12). It does not merely drive them to develop a similar or
sympathetic viewpoint. Its foundational observations come directly from
digital technology propagandists. One of the figures whom the book quotes
most frequently is George Gilder, patron saint of Wired magazine, arch
anti-feminist (Borsook 2000), and coauthor of the “Magna Carta for the
Knowledge Age” (Dyson et al. 1994), which served as the basis for Winner’s
original analysis of cyberlibertarianism (1997). Even more significant, the
book is routinely invoked by Silicon Valley leaders, especially those whose
sympathies tend toward far-right politics.
Among core cyberlibertarian texts, The Sovereign Individual has special
status. It is routinely invoked, especially by venture capitalists and promoters
of cryptocurrency and blockchain, to convey a bizarre mix of protofascist
politics and technological utopianism. Notably, Peter Thiel considers it
one of his “six favorite books that predict the future” (The Week Staff 2016)
because it “breaks the taboo on prophecy: We’re not supposed to talk about
a future that doesn’t include the powerful states that rule over us today.
Rees-Mogg and Davidson argue that national governments could soon be-
come as antiquated as 19th-century empires.”
Cryptocurrency advocates have taken the book and run with it, and in
their hands its overtones become explicit. Even cryptocurrency and technology
promoter E. Glen Weyl, who is himself given to technology-driven
syncretism, states in a recent review of the book that there is very little
new in it. Rather, it "takes as a starting point a roughly Thomas Friedman–
esque ‘the internet flattens the world’ and a set of predictions about tech-
nology that were prevalent in the 1990s” (2022). Then “it layers on top of
this an Objectivist (viz. the philosophy of Ayn Rand) worldview.” “Aspir-
ing ‘sovereign Individuals,’” he writes, “have become the closest of political
allies to and business funders of precisely the sort of movements and busi-
nesses that the book would see as epitomizing contemptible reaction.” The
book’s readers and promoters, he argues, “are deliberately sowing reaction
and discord to accelerate the collapse of the societies that allowed them to
reach their positions of power to be liberated from the remaining con-
straints those societies impose, a course of action explicitly advocated by
thinkers Thiel has funded such as Curtis Yarvin (aka Mencius Moldbug).”
Weyl concludes:
In short, Sovereign Individual is roughly the Das Kapital of the Ayn Rand
worldview. It is a profoundly inaccurate statement of fact and set of projec-
tions intended to create a self-fulfilling dystopia. It has had a powerful influ-
ence on many of those shaping our digital future, particularly in the crypto
space. Those subscribing to it should be persuaded where possible but resisted
at every turn where not. Anyone considering allying with them politically or
taking funding from them should be thinking of it in similar terms to how
they would consider doing the same with an open adherent of a totalitarian
ideology. The world that has captured their imaginations is not one we must
or should want to live in.
directly onto the internet. The new communities that dominated the web
largely derived their habits, style, and culture from otaku culture. People
interacted on the web as collectives of isolated individuals, immersed in
fantasy products, cruel-minded gore, and self-obsession, all a means of
escape from the multiplying anxieties and dissatisfactions of real life. And
at the center of this, the quivering, vulnerable, and pale underbelly of the
internet that would digest it all, would be the chans” (36). A succession of
bulletin boards of this sort proliferated in Japan, “collectively referred to as
Nanashii Warudo (‘The Nameless World’)” (37). In the Nameless World,
otaku could “discard not only the hierarchy, but the sad fact of themselves,
and roam not simply without their bodies but without their souls in a
ghostly Saturnalia where all laws, prohibitions, and even human identity
dematerialized into a catalog of interests, desires, and self-gratification.” The
overlap between this “ghostly Saturnalia” and the philosophy of anarcho-
capitalists, which is nominally rule-free but also profoundly authoritarian
and destruction-friendly, is fundamental, not incidental.
Many of the specifics of chan culture, including its near-obsessive focus
on anime and manga, ported over to versions of the Nameless World that
traveled outside Japan. The embrace of cyberlibertarian principles as foun-
dational also followed suit. From the Amezou (First channel) service that
introduced an algorithm that pushed popular content to the top, to its suc-
cessor Ni channeru (2channel)—developed by a student named Hiroyuki
Nishimura who in 2015 took over as administrator of 4chan itself from its
creator, Christopher Poole—the chans prize absolute free speech and ano-
nymity as the core values of digital culture. They also embrace a culture of
destruction and hate but seem uninterested in reflecting on how this relates
to their other principles. As Beran points out, Japanese users of 2channel
found “relief in escaping Japan’s strict hierarchy of polite deference. Unlike
the hyper-polite real world, people found they could be rude to one another
with impunity” (38). Regardless of whether such social conventions are wel-
come or constricting, the 2channel rejoinder to them embraces a different
politics altogether: “2channel’s most popular replies were ‘omae mo na! ’ (‘you
too, asshole’) and ‘itteyoshi’ (meaning either ‘please leave’ or ‘please die’).”
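The "floating thread" format that, as described above, Amezou introduced and that its successors inherited can be made concrete with a small sketch. The following Python fragment is only an illustration of the bump-to-top mechanism as it is commonly described, not a reconstruction of any actual board's code; all names are hypothetical.

    from datetime import datetime

    class Board:
        """Toy model of a chan-style board: threads float by recency of reply."""

        def __init__(self):
            self.threads = []  # each thread is a dict holding a title and its replies

        def post_thread(self, title):
            thread = {"title": title, "replies": [], "bumped": datetime.now()}
            self.threads.append(thread)
            return thread

        def reply(self, thread, text):
            thread["replies"].append(text)
            thread["bumped"] = datetime.now()  # any reply "bumps" the thread

        def front_page(self):
            # Most recently bumped threads appear first; ignored threads sink.
            return sorted(self.threads, key=lambda t: t["bumped"], reverse=True)

    board = Board()
    anime = board.post_thread("anime thread")
    politics = board.post_thread("politics thread")
    board.reply(anime, "bump")
    print([t["title"] for t in board.front_page()])  # anime thread rises to the top

Under this kind of ordering, whatever provokes the most replies rises to the top, a design that arguably rewards exactly the provocation and rudeness Beran describes.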
Beran explains that 4chan’s development in the United States was not
initially modeled after Japan’s, but was instead influenced by the deep en-
gagement with digital technology, Wired magazine, and the video game
hard to imagine more influential tools for promoting fascism than the
chans and similar sites. In short, promoting fascism is arguably one of the
main reasons sites like 4chan exist.
4chan and its offshoots are at the heart of nearly every fascist and protofascist
movement in the countries where they are widely used. The chans
spread movements like GamerGate, which served as a precursor and model
for subsequent right-wing conspiracies such as Pizzagate and QAnon. An
entirely manufactured controversy ostensibly concerned with “ethics in
games journalism," GamerGate was an "anti-feminist, anti-identity politics,
anti–social justice warrior, and in some cases, just plain anti-women” (Beran
2019, 145) fascist cultural uprising whose promoters insisted that “it was they,
young white men, who were the marginalized outcasts.” Despite the clear
similarities to reactionary protofascist movements in areas where white
supremacy is dominant, many in the media and even some scholars (see
Mueller 2015) were divided on the issue, giving GamerGate supporters the
benefit of the doubt they did not deserve and paving the way for new move-
ments that use the same model. Although some commentators recognized
that GamerGate was a protofascist hate campaign, and that "the myth of
the Wild West of tech and the 'be yourself' freedom of microblogging
platforms leads to belief that online harassment is the disconnected work
of individuals, when really strategic organizing is taking place" (Harry 2014),
far too many in online advocacy, especially the digital rights communities,
at best decried the actions of individuals and groups as abhorrent, while
maintaining that the underlying technology and the politics surrounding it
were somehow the solution to the problems they had created.
Reddit’s management acted with at least some haste to shut down sub-
reddits like r/The_Donald (Peck 2020) that had been at the heart of hate
campaigns and conspiracy theories used by Donald Trump to target his
enemies and spread lies. They also added much stricter moderation rules
to the entire site. It is unsurprising that these measures were largely effec-
tive (Coldeway 2017). However, it is difficult to say whether the wider
internet community has absorbed the lesson.
In a telling event that presaged what was to come, in 2009 4chan was the
subject of a denial of service attack (i.e., a flood of traffic likely intended to
bring down the site). Trying to manage the situation, AT&T shut off access
to 4chan, a typical way of managing such floods. Yet the prospect of
AT&T shutting off access to the site was met with horror and outrage across
the internet. “The threats to the freedom and freedom of speech on the
internet are legion, and AT&T is the evil empire that netizens love to
hate,” as a journalist for The Guardian characterized the response (Ander-
son 2009). Even though moot, the 4chan founder, stated that “this wasn’t
a sinister act of censorship, but rather a bit of a mistake and a poorly exe-
cuted, disproportionate response on AT&T’s part,” cries of suppression of
free speech, “censorship,” and violations of net neutrality were widespread.
Rather than prompting questions about the politics embodied in these dogmatic
calls, the calls were instead often taken at face value. This was echoed by
moot’s statement that AT&T’s response was “disproportionate,” despite
the fact that only the network service provider, not the website manager,
was in a position to understand the nature of the attack and the best strat-
egies to address it.
4chan and its offshoots are not simply “unfortunate” and “unforeseen”
consequences of cyberlibertarian dogma: they are among its most direct and
pure expressions. 4chan users regularly invoke most of the major points
of that dogma and organize harassment campaigns against anyone who
questions them. They particularly focus on free speech, encryption, and
anonymity, which is unsurprising given the platforms’ anonymous nature.
Indeed, the promotion of absolute free speech is a notable overlap between
the far right and digital rights advocates, and this is especially visible on
sites like 4chan, along with far-right social media sites like Gab and Truth
Social. Mass murders around the world are regularly traced back to 4chan, and
antigovernment insurrectionists routinely post there.
The politics represented by Julian Assange and WikiLeaks are some of the
most divisive and controversial in the digital technology space. As discussed
in chapter 2, the cluster of individuals surrounding WikiLeaks all display
significant investments in right-wing libertarianism, despite their remark-
able ability to solicit support, and even political defense, from those whose
politics are very different. Among the digerati, it is difficult even to discuss
the political libertarianism of Assange, Greenwald, and Snowden. All three have
consistently associated their own digital politics with libertarian or even
more extremist right-wing principles. It is important to ask how and why
people with other politics should sign on to their campaigns.
Yet simply gesturing at their politics as libertarian risks giving them too
much credit. The right-wing libertarianism these figures advocate, even in
the syncretic variety seen with special force in the case of Glenn Greenwald,
does not merely tend toward and constantly create openings for overt fascism.
In Assange's case especially, fascism has always been out in the
open. Yet even here, the political force of cyberlibertarianism is such that he
has many supporters, even on the progressive left, who refuse to acknowl-
edge the basic facts, let alone what they say about the digital politics Assange
advances. Assange's supporters continue to portray his politics as something
different from what they are, despite his repeated and explicit demonstrations
of those politics. They present him and his cause as anything other than fascist.
Assange exemplifies syncretism or fascist entryism—a figure who continues to
draw people from the left toward the far right, precisely because he uses
cyberlibertarian pseudo-causes to obscure his own clear political commitments.
While hagiographic accounts of Assange’s life and work portray him as
a heroic whistleblower and journalist focused on a just society, more neutral
and objective accounts paint a different picture, even when those accounts
come from Assange’s close associates who believe (or at one time believed)
in his apparent mission. One need not go even that far, though, to see
the syncretism in Assange’s political views. One of his clearest statements
of purpose is found in the coauthored book Cypherpunks: Freedom and the
Future of the Internet (Assange et al. 2012), published by OR Books, which
itself is a persistent source of syncretic fascist propaganda. Assange is some-
times seen as something other than a technological utopian, due to state-
ments like “the internet is a threat to human civilization” (1), which is
prominently featured on the book’s back cover. Yet that obscures the more
familiar cyberlibertarian sentiments offered on the same page as well as
throughout the book: the internet is “our greatest tool of emancipation” (1);
and the only reason it is not fulfilling that role is that “states are systems
through which coercive force flows” (2)—a tenet of anarchism, not democ-
racy. Although the relationship between anarchism and the left-right dis-
tinction is complex, Assange makes clear that he is firmly on the right-wing
side of the divide: “Land ownership, property, rents, dividends, taxation,
court fines, censorship, copyrights and trademarks are all enforced by the
threatened application of state violence” (2–3).
None of this should be surprising. The book is explicitly titled after the
cypherpunks, the overtly anarcho-capitalist encryption advocates from
whom so much of the right-wing nature of internet advocacy stems. This
Recall that states are the systems which determine where and how coercive
force is consistently applied.
The question of how much coercive force can seep into the platonic
realm of the internet from the physical world is answered by cryptography
and the cypherpunks’ ideals.
As states merge with the internet and the future of our civilization
becomes the future of the internet, we must redefine force relations.
If we do not, the universality of the internet will merge global humanity
into one giant grid of mass surveillance and mass control. (6)
opposites. The project was “devoted above all to one goal: subjecting the
power that was exercised behind closed doors to global scrutiny” (Domscheit-
Berg 2011, ix). Most world democracies already believe that government
operations should be exposed to public scrutiny, so there is a conspiratorial
edge to this statement, even if it is also true. But then Domscheit-Berg
asserts that what most convinced him the sentiment is correct was Assange’s
own conduct: "Over the course of my time with Julian Assange at WikiLeaks,
I would experience firsthand how power and secrecy corrupt people."
He hoped that the “almost reticent criticism” he expressed about Assange’s
conduct “would cause people to question the power of WikiLeaks and the
chief figure behind it," but "in fact, the opposite happened." Despite
WikiLeaks' professed commitment to "transparency" and revealing the false-
hoods of the powerful, “Julian also had a rather casual attitude toward the
truth” (65), reflecting the typical fascist and protofascist stance of mistrust-
ing “power,” yet harboring a strong desire to seize it. Perhaps especially
revealing of this dynamic is Domscheit-Berg’s observation that “in early
2010 his tone toward me changed radically. ‘If you fuck up, I’ll hunt you
down and kill you,’ he once told me” (70–71).
By 2016, it was becoming clear to many others that Assange was not pur-
suing transparency or accountability per se, but instead trying to realize an
agenda that was distinctly political and strongly aligned with the far right.
Even in early 2016, Die Zeit editor Jochen Bittner, in a New York Times opin-
ion piece called “How Julian Assange Is Destroying WikiLeaks,” affirmed
that the site “has been a boon for global civil liberties” but “the problem is
that the project is inseparable from the man. Mr. Assange has made little
secret about his skepticism toward Western democracy and his willingness
to work with autocratic governments like Vladimir V. Putin’s Russia. His
personal politics undermines WikiLeaks’ neutrality—and the noble cause
for which WikiLeaks used to stand. What we need is a WikiLeaks without
the founder of WikiLeaks.” Despite referring to WikiLeaks as embodying
a “noble cause,” Bittner raises important questions about its basic mission:
One element of Assange-think has been clear from early on: There is no
such thing as a legitimate secret. The public is entitled to share any knowl-
edge governments hold. Only complete transparency can stop and prevent
conspiracy. . . .
This is not only nonsense, it is dangerous radicalism.
He goes on: “In his simplistic reading, the West is hypocritical because it
stands for civil liberties, and all secrets are antithetical to liberty. No won-
der he got a show on Russian television—his viewpoint puts him nicely
in line with Mr. Putin’s ideological agenda.” Despite the nod toward civil
liberties, the ideological agenda Assange and Putin share is nothing like
democracy—it is fascism.
At the core of fascism is the belief that people are divided into kinds.
Ideologues use all manner of descriptive terms to divide people into groups,
sometimes stressing their own superiority (Aryans, Übermenschen, white
supremacists) and sometimes the inferiority of others (racial and ethnic
slurs, "sheeple," "muggles"). Assange's case shows how easily that belief
emerges in contexts that seem opposed to it. In a recent full-throated
defense of Assange, mounted precisely because he "adopted the principles of
cypherpunk ethics, but he placed them into a distinctively cosmopolitan context.
By combining cypherpunk ethics with antiwar values and Enlightenment
ideals," Patrick Anderson (2021, 307) avoids all discussion of either cypherpunk
politics or Assange's personal politics. At the center of those cypherpunk
ethics and Assange’s own is the slogan “transparency for the powerful, pri-
vacy for the weak.” Anderson and most other supporters of Assange do not
seem to notice that the slogan splits the human world into two groups—
“weak” and “powerful”—and grants only certain, self-elected persons the
right and power to decide who belongs in each group.
It is important to remember that historical fascists, including those in
Italy and Germany before and during World War II, typically cast them-
selves as both the “weak” and the “powerful” depending on need. From
Nazis claiming Jews actually control the world to KKK members attacking
Black Americans for their alleged physical or sexual prowess, the key is always
a denial of the core commitment of democracy—that we are all the same.
Fascists decide on their own how society should be arranged, without re-
gard for the rights or interests of people in general. As we have seen, this
antidemocratic splitting is evident all over cypherpunk propaganda and is
frequently made explicit in the anarcho-capitalist literature on which cypher-
punk is based.
It is unsurprising that Assange, despite his rise to immense cultural and
social power worldwide, continued to assert his absolute right to categorize
himself as “weak,” and thus entitled to “privacy.” Meanwhile, he continued
to expose and attack civil servants and others, who were not implicated in
The rise of the alt-right in the early 2010s brought with it a host of new,
online-based political movements characterized by neologisms and frantic
revisionist rewriting of history, philosophy, and political theory. Birthed
economist Robin Hanson on the blog Overcoming Bias. This would later
be the basis for LessWrong, a community blog for Overcoming Bias and run
under the umbrella of SIAI, now known as MIRI (Machine Intelligence
Research Institute). The initial audience for LessWrong were fellow transhu-
manists, including the Extropians and SL4 mailing lists. In 2007, Curtis
Yarvin started the first neo-reactionary blog, Unqualified Reservations under
the pseudonym Mencius Moldbug, though he did not call himself, initi-
ally, “neo-reactionary”: he preferred to call himself a “formalist” or a “neo-
cameralist” (after his hero, Frederick the Great). This, however, was not the
beginning of his blogging career. Prior to founding his own blog, Moldbug
commented on 2Blowhards and GNXP (a racist site) as ‘Mencius’–and then
on Overcoming Bias. (2016, 15)
As journalist Corey Pein put it, speaking just of Yarvin but in a pithy phrase
that could be applied to the writings of all these figures, “Moldbug reads
like an overconfident autodidact’s imitation of a Lewis Lapham essay—if
Lewis Lapham were a fascist teenage Dungeon Master” (2014).
Populist derogation of expertise and the use of blogs and social media
sites as primary sources of information are only two of the points of har-
mony between neoreaction and cyberlibertarianism. They also come to-
gether around core issues such as decentralization, internet freedom/freedom
of speech, anonymization, and encryption. Additionally, they incorporate
fundamentally right-wing political precepts as if they flow naturally from
purportedly immutable facts about the world, such as technological de-
velopment or human nature. As Pein wrote with real prescience in 2014,
“Moldbuggism, for now, remains mostly an Internet phenomenon. Which
is not to say it is ‘merely’ an Internet phenomenon. This is, after all, a tech-
nological age. Last November, Yarvin claimed that his blog had received
500,000 views. It is not quantity of his audience that matters so much
as the nature of it, however. And the neoreactionaries do seem to be influ-
encing the drift of Silicon Valley libertarianism, which is no small force
today. This is why I have concluded, sadly, that Yarvin needs answering.”
He goes on:
We want to show what a society run by Silicon Valley would look like. That’s
where “exit” comes in. . . . It basically means: build an opt-in society, ulti-
mately outside the US, run by technology. And this is actually where the
Valley is going. This is where we’re going over the next ten years. . . . [Google
cofounder] Larry Page, for example, wants to set aside a part of the world
for unregulated experimentation. That’s carefully phrased. He’s not saying,
“take away the laws in the US.” If you like your country, you can keep it.
Same with Marc Andreessen: “The world is going to see an explosion of
countries in the years ahead— doubled, tripled, quadrupled countries.”
(Srinivasan 2013)
The “gifts” referred to are all political and cultural. Such activists insist that
an emancipatory politics can only be realized by following rules that they
claim are built into the technology, which is above and beyond politics.
However, this sublates the politics that ordinary people understand into
a purportedly superior version.
One of the areas where the far right and digital technology connect is
artificial intelligence, along with a seemingly unrelated philosophical doctrine called
“effective altruism.” These come together in the so-called rationalist com-
munity, a widely distributed group of people with especially pronounced
influence online and troubling ties to many of the far right’s leading figures.
RationalWiki—a project that has roots in some of the same intuitions and
ideas that ground the rationalist movement but embraces a far more varied
and evidence-based approach to argumentation—provides some sense of the
close interconnections among rationalism, AI, and the far right. The best-
known hub for rationalism is a blog and community discussion site called
LessWrong. As RationalWiki explains it, “LessWrong is a community blog
focused on ‘refining the art of human rationality.’ To this end, it focuses on
identifying and overcoming bias, improving judgment and problem-solving,
and speculating about the future. The blog is based on the ideas of Eliezer
Yudkowsky, a research fellow for the Machine Intelligence Research Insti-
tute (MIRI; previously known as the Singularity Institute for Artificial Intel-
ligence, and then the Singularity Institute). Many members of LessWrong
share Yudkowsky’s interests in transhumanism, artificial intelligence, the
Singularity, and cryonics” (“LessWrong”). The entry goes on to explain the
close ties between LessWrong and the particular conception of AI advanced
by Yudkowsky and others in his orbit:
MIRI and Yudkowsky differ from most academic and commercial research
on AI by taking advantage of a key ambiguity built into AI discourse from
its inception. For several decades, academics, mostly engineers and less often
experts in psychology or neuroscience, believed in AGI (artificial general
intelligence), which is also known as “strong AI.” These advocates seem
to share the intuition that machines are one day going to “wake up” and
“think,” words that must be put in quotation marks because of the many
background assumptions they import almost beyond the notice of many
people. First, they equate “intelligence” with the kinds of algorithmic pro-
cessing done by computers, which is already a tendentious proposal. Next,
they equate that kind of intelligence with “thinking,” even though few
people who study the mind, whether human or animal, would agree that
“intelligence” and “thought” refer to the same things. Next, they equate
“thinking” of this highly algorithmic sort with “consciousness,” although
this is done almost entirely under the table.
What we today call computers were once called “thinking machines,”
and fiction is replete with what appear to be representations of machines
that think, talk, and feel much like human beings do. These representa-
tions have led many to believe that such constructions are possible, with-
out doing the necessary conceptual and empirical work to show that this
is the case. The academic work that supports AGI typically makes a huge
leap from the notion of a “generalized problem solver” that could theo-
retically answer any question—much like Siri and other voice assistants
can now answer questions thanks to interfacing with search engines and
encyclopedias—to a machine that is actually conscious. These thinkers,
who tend to come from computer science and engineering, have at best
threadbare accounts of core concepts like “feeling,” “knowing,” and “think-
ing.” Instead, they assume that “processing rules algorithmically” is equiva-
lent to all these other concepts. In some cases it seems right to see in this
assumption a real hatred for the non-algorithmic parts of human beings
(as well as other living beings that we guess possess some form of conscious-
ness, especially the higher mammals). It is no accident that one of the
positions in recent philosophy associated with this way of thinking is called
“eliminative materialism,” a program of reduction that suggests that in some
future version of science, all the messy parts of human consciousness will
be understood in mechanistic terms.
Despite their commitment to rational enquiry, the rationalists simply
assert without evidence that there is a discrete quantity called “intelligence,”
which can be measured by IQ tests. They believe that this quantity is not
just related to, or part of, the mind, but simply is the mind. Because their
account of intelligence maps onto what computers can do, this somehow
means that as computers become more powerful, humanlike minds will arise
in them. This sub-rosa account of superior intelligence rooted in quantifi-
able measures like IQ is why I and others have suggested that the dream of
AGI has much in common with white supremacy (Golumbia 2020b; Katz
2020). Further, AGI in this extended, nonacademic form has more than a
suggestion of messianic religion. On this view, when AGI arises, it will not
simply be another humanlike mind in the world. Instead, it will be vastly
smarter than us, and therefore vastly superior. This will produce an epistemic
and cultural crisis. For all intents and purposes, a god will have been created.
This is a bizarre assumption to find at the heart of a community that
prides itself on rational thought. It is especially odd given that some of
those at the edges of this community, like right-leaning provocateur Sam
Harris, profess a deep atheism that is not infrequently racist, especially
toward Islam. They hold to a belief system about machines that demands
we accept inexplicable miracles not just as possibilities but as realities from
which we dare not turn away. This is where the story of Roko’s basilisk
comes in. As RationalWiki puts it:
the philosophy department. The CCRU existed from 1995 to 1998, although
its unofficial status makes it difficult to establish precise dates of its exis-
tence. Despite its brief tenure, the project continues to exert considerable
influence over contemporary digital culture. Many prominent writers and
theorists, especially those associated with digitally oriented movements such
as speculative realism / object-oriented ontology and accelerationism, were
either trained by or worked with Land and the CCRU.
The CCRU, like many of the early quasi-political digital theory move-
ments, combined a philosophy directly constructed on foundations derived
from digital technologies and the thinking surrounding them, with fringe,
science-fictional, drug-oriented, and conspiracy material. The CCRU site
has a page of links that includes information on topics such as alien abduc-
tion, UAP, the Anunnaki Invasion, astrology, the Cthulhu Mythos, theosophy,
Atlantis, nanotechnology, and work from prominent computer
scientists like AI researcher Marvin Minsky and roboticist Hans Moravec.
Their theoretical work showed a taste for the most extreme elements of
Continental thought, namely Gilles Deleuze, Félix Guattari, and in par-
ticular Georges Bataille, on whom Land did some of his earlier and more
conventional work.
One of the CCRU’s most telling early concepts was the notion of “cyber-
positive.” In their 1994 text “Cyberpositive,” Plant and Land explicitly in-
voke the AIDS epidemic, which at that time seemed beyond the reach
of medical science. “Cyberpositive” recalls the then-ubiquitous phrase
“HIV positive,” suggesting that one might embrace and even deliberately
spread the fatal infection. The authors’ tone already shows signs of craving
an apocalypse whose near-term advent they seem certain about: “Catastro-
phe is the past coming apart. Anastrophe is the future coming together.
Seen from within history, divergence is reaching critical proportions. From
the matrix, crisis is convergence misinterpreted by mankind. The media
are choked with stories about global warming and ozone depletion, HIV
and AIDS, plagues of drugs and software viruses, nuclear proliferation, the
planetary disintegration of economic management, breakdown of the fam-
ily, waves of migrants and refugees, subsidence of the nation state into its
terminal dementia, societies grated open by the underclass, urban cores in
flames, suburbia under threat, fission, schizophrenia, loss of control.”
Plant and Land directly connect their apocalyptic vision to digital tech-
nology and the politics they claim it embodies:
Cruelly mixing Lenin, Mussolini, and Roosevelt into a single “modern” edi-
fice of “economic planning”—notably, an unacknowledged gesture at the
totalizing antidemocratic extremism of Hayek, the Liberty League, and the
John Birch Society—Plant and Land develop the idea that there are two
strains of digital technological development. One aims toward "stability,"
the other toward "contagion," with the latter now construed
as “positive” in much the same way being infected with HIV can be archly
reinterpreted as “positive”: if, that is, one maintains a relentlessly negative,
upside-down vision of society and its possibilities, in which one courts
destruction and disease as desirable outcomes.
Plant and Land both worked in the then (and to some extent, still)
prominent mode of European Continental philosophy, which has generally
been taken to have a left-leaning political orientation. In a 1998 interview
and analysis (published online in 2005), journalist and CCRU participant
Simon Reynolds sums up the CCRU program:
“Cyberpositive” was originally the title of an essay by Sadie Plant and Nick
Land. First aired at the 1992 drug culture symposium Pharmakon, “Cyber-
positive” was a gauntlet thrown down at the Left-wing orthodoxies that still
dominate British academia. The term “cyberpositive” was a twist on Norbert
Both Land and Moldbug tell stories full of familiar fascist tropes, inflected
with a “new” form of racial hatred updated for the digital age. Here again
is Topinka:
Land and Moldbug are profoundly lapsarian thinkers. For them, progressivism—the
conspiracy the "Cathedral" sustains—is the fall that obscures
and indeed encourages the degeneration of the races. Land . . . argues that
the progressive Enlightenment follows the “logical perversity” of “Hegel’s
dialectic,” enforcing the “egalitarian moral ideal” through progressivism’s
sustaining formula: “tolerance is tolerable” and “intolerance is intolerable.”
This formal structure guarantees a “positive right to be tolerated, defined ever
more expansively as substantial entitlement.” . . . If progressivism is the fall,
tolerance is the juggernaut that tramples any attempt at ascent. For Land,
the American Civil War is a moment of original sin that "cross-coded
the practical question of the Leviathan with (black/white) racial dialectics.”
(interpolated quotations are from Land 2013)
From the right angle, Land’s thought reads like an uncanny squaring of
a circle that began with William Shockley’s dual promotion of semicon-
ductors and racial eugenics at the heart of Silicon Valley. Topinka calls this,
quoting Peter Thiel, a desire to return “back to a past that was futuristic,”
with all the technological trappings of futurity but the past’s acceptance of
racial hierarchy. The problem is still the “Cathedral”:
Land proposes as a formal fix “hyper-racism,” his vision for accelerating the
“explicitly superior” and already “genetically self-filtering elite” through a
system of “assortative mating” that would offer a “class-structured mechanism
for population diremption, on a vector toward neo-speciation.” . . . This is
eugenics as a program for exit, not only from the progressive Enlightenment
but also from the limits of humanity. Despite its contemporary jargon, this
hyper-racism is indistinguishable in its form from late Victorian eugenics,
which also recommended a program of “assortative mating.” Of course, now
eugenics places us on a vector toward neo-speciation; so it’s back to the past,
but now it’s futuristic. (quotation from Land 2014b)
DEATH TECHNOLOGIES
The case of Jim Bell is one of the most revealing offshoots of the cypherpunk
movement, yet it is not well known outside the circle of critics of digital politics.
Bell was an early member of the Cypherpunks mailing list (and continues
to participate on it to this day). After graduating from MIT and work-
ing at Intel, in 1982 he formed SemiDisk Systems, which produced exter-
nal RAM disks (McCullagh 2001). Bell was convicted on conspiracy and
weapons charges in the late 1990s. During a 2001 federal prosecution, Bell
declared that “he has been a member of the Libertarian Party and indi-
cated his political beliefs were anarcho-libertarian, saying: ‘I don’t advocate
chaos. I don’t think there should be a lack of order. I think there should be
a lack of orders.’” He was not simply a dabbler in right-wing libertarianism
but an active supporter of some of its most extreme exponents, including
the militia and sovereign citizens movements. The Vancouver, Washington,
newspaper The Columbian reported in 1997 that Bell “frequented meetings
of the Multnomah County Common Law Court, an anti-government group
with no legal authority” (Branton 1997).
Bell distinguished himself in crypto-anarchist history by writing and re-
leasing a ten-part essay in 1995 and 1996 called “Assassination Politics” (often
referred to by the initials AP; Bell 1997). It is worth noting that political
Hanson’s projects have been criticized for incentivizing violence and murder;
his own defenses (see esp. Hanson 2006) typically avoid addressing the
issue directly.
Currently, assassination marketplaces seem technologically and practi-
cally infeasible, due in part to problems at the intersection of information
and the physical world that technology promoters are unable to acknowl-
edge. The “oracle problem” is a particularly vexing issue, as it is difficult to
understand how an entity could provide physical proof of an assassination
to a software program, but not provide that same proof to law enforcement
or someone seeking revenge for the first killing. In addition most crypto-
currencies, although originally and erroneously understood by many to be
anonymous, are very nearly the opposite. However, there have been proj-
ects that have a disturbing amount in common with the goals laid out by
Bell. In 2018, a decentralized betting market called Augur was launched
using the so-called smart contract features of the Ethereum blockchain.
Almost immediately, “a single person used the Augur protocol to bet on
whether President Donald Trump would survive 2018, and that person
apparently placed the bet on a lark after a Twitter discussion” (McCullagh
2018). Declan McCullagh, the journalist who had previously covered Jim
Bell’s legal cases in the early 2000s with skepticism, dismissed concerns
about the use of Augur to incentivize assassination as “panic” and “hand-
wringing.” His dismissal, however, seemed more properly directed at the
question of whether Augur could be used for AP than at whether AP itself
is desirable or possible.
Bell, May, and other cypherpunks saw technology affordances that could
make democratic governance impossible. They described a direct path from
“uncensorable,” encrypted, anonymous technologies including cryptocur-
rencies, to a distributed fascism. According to their vision, the greatest crime
of all is to attempt to govern responsibly, and that crime would be punish-
able by death. Few visions of the human future are darker than this one,
which embodies the distributed fascism that informs so much digital agita-
tion, whether knowingly or unknowingly. Across the political spectrum, and
especially among the racist right (Miller 2022), violence directed at one’s
political opponents seems to be an increasingly acceptable option. It is
hard not to wonder how few technology “innovations” might be necessary
for the Mad Max future to be realized. This vision is shared by anarcho-
capitalists like Bell and May along with the armed members of right-wing
militia movements and their only slightly more moderate allies in the gun
rights movements. Many of them have accepted the rightist claim that fas-
cism is ultimately about “big government,” rather than the embrace of death
and destruction no matter who the target. This leaves them open to falling
into a fascist dystopia. A smaller group actively seeks to create it. The fact
that these actors have placed digital technology at the center of their goals,
often deriving their politics directly from what they consider to be core fea-
tures of such developments, should give everyone who seeks to resist such
dystopias serious pause about the technology’s unchecked proliferation.
One of the most troubling and strange extensions of the logic underlying
AP is the promotion of weapon printing using 3D printers and other off-
the-shelf technology to circumvent governmental regulations on guns and
other weapons. In the United States, where gun regulation is next to nonexistent,
this is already a serious problem. It is even more acute in
other democracies that strictly limit the availability of guns and other weapons,
where weapon printing poses a direct affront to democratic governance. The
public figure most closely associated with 3D printed guns is Cody Wilson (see
chapter 4), whose bizarre syncretic manifesto Come and Take It: The Gun
Printer’s Guide to Thinking Free (2016) deserves special mention as a docu-
ment of cyberlibertarianism becoming cyberfascism.
Wilson’s project is so anti-government that the book reads like a
technology-enhanced version of The Turner Diaries. He promotes the uncheckable,
unregulated spread of deadly weapons with evident glee. His
bravado and his absolute lack of interest in the opinions of anyone who thinks
otherwise are remarkable. Wilson draws inspiration from many of the cyberlibertarian
touchstones: “peer-to-peer technology”; free and open source software,
including the words of Richard Stallman; encryption; Tor; the cypherpunks,
including Tim May; wiki software; “makers” and “hackers”; cryptocurrency;
and the reinterpretation of all governmental regulation of action, including
economic activity, as “censorship.” He coyly teases out the authoritarian-
ism implicit in much cyberlibertarian dogma: “Obviously democratization
is too important a task to let just anyone do it” (74).
Invoking both the syncretism in cyberlibertarian dogma and some of
the most troubling syncretic figures in digital technology, Wilson is linked
with Bitcoin promoter and self-described anarchist revolutionary Amir Taaki
(Bartlett 2014). Immersed in the extremes of libertarian, Second Amend-
ment culture that characterize militia movements, Waco, the Oklahoma
City bombing, sovereign citizens, and so on, Wilson is convinced that all
freedoms proceed exclusively from each citizen being armed to the teeth,
without regard to the bloodshed and terror (a word Wilson uses throughout
the book in a positive sense) that must ensue. He calls this regime “the high-
est ground of political realism” (9). Fueling that sentiment is the idea that
the plans for 3D gun printing are “open source software” and “WikiLeaks
for guns” (8)—that is, cyberlibertarianism made into fascist direct action.
despite its being largely parallel with statutes for wire and mail fraud that
are widely accepted. Such disingenuous attacks on the law—which tend to
read statutes in a literal way that has little in common with how lawyers
and judges actually interpret them—give rise to many social media posts
that claim users may “go to jail for violating the Terms of Service of web-
sites.” This scenario is clearly beyond the law’s intended scope, and no such
prosecution has ever happened. Nonetheless, such posts continue to generate significant anti-
government, pro-technology sentiment among promoters of digital tech-
nology. Indeed, CFAA has rarely been used; when it has been, it has almost always
been applied in a way that parallels long-standing wire and mail fraud statutes. Yet
Mike Masnick and others are not writing heated jeremiads about “going to
federal prison for putting the wrong address on a postcard” or lying over
the telephone, even though such scenarios are functionally identical to
those invoked about CFAA.
Auernheimer’s prosecution under CFAA brought out defenders from
across the range of digital advocacy, all of whom saw the case as
unjust and harmful to freedom. None of them, however, noted his politics.
These defenses reached their apotheosis in the New York Times in 2013
when philosopher Peter Ludlow—whose editing of the signal collections
High Noon on the Electronic Frontier (1996) and Crypto Anarchy, Cyberstates,
and Pirate Utopias (2001) displayed both knowledge of and some sympathy
for the most extreme cypherpunk politics—wrote a piece called “Hacktiv-
ists as Gadflies” defending Auernheimer. The grounds for Ludlow’s defense
of Auernheimer were not simply that CFAA was bad law, a point on which
experts can and do disagree, but that Auernheimer was a cultural hero com-
parable to Socrates. The piece begins by making the comparison explicit:
“Around 400 BC, Socrates was brought to trial on charges of corrupting
the youth of Athens and ‘impiety.’ Presumably, however, people believed
then as we do now, that Socrates’ real crime was being too clever and, not
insignificantly, a royal pain to those in power or, as Plato put it, a gadfly.
Just as a gadfly is an insect that could sting a horse and prod it into action,
so too could Socrates sting the state.” He goes on: “We have had gadflies
among us ever since, but one contemporary breed in particular has come in
for a rough time of late: the ‘hacktivist.’ While none have yet been forced to
drink hemlock, the state has come down on them with remarkable force.
This is in large measure evidence of how poignant, and troubling, their
message has been.” Auernheimer was convicted of stealing from people
for his own gain, something he openly bragged about doing in other cases
before and after. This had nothing to do with helping people or advancing
some kind of “wisdom.” To Ludlow, however, Auernheimer was an example
of “individuals who redeploy and repurpose technology for social causes”
who are “different from garden-variety hackers out to enrich only them-
selves.” Not only is the latter exactly what Auernheimer was trying to do,
but he had already demonstrated his commitment to hacking for the good
of Nazism and against the good of anyone else—none of which Ludlow sees
fit to mention.
It is difficult to determine what “Socratic wisdom” Ludlow and others
believe Auernheimer has to offer the world:
When the federal judge Susan Wigenton sentenced Weev on March 18, she
described him with prose that could have been lifted from the prosecutor
Meletus in Plato’s Apology. “You consider yourself a hero of sorts,” she said,
and noted that Weev’s “special skills” in computer coding called for a more
draconian sentence. I was reminded of a line from an essay written in 1986
by a hacker called the Mentor: “My crime is that of outsmarting you, some-
thing that you will never forgive me for.”
When offered the chance to speak, Weev, like Socrates, did not back
down: “I don’t come here today to ask for forgiveness. I’m here to tell this
court, if it has any foresight at all, that it should be thinking about what it
can do to make amends to me for the harm and the violence that has been
inflicted upon my life.” (Ludlow 2013)
The sentences Ludlow quotes certainly do not parallel what Socrates says
in Plato’s dialogues. On the contrary, they echo the typical protofascist, white
supremacist complaint that the person prosecuted for harming others is
actually the victim. The cause for which Auernheimer is said to be advocating
or even sacrificing himself is entirely unclear. Is CFAA an unjust
law? Is it in fact being used to target people who don’t deserve it? Are they
unable to defend themselves in court? None of these questions is even
open for discussion. CFAA has been used so infrequently that there have
been few prosecutions, and even fewer convictions. This fact raises questions
about how grave an offense to democracy and freedom it can really be.
Auernheimer’s conviction was overturned on appeal, but not because
the CFAA was found to be an inherent violation of important rights.
In fact the CFAA issue wasn’t part of the final adjudication, which turned
instead on the question of whether Auernheimer had been prosecuted in the
correct venue (Kravets 2014). Just as troubling, many individuals and groups
associated with digital rights came to Auernheimer’s defense. UC Berkeley
law professor Orin Kerr, renowned technology lawyer Tor Ekeland (Kerr
2013), and EFF (Cushing 2013) all chose to represent Auernheimer; Kerr and
EFF donated their services pro bono. They were all convinced that CFAA
was not just an overbroad law but a deep offense to freedom and civil
rights. None of them, to my knowledge, explained how CFAA differs from
the parallel wire and mail fraud statutes. None noted that Auernheimer’s
motive almost certainly was fraud, regardless of whether he committed a
technical violation of CFAA as it was intended to function. Worst of all, none of
them remarked on Auernheimer’s politics, which in his own opinion drove
both his specific actions in this case and his overall “activism.” Unlike the
ACLU in its infamous defense of a Nazi march in Skokie, Illinois, none of
Auernheimer’s defenders found it necessary to mention or explain their support
for a Nazi defendant promoting Nazi ideas.
Before his sentencing in 2013, Auernheimer gave a public speech on the
courthouse steps that raised even more questions about his serving as an
avatar of suppressed, Socratic insight:
So, I stand outside this courthouse today, and I feel like America is in a cul-
tural decline. That, I look around the kind of pace, and the kind of people,
that we’ve had in the past 50 years, and it doesn’t match the 50 years previ-
ous. I feel like, I feel like [laughs], there’s something wrong. And in my
country there’s a problem. And that problem is the Feds. They take every-
body’s freedom, and they never give it back. And if you go, if you go to
Georgia, and you have a staph infection, they can have a bacteriophage that
they genetically engineer [to] eat your staph. Like, no joke. Whereas here
they’re like, we’re gonna cut your arm off, or flood you with antibiotics until
you die. Like there, they can have a treatment that’s known to be the best in
the world, because their FDA doesn’t define each individual bacteriophage
as a new treatment that has to go through clinical trials. If you want to put
a drone in the air, how many commercial applications of drones are there?
There’s a shit-ton. If you want to put a drone in the air and have it speak
TCAS, the Traffic Collision Avoidance System, you just can’t do that. There’s
no licensing path for the FDA, for the FAA, to do this. You’re not allowed to
innovate. Stop thinking outside the box, Western man.
I feel [laughs], I feel like, you know, we could have laptop batteries, that
last a hundred fucking years. Fuckin . . . with betavoltaics. And we can’t have
this, because the NRC says no. (Crook 2013)
Epilogue

It is conventional to include in any work critical of the computation
regime sentiments suggesting that the author is “not a Luddite,”
“does not hate computers,” does not want to “stand in the way
of innovation,” and so on. These demands are themselves marks of cyberlibertarianism,
not least because they seem to require a kind of obeisance
to the purportedly beneficial social effects of digitization, effects that should
instead remain open to discussion and demonstration. The assertion that computers are obvi-
ously good for society and democracy is often used to shut down questions
about them, which is a species of the faux “democratization” that struc-
tures cyberlibertarianism.
Technology promoters use frames like “techlash” to disqualify criticism.
They insist that the world is full of unthinking rejection of digital technol-
ogy. To even participate in the conversation, one must acknowledge that
even if there are flaws, digital technology is a force for good. Further, critics
are always asked to explain how we can resuscitate the
“dream” of an internet that benefits everyone.
All these perspectives put digital technology before the political, the social,
and the human. They ask us to care about digital technology as a primary
concern. This is the work of cyberlibertarianism. Neither democracies nor
human beings concerned with overall well-being need to make the prolif-
eration of digital technology a fundamental priority. Sometimes media and
technology serve those fundamental priorities. Many times they don’t. Act-
ing as if they are indispensable is not just misguided, but can advance a very
different politics from those we hope for. Thinkers with wide experience
have understood from the beginning that computers, despite their benefits,
rejects the arguments for “permissionless innovation,” arguing that for some
class of technologies licensure is the correct approach.
These orientations have a great deal in common. In general, they recommend
that democracies have not only the right but also the responsibility
to determine whether certain technologies (construed broadly) are com-
patible with healthy and democratic societies. Many modern democracies
limit the advertisement of addictive products with negative health conse-
quences, such as tobacco and alcohol. Despite the oft-noted failures of U.S.
alcohol prohibition in the 1920s, more recent efforts to contain and mini-
mize the damage caused by tobacco have proven successful and embody
principles that many supporters of democracy see as central. Other tech-
nologies that are more immediately dangerous have been either outlawed
altogether or restricted in use only to licensed professionals. The most obvi-
ous examples are technologies that involve radioactive materials, but many
other technologies are restricted as well. In many cases, especially those of
addictive substances, industry has been remarkably adroit at pushing back
on every regulation democracies offer, changing aspects of their products
to get around regulations, and knowingly engaging in activity that is nearly
as harmful. For example, Big Tobacco shifted to vaping products when the
number of smokers started to fall. These companies generate the appear-
ance of popular support for deregulatory policies, and even genuine popu-
lar support by product users who may not think about, or at least may not
care about, the fact that their “grassroots” support for destructive products
is being solicited and manipulated by industrial actors whose bad faith and
propensity to exploit have long been demonstrated beyond question.
The European Union is currently the world leader in standing up to
digital technology companies and even nonindustrial digital technology
promoters. It is able to make clear to the public when technology propo-
nents’ stories about the dangers of regulation fail to match reality. Despite
the introduction of regulations, digital technology has not fallen into disuse
in the EU, and member nations have risen in international rankings of
democracy.
We remain at the beginning of an international regime of strict regula-
tion of digital technology, which we hope will prioritize democracy. This
regime should not only rule out certain particularly destructive practices but
also lay out clear principles to which products must adhere before being
introduced: principles that regulators, legislators, and jurists, along with
citizens, determine are compatible with democracy, much as medical and
food regulators set standards for safety and efficacy that every product
must satisfy. This may not
mean they must meet the “do no harm” standard associated with medical
services, although it is worth noting that medical standards in fact partake
of a complex cost–benefit calculation that is likely to have analogues in digi-
tal technology regulation.
It seems clear that technologies that enable the collection and analysis
of information about people and their actions must be high on the list of
candidates for regulation. A question that must be addressed is whether
nongovernmental actors should have the ability to create “public squares”
without more detailed accounts of their relationship to democratic speech
laws and human rights. Despite its manifest benefits and pleasures, the privately
operated “public square” is one technology that strikes me as a likely candidate for abolition or some-
thing close to it. Most forms of biometric data collection, and the post-
processing of that data, seem like candidates for abolition as well. This would
likely include, to the disappointment of their many fans, visual and audi-
tory surveillance systems and voice-activated devices of many sorts. Regula
tors face the challenge of determining whether the contexts in which audio
devices are the only safe or useful alternatives to attain certain goals (such
as for people who cannot see or use a keyboard or in moving vehicles) can
be effectively isolated from general use. Additionally, regulators must deter-
mine whether such contexts can still provide enough data for the products
to function effectively. These are exactly the kinds of questions that regula-
tors will have to face, as they do every day for many other points of law,
consumer products, and technology.
Works Cited
Abbate, Janet. 1999. Inventing the Internet. Cambridge, Mass.: MIT Press.
Abegg, Lukas. 2016. “Code Is Law? Not Quite Yet.” CoinDesk, Aug. 27. https://www
.coindesk.com/markets/2016/08/27/code-is-law-not-quite-yet/.
Abella, Alex. 2008. Soldiers of Reason: The RAND Corporation and the Rise of American
Empire. New York: Houghton Mifflin.
Abelson, Hal, Ross Anderson, Steven M. Bellovin, Josh Benaloh, Matt Blaze, Whitfield
Diffie, John Gilmore, Peter G. Neumann, Ronald L. Rivest, Jeffrey I. Schiller, and
Bruce Schneier. 1997. “The Risks of Key Recovery, Key Escrow, and Trusted Third-
Party Encryption.” Massachusetts Institute of Technology. https://academiccommons
.columbia.edu/doi/10.7916/D8GM8F2W.
Abelson, Hal, Ross Anderson, Steven M. Bellovin, Josh Benaloh, Matt Blaze, Whit-
field Diffie, John Gilmore, Matthew Green, Susan Landau, Peter G. Neumann,
Ronald L. Rivest, Jeffrey I. Schiller, Bruce Schneier, Michael Spencer, and Daniel J.
Weitzner. 2015. “Keys under Doormats: Mandating Insecurity by Requiring Gov-
ernment Access to All Data and Communications.” MIT Computer Science and
Artificial Intelligence Laboratory. https://dspace.mit.edu/handle/1721.1/97690.
“About the EFF.” 2022. Electronic Frontier Foundation. Last modified Nov. 11. https://
www.eff.org/about-eff.
“About Free Press.” 2024. Free Press. Last modified Feb. 16. https://www.freepress.net/
about.
“About the Internet Observatory.” 2024. Stanford Internet Observatory Cyber Policy
Research Center. Last modified Feb. 20. https://cyber.fsi.stanford.edu/io/about.
“About OTI.” 2024. New America Foundation Open Technology Institute. Last modified Feb. 13. https://www.newamerica.org/oti/about/.
“About Tor: History.” n.d. Tor Project. Last modified Mar. 19, 2024. https://www.tor
project.org/about/history/.
Achenbach, Joel. 2019. “Two Mass Killings a World Apart Share a Common Theme:
‘Ecofascism.’” Washington Post, Aug. 18. https://www.washingtonpost.com/science/
two-mass-murders-a-world-apart-share-a-common-theme-ecofascism/2019/08/18/
0079a676-bec4-11e9-b873-63ace636af08_story.html.
Guide to the ‘Techlash’: What It Is and Why It’s a Threat to Growth and Progress.”
Information Technology and Innovation Foundation, Oct. 28. https://itif.org/pub
lications/2019/10/28/policymakers-guide-techlash/.
Ausloos, Jef. 2020. The Right to Erasure in EU Data Protection Law. New York: Oxford
University Press.
Badii, Farzaneh. 2021. “The Christchurch Call: Are We Multistakeholder Yet?” Digital
Medusa, Nov. 14. https://digitalmedusa.org/the-christchurch-call-are-we-multistake
holder-yet/.
Baker, Kelly J. 2016. “Nice, Decent Folks.” KellyJBaker.com, Nov. 17. https://www
.kellyjbaker.com/nice-decent-folks/.
Ball, James. 2011. “Israel Shamir and Julian Assange’s Cult of Machismo.” The Guard-
ian, Nov. 8. https://www.theguardian.com/commentisfree/cifamerica/2011/nov/08/
israel-shamir-julian-assange-cult-machismo.
Bamford, James. 2014. “Edward Snowden: The Untold Story.” Wired, Aug. https://
www.wired.com/2014/08/edward-snowden/.
“Ban Facial Recognition.” 2020. Fight for the Future project website. Last modified
Mar. 20, 2024. https://www.banfacialrecognition.com/.
Banker, Elizabeth. 2020. “A Review of Section 230’s Meaning and Application Based
on More Than 500 Cases.” Internet Association, Jul. https://web.archive.org/
web/20211217114027/https://internetassociation.org/wp-content/uploads/2020/07/
IA_Review-Of-Section-230.pdf.
Barabas, Chelsea, Neha Narula, and Ethan Zuckerman. 2017. “Defending Internet
Freedom through Decentralization: Back to the Future?” MIT Media Lab, Aug.
https://static1.squarespace.com/static/59aae5e9a803bb10bedeb03e/t/59ae908a46c3c
480db42326f/1504612494894/decentralized_web.pdf.
Baran, Paul. 1962. “On Distributed Communications Networks.” RAND Corpora-
tion. https://doi.org/10.7249/P2626.
Barbrook, Richard. 1998. “The Hi-Tech Gift Economy.” First Monday 3:12 (Dec. 7).
https://doi.org/10.5210/fm.v3i12.631.
Barbrook, Richard. 2000. “Cyber-Communism: How the Americans Are Superseding
Capitalism in Cyberspace.” Science as Culture 9:1 (Mar.): 5–40.
Barbrook, Richard. 2005. “The Hi-Tech Gift Economy: Special Issue Update.” First
Monday S3 (Dec. 5). https://firstmonday.org/ojs/index.php/fm/article/view/631/552.
Barbrook, Richard, and Andy Cameron. 1995. “The Californian Ideology.” Mute 3
(Sep.) https://www.metamute.org/editorial/articles/californian-ideology.
Barbrook, Richard, and Andy Cameron. 1996. “The Californian Ideology.” Science as
Culture 6:1 (Jan.): 44–72. [Expanded version of Barbrook and Cameron 1995.]
Barlow, John Perry. 1996. “A Declaration of the Independence of Cyberspace.” In Lud-
low 2001, 27–30.
Baron, Sabrina Alcorn, Eric N. Lundquist, and Eleanor F. Shevlin, eds. 2007. Agent of
Change: Print Culture Studies after Elizabeth L. Eisenstein. Amherst: University of
Massachusetts Press.
Bartlett, Jamie. 2014. The Dark Net: Inside the Digital Underworld. London: William
Heinemann.
Bartlett, Jamie, dir. 2017. Secrets of Silicon Valley. Two-part documentary film. London:
BBC Two. https://www.bbc.co.uk/programmes/b0916ghq.
Bates, Stephen. 2012. “Lord Rees-Mogg Obituary.” The Guardian, Dec. 29. https://
www.theguardian.com/media/2012/dec/29/william-rees-mogg-obituary.
Baur, Dorothea. 2020. “No, We Don’t Want to ‘Democratize’ AI.” Medium, Dec. 4.
https://dorotheabaur.medium.com/no-we-dont-want-to-democratize-ai-c5b481d
b5afa.
Beadon, Leigh. 2012. “Game of Thrones on Track to Be Most Pirated Show of 2012;
Pirates Still Asking HBO for Legitimate Options.” TechDirt, May 11. https://www
.techdirt.com/2012/05/11/game-thrones-track-to-be-most-pirated-show-2012-pirates
-still-asking-hbo-legitimate-options/.
Beckett, Andy. 2018. “How to Explain Jacob Rees-Mogg? Start with His Father’s
Books.” The Guardian, Nov. 9. https://www.theguardian.com/books/2018/nov/09/
mystic-mogg-jacob-rees-mogg-willam-predicts-brexit-plans.
Belcher, Marta. 2022. “Tucked Inside Biden Infrastructure Bill: Unconstitutional
Crypto Surveillance.” CoinDesk, Jan. 25. https://www.coindesk.com/layer2/2022/01/
25/tucked-inside-biden-infrastructure-bill-unconstitutional-crypto-surveillance/.
Belew, Kathleen. 2019. Bring the War Home: The White Power Movement and Paramili-
tary America. Cambridge, Mass.: Harvard University Press.
Bell, Jim. 1997. “Assassination Politics.” Self-pub., Apr. 3. http://jya.com/ap.htm.
Beltramini, Enrico. 2021. “Against Technocratic Authoritarianism. A Short Intellectual
History of the Cypherpunk Movement.” Internet Histories (2): 101–18. https://doi
.org/10.1080/24701475.2020.1731249.
Benjamin, Ruha. 2019. Race after Technology: Abolitionist Tools for the New Jim Code.
Boston: Polity.
Benkler, Yochai. 2007. The Wealth of Networks: How Social Production Transforms Mar-
kets and Freedom. New Haven, Conn.: Yale University Press.
Benson, Chris. 2022. “Whistleblower Snowden Visits Bucknell Virtually.” Daily Item,
Feb. 23. https://www.dailyitem.com/news/update-whistleblower-snowden-visits-buck
nell-virtually/article_1a6d0780-945a-11ec-80d5-efec40e5fc01.html.
Beran, Dale. 2019. It Came from Something Awful: How a Toxic Troll Army Accidentally
Memed Donald Trump into Office. New York: All Points Books.
Berlet, Chip. (1990) 1999. “Right Woos Left.” Political Research Associates. https://
politicalresearch.org/1999/02/27/right-woos-left.
Berlet, Chip. 2009. Toxic to Democracy: Conspiracy Theories, Demonization and Scape-
goating. Rev. ed. Somerville, Mass.: Political Research Associates.
Berlet, Chip. 2016. “What Is the Third Position?” Political Research Associates, Dec. 19.
https://politicalresearch.org/2016/12/19/what-third-position.
Berlet, Chip, and Matthew N. Lyons. 2000. Right-Wing Populism in America: Too Close
for Comfort. New York: Guilford Press.
Berners-Lee, Tim. 1999. Weaving the Web: The Original Design and Ultimate Destiny of
the World Wide Web by Its Inventor. New York: HarperCollins.
Berners-Lee, Tim. 2017. “Act Now to Save the Internet as We Know It.” Medium, Dec. 12.
https://medium.com/@timberners_lee/act-now-to-save-the-internet-as-we-know-it
-ccf47ce8b39f.
Bernholz, Lucy, Hélène Landemore, and Rob Reich, eds. 2021. Digital Technology and
Democratic Theory. Chicago: University of Chicago Press.
Bertrand, Natasha, and Daniel Lippman. 2019. “Inside Mark Zuckerberg’s Private Meet-
ings with Conservative Pundits.” Politico, Oct. 14. https://www.politico.com/news/
2019/10/14/facebook-zuckerberg-conservatives-private-meetings-046663.
Bhuiyan, Johana. 2014. “Marcy Wheeler Leaves The Intercept.” Politico, May 16. https://
www.politico.com/media/story/2014/05/marcy-wheeler-leaves-the-intercept-00
2247/.
Bilton, Nick, and Jenna Wortham. 2011. “Two Are Charged with Fraud in iPad Secu-
rity Breach.” New York Times, Jan. 18. https://www.nytimes.com/2011/01/19/technol
ogy/19ipad.html.
Binder, Matt. 2021. “Zuckerberg Feared Facebook’s Conservative Users, So They
Received Special Treatment.” Mashable, Feb. 22. https://mashable.com/article/face
book-mark-zuckerberg-conservative-pages.
Birkerts, Sven. 2006. The Gutenberg Elegies: The Fate of Reading in an Electronic Age.
New York: Farrar, Straus and Giroux.
Bittner, Jochen. 2016. “How Julian Assange Is Destroying WikiLeaks.” New York Times,
Feb. 17. https://www.nytimes.com/2016/02/08/opinion/how-julian-assange-is-destroy
ing-wikileaks.html.
Blow, Charles. 2021. “Tucker Carlson and White Replacement.” New York Times, Apr. 11.
https://www.nytimes.com/2021/04/11/opinion/tucker-carlson-white-replacement
.html.
Blue, Violet. 2017. How to Be a Digital Revolutionary. San Francisco: Digita Publications.
Bocher, Robert. 2018. “ALA Net Neutrality FAQ.” American Library Association, Apr.
https://www.ala.org/advocacy/sites/ala.org.advocacy/files/content/telecom/netneu
trality/ALA%20Network%20Neutrality%20FAQ.pdf.
Boczkowski, Pablo J., and Zizi Papacharissi, eds. 2018. Trump and the Media. Cam-
bridge, Mass.: MIT Press.
Bode, Karl. 2019. “Killing Net Neutrality Was Even Worse Than You Think.” OneZero,
Nov. 20. https://onezero.medium.com/killing-net-neutrality-was-even-worse-than
-you-think-132a21aab55a.
Bonilla-Silva, Eduardo. 2017. Racism without Racists: Color-Blind Racism and the Persistence of Racial Inequality in America. 5th ed. Lanham, Md.: Rowman & Littlefield.
Borgman, Christine. 2000. From Gutenberg to the Global Information Infrastructure.
Cambridge, Mass.: MIT Press.
Borsook, Paulina. 2000. Cyberselfish: A Critical Romp through the Terribly Libertarian
Culture of High Tech. New York: PublicAffairs.
boyd, danah. 2014. It’s Complicated: The Social Lives of Networked Teens. New Haven,
Conn.: Yale University Press.
boyd, danah. 2019. “Facing the Great Reckoning Head-On.” Medium, Sep. 13. https://
onezero.medium.com/facing-the-great-reckoning-head-on-8fe434e10630.
boyd, danah. 2022. “Crisis Text Line, from My Perspective.” Apophenia, Jan. 31. https://
www.zephoria.org/thoughts/archives/2022/01/31/crisis-text-line-from-my-perspec
tive.html.
Boyle, James. 2008. The Public Domain: Enclosing the Commons of the Mind. New
Haven, Conn.: Yale University Press.
Bracha, Oren, and Frank Pasquale. 2008. “Federal Search Commission: Access, Fair-
ness, and Accountability in the Law of Search.” Cornell Law Review 93(6) (Sep.):
1149–1210. https://scholarship.law.cornell.edu/clr/vol93/iss6/11/.
Brand, Stewart. 1987. The Media Lab: Inventing the Future at MIT. New York: Viking
Penguin.
Brandom, Russell. 2017. “We Have Abandoned Every Principle of the Free and Open
Internet.” The Verge, Dec. 19. https://www.theverge.com/2017/12/19/16792306/fcc-net
-neutrality-open-internet-history-free-speech-anonymity.
Brandom, Russell, Alex Heath, and Adi Robertson. 2021. “Eight Things We Learned
from the Facebook Papers.” The Verge, Oct. 25. https://www.theverge.com/22740969/
facebook-files-papers-frances-haugen-whistleblower-civic-integrity.
Branton, John. 1997. “Activist Bell Faces Sentencing Friday.” The Columbian (Vancou-
ver, Wash.), Nov. 20. https://cryptome.org/jdb/jimbell6.htm.
Brockwell, Naomi. 2022. “Edward Snowden Played Key Role in Zcash Privacy Coin’s
Creation.” CoinDesk, Apr. 27. https://www.coindesk.com/tech/2022/04/27/edward
-snowden-played-key-role-in-zcash-privacy-coins-creation/.
Brodwin, David. 2015. “The Chamber’s Secrets.” U.S. News & World Report, Oct. 22.
https://www.usnews.com/opinion/economic-intelligence/2015/10/22/who-does
-the-us-chamber-of-commerce-really-represent.
Brooke, Heather. 2011. The Revolution Will Be Digitised. London: William Heinemann /
Windmill Books.
Brooker, Katrina. 2018. “‘I Was Devastated’: Tim Berners-Lee, the Man Who Created
the World Wide Web, Has Some Regrets.” Vanity Fair, Jul. 1. https://www.vanity
fair.com/news/2018/07/the-man-who-created-the-world-wide-web-has-some
-regrets.
Browne, Simone. 2015. Dark Matters: On the Surveillance of Blackness. Durham, N.C.:
Duke University Press.
Budowsky, Brent. 2011. “Ron Paul and Occupy Wall Street Can Change the World
Together.” The Hill, Nov. 7. https://thehill.com/blogs/pundits-blog/economy-a-bud
get/177914-ron-paul-and-occupy-wall-street-can-change-the-world-together/.
Burns, Alexander, and Maggie Haberman. 2013. “2013: Year of the Liberal Billionaire.”
Politico, Nov. 1. https://www.politico.com/story/2013/11/liberal-billionaires-fundrais
ing-2013-elections-tom-steyer-mark-zuckerberg-michael-bloomberg-099207.
But, Jason. 2012. “Facebook Welcomes Hackers, If They Wear a White Hat.” The Con-
versation, May 9. https://theconversation.com/facebook-welcomes-hackers-if-they
-wear-a-white-hat-6892.
Buterin, Vitalik. 2017. “The Meaning of Decentralization.” Medium, Feb. 6. https://
medium.com/@VitalikButerin/the-meaning-of-decentralization-a0c92b76a274.
Byers, Dylan, and Claire Atkinson. 2020. “‘Same Old Defense’: Civil Rights Groups
Hammer Facebook after Meeting.” NBC News, Jul. 7. https://www.nbcnews.com/
tech/tech-news/same-old-defense-civil-rights-groups-hammer-facebook-after-meet
ing-n1233114.
Cadwalladr, Carole. 2017. “Robert Mercer: The Big Data Billionaire Waging War on
Mainstream Media.” The Guardian, Feb. 26. https://www.theguardian.com/politics/
2017/feb/26/robert-mercer-breitbart-war-on-media-steve-bannon-donald-trump
-nigel-farage.
Cadwalladr, Carole. 2019. “Facebook’s Role in Brexit—and the Threat to Democracy.”
TED, Apr. https://www.ted.com/talks/carole_cadwalladr_facebook_s_role_in_brexit
_and_the_threat_to_democracy.
Cameron, Euan. 2012. The European Reformation. 2nd ed. New York: Oxford University
Press.
Carey, Robert F., and Jacquelyn A. Burkell. 2007. “Revisiting the Four Horsemen of
the Infopocalypse: Representations of Anonymity and the Internet in Canadian
Newspapers.” First Monday 12:8 (Aug. 6). https://firstmonday.org/ojs/index.php/
fm/article/view/1999/1874.
Carolan, Jennifer. 2018. “Empathy Technologies Like VR, AR, and Social Media Can
Transform Education.” TechCrunch, Apr. 22. https://techcrunch.com/2018/04/22/
empathy-technologies-like-vr-ar-and-social-media-can-transform-education/.
Carr, Nicholas. 2010. The Shallows: What the Internet Is Doing to Our Brains. New
York: Norton.
Carr, Nicholas. 2018. “Can Journalism Be Saved?” Los Angeles Review of Books, Mar. 27.
https://lareviewofbooks.org/article/can-journalism-be-saved/.
Carrico, Dale. 2005. “Pancryptics: Technocultural Transformations of the Subject of Privacy.” PhD diss. University of California–Berkeley. https://amormundi.blog
spot.com/2012/07/pancryptics-my-dissertation-online.html.
Carrico, Dale. 2013. “Futurological Discourses and Posthuman Terrains.” Existenz 8(2)
(Fall): 47–63. https://existenz.us/volumes/Vol.8-2Carrico.pdf.
Carusone, Angelo. 2020. “Facebook and Twitter Don’t Censor Conservatives: They
Hire and Promote Them.” NBC News, Oct. 29. https://www.nbcnews.com/think/
opinion/facebook-twitter-don-t-censor-conservatives-they-hire-promote-them-ncna
1245308.
Casey, Michael J. 2019. “The Crypto-Surveillance Capitalism Connection.” CoinDesk,
Feb. 4. https://www.coindesk.com/markets/2019/02/04/the-crypto-surveillance-capi
talism-connection/.
Cassidy, John. 2013. “Apple’s Tax Dodges: Where’s the Public Outrage?” New Yorker, May 21.
https://www.newyorker.com/news/john-cassidy/apples-tax-dodges-wheres-the-pub
lic-outrage.
Cerf, Vinton G. 2012. “Keep the Internet Open.” New York Times, May 24. https://
www.nytimes.com/2012/05/25/opinion/keep-the-internet-open.html.
Charles Koch Foundation. 2020. “New Stanford Program Will Explore Implications
of Making Companies Responsible for Users’ Online Speech.” Press release, Sep. 9.
https://charleskochfoundation.org/news/new-stanford-program-will-explore-impli
cations-of-making-companies-responsible-for-users-online-speech/.
Chenou, Jean-Marie. 2014. “From Cyber-Libertarianism to Neoliberalism: Internet
Exceptionalism, Multi-Stakeholderism, and the Institutionalization of Internet
Governance in the 1990s.” Globalizations 11:2: 205–23. https://doi.org/10.1080/1474
7731.2014.887387.
Davidson, James Dale, and Lord William Rees-Mogg. 1997. The Sovereign Individual:
Mastering the Transition to the Information Age. New York: Simon & Schuster.
Davies, Harry, Simon Goodley, Felicity Lawrence, Paul Lewis, and Lisa O’Carroll.
2022. “Uber Broke Laws, Duped Police and Secretly Lobbied Governments, Leak
Reveals.” The Guardian, Jul. 11. https://www.theguardian.com/news/2022/jul/10/
uber-files-leak-reveals-global-lobbying-campaign.
DeChiaro, Dean. 2019. “‘A Real Gift to Big Tech’: Both Parties Object to Immunity
Provision in USMCA.” Roll Call, Dec. 17. https://rollcall.com/2019/12/17/a-real-gift
-to-big-tech-both-parties-object-to-immunity-provision-in-usmca/.
De Filippi, Primavera, and Aaron Wright. 2018. Blockchain and the Law: The Rule of
Code. Cambridge, Mass.: Harvard University Press.
“Definitions of Fascism.” 2022. Wikipedia. Last modified Jul. 18. https://en.wikipedia
.org/w/index.php?title=Definitions_of_fascism&oldid=1099091105.
Delgado, Richard, and Jean Stefancic. 2018. Must We Defend Nazis? Why the First
Amendment Should Not Protect Hate Speech and White Supremacy. New York: NYU
Press.
“Democratization.” 2021. Wikipedia. Last modified Jul. 17. https://en.wikipedia.org/w/
index.php?title=Democratization&oldid=1034086921.
DeNardis, Laura. 2015. The Global War for Internet Governance. New Haven, Conn.:
Yale University Press.
Denning, Dorothy. 1996. “The Future of Cryptography.” In Ludlow 2001, 85–101.
Destination Hub. 2022a. “‘I Tried to Warn You’ it’s already here—Edward Snowden
2022.” YouTube video, Feb. 9. https://youtube.com/watch?v=n1NEFwY9TiQ. [Video
since deleted.]
Destination Hub. 2022b. “‘it’s too late!! ‘This is the Secret They Are Hiding from
You’ | Edward Snowden 2022.” YouTube video, Feb. 20. https://www.youtube.com/
watch?v=rT6dS82kGuA. [Video since deleted.]
Destination Hub. 2022c. “This Is Getting Serious, Why Is Nobody Talking about
This? Edward Snowden 2022.” YouTube video, Apr. 23. https://www.youtube.com/
watch?v=LQQeu0c1lSY. [Video since deleted.]
Dewar, James A. 1998. The Information Age and the Printing Press: Looking Backward to
See Ahead. Los Angeles: RAND Corporation.
Dewar, James A., and Peng Hwa Ang. 2007. “The Cultural Consequences of Printing
and the Internet.” In Baron, Lundquist, and Shevlin 2007, 365–77.
Dingledine, Roger [arma]. 2014. “Possible Upcoming Attempts to Disable the Tor
Network.” Tor Blog, Dec. 19. https://blog.torproject.org/possible-upcoming-attempts
-disable-tor-network/.
Diringer, David. 1982. The Book before Printing: Ancient, Medieval, and Oriental. New
York: Dover.
Doctorow, Cory. 2014. “Crypto Wars Redux: Why the FBI’s Desire to Unlock Your
Private Life Must Be Resisted.” The Guardian, Oct. 9. https://www.theguardian.com/
technology/2014/oct/09/crypto-wars-redux-why-the-fbis-desire-to-unlock-your-pri
vate-life-must-be-resisted.
Doctorow, Cory. 2015. Information Doesn’t Want to Be Free: Laws for the Internet Age.
New York: McSweeney’s.
Doctorow, Cory. 2020. How to Destroy Surveillance Capitalism. San Francisco: Medium
Editions. https://onezero.medium.com/how-to-destroy-surveillance-capitalism-8135
e6744d59.
Doctorow, Cory, and Christoph Schmon. 2020. “The EU’s Digital Markets Act: There
Is a Lot to Like, but Room for Improvement.” Electronic Frontier Foundation, Dec.
15. https://www.eff.org/deeplinks/2020/12/eus-digital-markets-act-there-lot-room
-improvement.
Domscheit-Berg, Daniel. 2011. Inside WikiLeaks: My Time with Julian Assange at the
World’s Most Dangerous Website. New York: Crown.
Dougherty, Michael Brendan. 2017. “The Libertarianism-to-Fascism Pipeline.” National
Review, Aug. 24. https://www.nationalreview.com/2017/08/libertarians-sometimes
-become-fascists-heres-why/.
Dourado, Eli. 2013. “Making Airspace Available for ‘Permissionless Innovation.’”
Technology Liberation Front, Apr. 23. https://techliberation.com/2013/04/23/making
-airspace-available-for-permissionless-innovation/.
Dowd, Trone. 2020. “Snowden Warns Governments Are Using Coronavirus to Build
‘the Architecture of Oppression.’” Vice, Apr. 9. https://www.vice.com/en/article/
bvge5q/snowden-warns-governments-are-using-coronavirus-to-build-the-architec
ture-of-oppression.
Driscoll, Kevin. 2011. “Net Neutrality Research Database, 0.1.” Aug. 9. https://kevin
driscoll.org/projects/netneutrality/nn.html.
Duffy, Clare. 2020. “Facebook VP on Ad Boycott: We Have ‘No Incentive’ to Allow
Hate Speech.” CNN, Jun. 28. https://www.cnn.com/2020/06/28/tech/nick-clegg
-facebook-boycott-reliable/index.html.
Dulong de Rosnay, Melanie, and Francesca Musiani. 2016. “Towards a (De)centralization-
Based Typology of Peer Production.” tripleC 14(1): 189–207. https://doi.org/10.31269/
triplec.v14i1.728.
Dwyer, Colin. 2020. “Authors, Publishers Condemn the ‘National Emergency Library’
as ‘Piracy.’” National Public Radio, Mar. 30. https://www.npr.org/2020/03/30/8237
97545/authors-publishers-condemn-the-national-emergency-library-as-piracy.
Dyson, Esther. 1997. Release 2.0: A Design for Living in the Digital Age. New York:
Broadway.
Dyson, Esther, George Gilder, George Keyworth, and Alvin Toffler. 1994. “Cyberspace
and the American Dream: A Magna Carta for the Knowledge Age.” Future Insight,
Aug. http://www.pff.org/issues-pubs/futureinsights/fi1.2magnacarta.html.
Eco, Umberto. 1995. “Ur-Fascism.” New York Review of Books, Jun. 22. https://www
.nybooks.com/articles/1995/06/22/ur-fascism/.
Edelson, Laura, Minh-Kha Nguyen, Ian Goldstein, Oana Goga, Tobias Lauinger, and
Damon McCoy. 2021. “Far-Right News Sources on Facebook More Engaging.”
Medium, Mar. 3. https://medium.com/cybersecurity-for-democracy/far-right-news
-sources-on-facebook-more-engaging-e04a01efae90.
“Edward Snowden Explains Why He Doesn’t Talk about Ukraine Crisis.” 2022. CoinDesk, Jun. 11. https://www.coindesk.com/video/edward-snowden-explains-why-he-doesnt-talk-about-ukraine-crisis/.
Edwards, Paul N. 1997. The Closed World: Computers and the Politics of Discourse in
Cold War America. Cambridge, Mass.: MIT Press.
e-flux. 2018. “Why Is Nick Land Still Embraced by Segments of the British Art and
Theory Scenes?” e-flux Conversations, Mar. https://conversations.e-flux.com/t/why
-is-nick-land-still-embraced-by-segments-of-the-british-art-and-theory-scenes/6329.
Eichenwald, Kurt. 2013. “The Errors of Edward Snowden and His Global Hypocrisy
Tour.” Vanity Fair, Jun. 26. https://www.vanityfair.com/news/2013/06/errors-edward
-snowden-global-hypocrisy-tour.
Eisenstat, Yaël, and Nils Gilman. 2022. “The Myth of Tech Exceptionalism.” Noēma,
Feb. 10. https://www.noemamag.com/the-myth-of-tech-exceptionalism/.
Eisenstein, Elizabeth L. 1979. The Printing Press as an Agent of Change: Communica-
tions and Cultural Transformations in Early-Modern Europe. New York: Cambridge
University Press.
Eisenstein, Elizabeth L. 2005. The Printing Revolution in Early Modern Europe. 2nd ed.
New York: Cambridge University Press.
Electronic Frontier Foundation. n.d. “Net Neutrality.” Last modified Mar. 3, 2024.
https://www.eff.org/issues/net-neutrality.
Electronic Frontier Foundation. 2011. Annual Report, 2009–2010. https://www.eff.org/
files/eff-2009-2010-annual-report.pdf.
“Eliezer Yudkowsky.” 2021. RationalWiki. Last modified Jun. 22. https://rationalwiki
.org/w/index.php?title=LessWrong&oldid=2336968.
Emerson, Lori. 2016. “Selling the Future at the MIT Media Lab.” LoriEmerson.net,
Feb. 17. https://loriemerson.net/2016/02/17/selling-the-future-at-the-mit-media-lab/.
Enlund, Martin. 2022. “On Gutenberg, Satoshi and Polarization.” metaperspectiv, Jan.
6. https://enlund.org/en/posts/gutenberg/.
Eubanks, Virginia. 2011. Digital Dead End: Fighting for Social Justice in the Information
Age. Cambridge, Mass.: MIT Press.
Eye Opener. 2022. “‘If You Knew What’s Coming, You Would Get Prepared Now’—
Edward Snowden (2022).” YouTube video, May 23. https://www.youtube.com/
watch?v=Uj8Gwq-9dE0. [Video since deleted.]
Fabbri, Alice, Alexandra Lai, Quinn Grundy, and Lisa Anne Bero. 2018. “The Influ-
ence of Industry Sponsorship on the Research Agenda: A Scoping Review.” American
Journal of Public Health 108:11 (Nov. 1). https://doi.org/10.2105/AJPH.2018.304677.
Falkvinge, Rick. 2015. “Language Matters: All the Copyright Lobby’s Subtleties.” Tor-
rentFreak, Oct. 18. https://torrentfreak.com/language-matters-all-the-copyright-lob
bys-subtleties-151018/.
Farrow, Ronan. 2019. “How an Élite University Research Center Concealed Its Rela-
tionship with Jeffrey Epstein.” New Yorker, Sep. 6. https://www.newyorker.com/
news/news-desk/how-an-elite-university-research-center-concealed-its-relationship
-with-jeffrey-epstein.
FCC. 2015. “FCC Releases Open Internet Order.” Mar. 12. https://www.fcc.gov/docu
ment/fcc-releases-open-internet-order.
Feiner, Lauren. 2019. “Pelosi Pushes to Keep Tech’s Legal Shield Out of Trade Agree-
ment with Mexico and Canada.” CNBC, Dec. 5. https://www.cnbc.com/2019/12/05/
pelosi-pushes-to-keep-section-230-out-of-usmca-trade-agreement.html.
Feldman, Ari. 2016. “Why Does WikiLeaks Have a Reputation for Anti-Semitism?”
Forward, Aug. 15. https://forward.com/news/347546/why-does-wikileaks-have-a-repu
tation-for-anti-semitism/.
Feldman, Brian. 2020. “Facebook Has Always Been Right-Wing Media.” Vice, Oct. 29.
https://www.vice.com/en/article/n7vvwq/facebook-has-always-been-right-wing
-media.
Fenwick, Cody. 2019. “Here’s Why Economist Brad Delong Believes Libertarianism Is
Essentially a Form of White Supremacy.” Salon, Jan. 4. https://www.salon.com/2019/
01/04/heres-why-this-economist-believes-libertarianism-is-essentially-a-form-of
-white-supremacy_partner/?f bclid=IwAR1Fc0NjI-P4tCEsgI7i9rVM5UdCyTM8z
AZxOXBhVsR_23UQvE7EXzDSYeE.
Ferdinand, Peter, ed. 2000. The Internet, Democracy, and Democratization. New York:
Routledge.
Fielitz, Maik, and Holger Marcks. 2019. “Digital Fascism: Challenges for the Open
Society in the Time of Social Media.” University of California–Berkeley Center for
Right-Wing Studies Working Paper, Jul. 16. https://escholarship.org/uc/item/87w5c
5gp.
Fight for the Future. n.d. “Projects.” Last modified Nov. 20, 2023. https://www.fight
forthefuture.org/projects/.
Fight for the Future. 2021. “Don’t Kill Crypto.” Campaign website. Last modified Mar.
3, 2024. https://www.fightforthefuture.org/actions/stop-the-senate-from-sneaking
-through-total-surveillance-of-the-crypto-economy/.
Finley, Klint. 2020. “The Wired Guide to Net Neutrality.” Wired, May 5. https://www
.wired.com/story/guide-net-neutrality/.
Fleischer, Dorothy A. 1991. “The MIT Radiation Laboratory: RLE’s Microwave Heri-
tage.” RLE Currents 4:2 (Spring). https://web.archive.org/web/19990225094504/
http:/rleweb.mit.edu/Publications/currents/4-2cov.htm.
Foroohar, Rana. 2018. “Year in a Word: Techlash.” Financial Times, Dec. 16. https://
www.ft.com/content/76578fba-fca1-11e8-ac00-57a2a826423e.
Fouché, Rayvon. 2012. “From Black Inventors to One Laptop per Child: Exporting a
Racial Politics of Technology.” In Lisa Nakamura and Peter A. Chow-White, eds.,
Race after the Internet. New York: Routledge, 61–84.
Ford, Paul. 2016. “Reboot the World.” New Republic, Jun. 22. https://newrepublic.com/
article/133889/reboot-world.
Forsyth, Susan R., Donna H. Odierna, David Krauth, and Lisa A. Bero. 2014. “Con-
flicts of Interest and Critiques of the Use of Systematic Reviews in Policymaking:
An Analysis of Opinion Articles.” Systematic Reviews 3:122. https://doi.org/10.1186/
2046-4053-3-122.
Frank, Sam. 2015. “Come with Us If You Want to Live: Among the Apocalyptic Lib-
ertarians of Silicon Valley.” Harper’s Magazine, Jan. https://harpers.org/archive/2015/
01/come-with-us-if-you-want-to-live/.
Frank, Thomas. 1997. The Conquest of Cool: Business Culture, Counterculture, and the
Rise of Hip Consumerism. Chicago: University of Chicago Press.
Franken, Al. 2010. “The Internet as We Know It Is Still at Risk.” Huffington Post, Dec.
22. https://www.huffpost.com/entry/the-internet-as-we-know-i_b_800159.
Franks, Mary Anne. 2013. “The Lawless Internet? Myths and Misconceptions about
CDA Section 230.” Huffington Post, Dec. 18. https://www.huffpost.com/entry/sec
tion-230-the-lawless-internet_b_4455090.
Franks, Mary Anne. 2019a. The Cult of the Constitution: Our Deadly Devotion to Guns
and Free Speech. Stanford, Calif.: Stanford University Press.
Franks, Mary Anne. 2019b. “The Free Speech Black Hole: Can the Internet Escape the
Gravitational Pull of the First Amendment?” Knight First Amendment Institute at
Columbia University, Aug. 21. https://knightcolumbia.org/content/the-free-speech
-black-hole-can-the-internet-escape-the-gravitational-pull-of-the-first-amendment.
“The Free-Knowledge Fundamentalist.” 2008. The Economist, Jun. 7. https://web.archive
.org/web/20220301190721/https:/www.economist.com/technology-quarterly/2008
/06/07/the-free-knowledge-fundamentalist.
Freeland, Chris. 2020. “Announcing a National Emergency Library to Provide Digi-
tized Books to Students and the Public.” Internet Archive Blogs, Mar. 24. https://blog
.archive.org/2020/03/24/announcing-a-national-emergency-library-to-provide
-digitized-books-to-students-and-the-public/.
Free Press and Free Action Fund. 2012. “Declaration of Internet Freedom.” Jul. 4.
https://web.archive.org/web/20120920041723/http:/www.internetdeclaration.org/.
Frenkel, Sheera, and Cecilia Kang. 2021. An Ugly Truth: Inside Facebook’s Battle for Domination. New York: HarperCollins.
Friedman, David. 1989. The Machinery of Freedom: Guide to a Radical Capitalism. 3rd
ed. New York: Open Court.
Friedman, David. 2008. Future Imperfect: Technology and Freedom in an Uncertain
World. New York: Cambridge University Press.
Galloway, Alexander. 2004. Protocol: How Control Exists after Decentralization. Cam-
bridge, Mass.: MIT Press.
Galperin, Eve. 2014. “Twitter Steps Down from the Free Speech Party.” Electronic
Frontier Foundation, May 21. https://www.eff.org/deeplinks/2014/05/twitter-steps
-down-free-speech-party.
“GDPR Fines and Notices.” 2022. Wikipedia. Last modified Jul. 12. https://en.wikipe
dia.org/w/index.php?title=GDPR_fines_and_notices&oldid=1097679934.
Geigner, Timothy. 2013. “Game of Thrones Director: I’m 100% Opposed to the Piracy
I Just Said Helps My Show Survive.” TechDirt, Feb. 28. https://www.techdirt.com/
2013/02/28/game-thrones-director-im-100-opposed-to-piracy-i-just-said-helps-my
-show-survive/.
Gellis, Cathy. 2018. “The GDPR: Ghastly, Dumb, Paralyzing Regulation It’s Hard to
Celebrate.” TechDirt, May 25. https://www.techdirt.com/2018/05/25/gdpr-ghastly
-dumb-paralyzing-regulation-hard-to-celebrate/.
Gerard, David. 2017. Attack of the 50 Foot Blockchain: Bitcoin, Blockchain, Ethereum,
and Smart Contracts. Self-pub., CreateSpace.
Gerard, David. 2020. Libra Shrugged: How Facebook Tried to Take Over the Money.
Self-pub.
Gerard, David. 2022. “Creationism on the Blockchain (Review of George Gilder, Life
After Google).” b2o Review, Apr. 29. https://www.boundary2.org/2022/04/david-gerard
-creationism-on-the-blockchain-review-of-george-gilder-life-after-google/.
Giddens, Anthony. 1985. The Nation-State and Violence. Cambridge: Polity Press.
Gillespie, Tarleton. 2018. Custodians of the Internet: Platforms, Content Moderation, and
the Hidden Decisions That Shape Social Media. New Haven, Conn.: Yale University
Press.
Gilmore, John. 2013. “John Gilmore, Entrepreneur and Civil Libertarian.” Toad.com.
Last modified Nov. 27. http://www.toad.com/gnu/.
Gitlin, Todd. 1993. The Sixties: Years of Hope, Days of Rage. Rev. ed. New York: Bantam.
Glaser, April. 2018. “The Watchdogs That Didn’t Bark.” Slate, Apr. 19. https://slate.com/
technology/2018/04/why-arent-privacy-groups-fighting-to-regulate-facebook.html.
Glaser, April. 2019. “There Are at Least Five Reasons Why Mark Zuckerberg Would
Have Tucker Carlson over for Dinner.” Slate, Oct. 15. https://slate.com/technology/
2019/10/mark-zuckerberg-tucker-carlson-ben-shapiro-facebook-conservatives.html.
Godin, Benoît, and Dominique Vinck, eds. 2017. Critical Studies of Innovation: Alter-
native Approaches to the Pro-Innovation Bias. Northampton, Mass.: Edward Elgar.
Godwin, Mike. 2007. “Superhuman Imagination: Vernor Vinge on Science Fiction,
the Singularity, and the State.” Reason, May. https://reason.com/2007/05/04/super
human-imagination/.
Gogarty, Kayla. 2020. “Facebook Is Letting the Trump Campaign Publish at Least 529
Ads with False Claims of Voter Fraud.” Media Matters, May 19. https://www.media
matters.org/facebook/facebook-letting-trump-campaign-publish-least-529-ads
-false-claims-voter-fraud.
Goldberg, Carrie. 2019. Nobody’s Victim: Fighting Psychos, Stalkers, Pervs, and Trolls. New
York: Plume.
Goldman, Eric, and Jeff Kosseff, eds. 2020. Zeran v. America Online E-Resource. Santa
Clara University Legal Studies Research Paper. https://dx.doi.org/10.2139/ssrn.3663
839.
Goldstein, Alexis. 2021. “Crypto Doesn’t Have to Enable Tax Cheats.” Bloomberg, Aug.
27. https://www.bloomberg.com/opinion/articles/2021-08-26/crypto-doesn-t-have
-to-enable-tax-cheats.
Goldstein, Evan. 2020. “Higher Ed Has a Silicon Valley Problem.” Chronicle of Higher
Education (Sep. 23). https://www.chronicle.com/article/higher-ed-has-a-silicon-valley
-problem.
Goldstein, Paul. 2003. Copyright’s Highway: From Gutenberg to the Celestial Jukebox.
Stanford, Calif.: Stanford University Press.
Golumbia, David. 2009. The Cultural Logic of Computation. Cambridge, Mass.: Har-
vard University Press.
Golumbia, David. 2013. “Completely Different and Exactly the Same.” Uncomputing,
Mar. 6. Last modified Sep. 17, 2023. http://www.uncomputing.org/?p=22.
Golumbia, David. 2014a. “‘Permissionless Innovation’: Using Technology to Disman-
tle the Republic.” Uncomputing, Jun. 11. Last modified Dec. 21, 2023. https://www
.uncomputing.org/?p=1383.
Golumbia, David. 2014b. “Tor Is Not a ‘Fundamental Law of the Universe.’” Pando
Daily, Dec. 12. https://web.archive.org/web/20141218100032/https:/pando.com/2014/
12/12/tor-is-not-a-fundamental-law-of-the-universe/.
Golumbia, David. 2015. “Tor, Technocracy, Democracy.” Uncomputing, Apr. 23. Last
modified Dec. 2, 2023. http://www.uncomputing.org/?p=1647.
Golumbia, David. 2016a. “Code Is Not Speech.” SSRN, Apr. 13. https://dx.doi.org/
10.2139/ssrn.2764214.
Golumbia, David. 2016b. “Marxism and Open Access in the Humanities: Turning
Academic Labor against Itself.” Workplace: A Journal for Academic Labor 28 (Sep. 16).
https://doi.org/10.14288/workplace.v0i28.186213.
Golumbia, David. 2016c. “‘Neoliberalism’ Has Two Meanings.” Uncomputing, Jul. 22.
Last modified Oct. 5, 2023. http://www.uncomputing.org/?p=1803.
Golumbia, David. 2016d. The Politics of Bitcoin: Software as Right-Wing Extremism.
Minneapolis: University of Minnesota Press.
Golumbia, David. 2017. “The Militarization of Language: Cryptographic Politics and
the War of All against All.” boundary 2 44(4) (Nov.): 95–112. https://doi.org/10.1215/
01903659-4206337.
Golumbia, David. 2020a. “Blockchain: The White Man’s Burden.” Medium, Feb. 21.
https://davidgolumbia.medium.com/blockchain-the-white-mans-burden-e3ef75c9
7830.
Golumbia, David. 2020b. “Cryptocurrency Is Garbage. So Is Blockchain.” Medium,
Jun. 27. https://davidgolumbia.medium.com/cryptocurrency-is-garbage-so-is-block
chain-3e80078e77fe.
Golumbia, David. 2021. “Trump’s Twitter Ban Is a Step toward Ending the Hijacking
of the First Amendment.” Boston Globe, Jan. 10. https://www.bostonglobe.com/2021/
01/10/opinion/stretching-first-amendment/?event=event25.
Good, Chris. 2013. “Julian Assange Backs Ron and Rand Paul.” ABC News, Aug. 17.
https://abcnews.go.com/blogs/politics/2013/08/julian-assange-backs-ron-and
-rand-paul.
Gorbis, Marina. 2013. The Nature of the Future: Dispatches from the Socialstructed
World. New York: Free Press.
Gorodyansky, David. 2017. “This Is the Future If Net Neutrality Is Repealed: The
Creeping, Costly Death of Media Freedom.” TechCrunch, Dec. 9. https://techcrunch
.com/2017/12/09/this-is-the-future-if-net-neutrality-is-repealed-the-creeping-costly
-death-of-media-freedom/.
Graham, Megan, and Salvador Rodriguez. 2020. “Facebook Meeting with Civil Rights
Groups ‘a Disappointment,’ Ad Boycott Organizers Say.” CNBC, Jul. 7. https://
www.cnbc.com/2020/07/07/leaders-of-facebook-ad-boycott-no-commitment-to
-action-from-execs.html.
Gray, Rosie. 2017. “Behind the Internet’s Anti-Democracy Movement.” The Atlantic,
Feb. 10. https://www.theatlantic.com/politics/archive/2017/02/behind-the-internets
-dark-anti-democracy-movement/516243/.
Greelish, David. 2013. “An Interview with Computing Pioneer Alan Kay.” Time, Apr. 2.
https://techland.time.com/2013/04/02/an-interview-with-computing-pioneer-alan
-kay/.
Green, Stephen J. 2017. “The Cryptocurrency Reformation: Comparing Martin Luther
and Satoshi Nakamoto.” Steemit. https://steemit.com/cryptocurrency/@stevg/the
-cryptocurrency-reformation-comparing-martin-luther-and-satoshi-nakamoto.
Greenberg, Andy. (2012) 2013. This Machine Kills Secrets: How WikiLeakers, Cypher-
punks, and Hacktivists Aim to Free the World’s Information. New York: Plume.
Greene, David, Corynne McSherry, Sophia Cope, and Adam Schwartz. 2016. “Rights
at Odds: Europe’s Right to Be Forgotten Clashes with U.S. Law.” Electronic Fron-
tier Foundation, Nov. https://www.eff.org/files/2016/11/29/rtbf-us_law_legal_back
ground.pdf.
Greenwald, Glenn. 2010. “Palin and the Tea-Party ‘Movement’: Nothing New.” Salon,
Feb. 7. https://www.salon.com/2010/02/07/palin_64/.
Greenwald, Glenn. 2011. “The Tea Party and Civil Liberties.” Salon, Feb. 9. https://
www.salon.com/2011/02/09/tea_party_9/.
Greenwood, Daniel J. H. 1999. “First Amendment Imperialism.” Utah Law Review,
657–99. https://dx.doi.org/10.2139/ssrn.794786.
Greenwood, Daniel J. H. 2013. “Do Corporations Have a Constitutional Right to
Bear Arms? and Related Puzzles in Post-National Jurisprudence.” Draft paper. Last
modified Sep. 17, 2023. https://people.hofstra.edu/Daniel_J_Greenwood/pdf/2dA
DRAFT.pdf.
Greer, Evan. 2021a. “Surveillance Capitalism.” Get Better Records, Apr. 8. YouTube
video, 00:03:31. https://youtu.be/NvBHFLFllJ8?.
Greer, Evan. 2021b. “Decentralization is our best bet . . .” X, Jul. 15. https://twitter.com/
evan_greer/status/1415640718299779077.
Griffin, Roger. 1991. The Nature of Fascism. New York: Routledge.
Griffin, Roger. 1995. Fascism. New York: Oxford University Press.
Grofman, Bernard, Alexander H. Trechsel, and Mark Franklin, eds. 2014. The Internet
and Democracy in Global Perspective: Voters, Candidates, Parties, and Social Move-
ments. New York: Springer.
Grove, Lloyd, and Justin Baragona. 2021. “Is Glenn Greenwald the New Master of
Right-Wing Media?” Daily Beast, Jun. 6. https://www.thedailybeast.com/is-glenn
-greenwald-the-new-master-of-right-wing-media.
Guesmi, Haythem. 2021. “The Social Media Myth about the Arab Spring.” Al Jazeera,
Jan. 27. https://www.aljazeera.com/opinions/2021/1/27/the-social-media-myth-about
-the-arab-spring.
Gurstein, Michael. 2013. “Multistakeholderism vs. Democracy: My Adventures in
‘Stakeholderland.’” Gurstein’s Community Informatics, Mar. 20. https://gurstein.word
press.com/2013/03/20/multistakeholderism-vs-democracy-my-adventures-in-stake
holderland/.
Gurstein, Michael. 2014a. “The Multistakeholder Model, Neoliberalism and Global
(Internet) Governance.” Gurstein’s Community Informatics, Mar. 26. https://gurstein
.wordpress.com/2014/03/26/the-multistakeholder-model-neo-liberalism-and-global
-internet-governance/.
Gurstein, Michael. 2014b. “Democracy or Multistakeholderism: Competing Models
of Governance.” Gurstein’s Community Informatics, Oct. 19. https://gurstein.word
press.com/2014/10/19/democracy-or-multi-stakeholderism-competing-models-of
-governance/.
Hagey, Keach, and Jeff Horwitz. 2021. “Facebook’s Internal Chat Boards Show Politics
Often at Center of Decision Making.” Wall Street Journal, Oct. 24. https://www.wsj
.com/articles/facebook-politics-decision-making-documents-11635100195.
Haggart, Blayne, and Natasha Tusikov. 2021. “How ‘Free Speech’ Kills Internet Regu-
lation Debates: Part Two.” Centre for International Governance Innovation, Sep. 10.
https://www.cigionline.org/articles/how-free-speech-kills-internet-regulation-de
bates/.
Haider, Shuja. 2017. “The Darkness at the End of the Tunnel: Artificial Intelligence
and Neoreaction.” Viewpoint Magazine, Mar. 28. https://viewpointmag.com/2017/03/
28/the-darkness-at-the-end-of-the-tunnel-artificial-intelligence-and-neoreaction/.
Hall, Zac. 2014. “Occupy Wall Street Co-founder: Appoint Eric Schmidt CEO of
America.” 9to5Google, Mar. 20. https://9to5google.com/2014/03/20/occupy-wall-street
-co-founder-appoint-eric-schmidt-ceo-of-america/.
Halliday, Josh. 2012. “Twitter’s Tony Wang: ‘We Are the Free Speech Wing of the Free
Speech Party.’” The Guardian, Mar. 22. https://www.theguardian.com/media/2012/
mar/22/twitter-tony-wang-free-speech.
Halpern, Sue. 2018. “Cambridge Analytica, Facebook, and the Revelations of Open
Secrets.” New Yorker, Mar. 21. https://www.newyorker.com/news/news-desk/cam
bridge-analytica-facebook-and-the-revelations-of-open-secrets.
Hanna, Rachael. 2020. “Metadata Collection Violated FISA, Ninth Circuit Rules.”
Lawfare Blog, Sep. 14. https://www.lawfaremedia.org/article/metadata-collection
-violated-fisa-ninth-circuit-rules.
Hanson, Robin. 2006. “Foul Play in Information Markets.” In Bob Hahn and Paul
Tetlock, eds., Information Markets: A New Way of Making Decisions in the Public and
Private Sectors. Washington, D.C.: AEI Press, 126–41. https://mason.gmu.edu/~rhan
son/foulplay.pdf.
Harguindéguy, Jean-Baptiste Paul, Alistair Cole, and Romain Pasquier. 2021. “The
Variety of Decentralization Indexes: A Review of the Literature.” Regional and Fed-
eral Studies 31(2): 185–208. https://doi.org/10.1080/13597566.2019.1566126.
Harry, Sydette. 2014. “Ouroboros Outtakes: The Circle Was Never Unbroken.” Model
View Culture, Dec. 8. https://modelviewculture.com/pieces/ouroboros-outtakes-the
-circle-was-never-unbroken.
Hartzog, Woodrow. 2021. “What Is Privacy? That’s the Wrong Question.” University
of Chicago Law Review 88(7) (Nov.): 1677–88. https://lawreview.uchicago.edu/print
-archive/what-privacy-thats-wrong-question.
Harwell, Drew. 2021. “Rumble, a YouTube Rival Popular with Conservatives, Will Pay
Creators Who ‘Challenge the Status Quo.’” Washington Post, Aug. 12. https://www
.washingtonpost.com/technology/2021/08/12/rumble-video-gabbard-greenwald/.
Hassan, Samer, and Primavera De Filippi. 2017. “The Expansion of Algorithmic Gov-
ernance: From Code Is Law to Law Is Code.” Field Actions Science Reports 17: 88–90.
https://journals.openedition.org/factsreports/4518.
Hatmaker, Taylor. 2020. “Trump Vetoes Major Defense Bill, Citing Section 230.”
TechCrunch, Dec. 23. https://techcrunch.com/2020/12/23/trump-ndaa-veto-section
-230/.
Haworth, Alan. 1994. Anti-Libertarianism: Markets, Philosophy, and Myth. New York:
Routledge.
Hayek, Friedrich A. (1944) 2001. The Road to Serfdom. New York: Routledge.
Ignatius, David. 2021b. “The State Department Gets Serious About the Global Tech-
nology Race.” Washington Post, Oct. 27. https://www.washingtonpost.com/opinions/
2021/10/27/state-department-gets-serious-about-global-technology-race/.
Innis, Harold. 1950. Empire and Communications. Toronto: University of Toronto Press.
Innis, Harold. 1951. The Bias of Communication. Toronto: University of Toronto Press.
Jain, Samir C. 2017. “The Non-inevitable Breadth of the ‘Zeran’ Decision.” In Gold-
man and Kosseff 2020, 55–56.
Jarvis, Jeff. 2005. “About Me and Disclosures.” BuzzMachine, Jul. 4. https://buzzma
chine.com/about/.
Jarvis, Jeff. 2009. What Would Google Do? Reverse-Engineering the Fastest-Growing
Company in the History of the World. New York: HarperCollins.
Jarvis, Jeff. 2011a. “The Article and the Future of Print.” BuzzMachine, Jun. 18. https://
buzzmachine.com/2011/06/18/the-article-and-the-future-of-print/.
Jarvis, Jeff. 2011b. “Gutenberg of Arabia.” BuzzMachine, Feb. 13. https://buzzmachine
.com/2011/02/13/gutenberg-of-arabia/.
Jarvis, Jeff. 2011c. Gutenberg the Geek: History’s First Technology Entrepreneur and Silicon
Valley’s Patron Saint. Seattle: Amazon Digital Services. Kindle.
Jarvis, Jeff. 2011d. Public Parts: How Sharing in the Digital Age Improves the Way We
Work and Live. New York: Simon & Schuster.
Jarvis, Jeff. 2013a. “Technopanic: The Movie.” BuzzMachine, Apr. 12. https://buzzma
chine.com/2013/04/12/technopanic-the-movie/.
Jarvis, Jeff. 2013b. “That’s like investigating cameras . . .” X, Mar. 5. https://twitter.com/
jeffjarvis/status/309123359772381184.
Jarvis, Jeff. 2019. “A Rising Moral Panic.” BuzzMachine, Jan. 18. https://buzzmachine
.com/2019/01/18/a-rising-moral-panic/.
Jeftovic, Mark E. 2017. “This Time Is Different Part 2: What Bitcoin Really Is.” Hacker
Noon, Dec. 12. https://hackernoon.com/this-time-is-different-part-2-what-bitcoin
-really-is-ae58c69b3bf0.
Jenkins, Henry, and David Thorburn, eds. 2003. Democracy and New Media. Cam-
bridge, Mass.: MIT Press.
Jia, Jian, Ginger Zhe Jin, and Liad Wagman. 2018. “The Short-Run Effects of GDPR
on Technology Venture Investment.” National Bureau of Economic Research Work-
ing Paper, Nov. https://www.nber.org/papers/w25248.
Johns, Adrian. 2000. The Nature of the Book: Print and Knowledge in the Making. Chi-
cago: University of Chicago Press.
Johnson, Ashley, and Daniel Castro. 2021. “How Other Countries Have Dealt with
Intermediary Liability.” Information Technology and Innovation Foundation, Feb.
22. https://itif.org/publications/2021/02/22/how-other-countries-have-dealt-inter
mediary-liability/.
Jones, Meg Leta. 2016. Ctrl + Z: The Right to Be Forgotten. New York: NYU Press.
Juskalian, Russ. 2008. “Interview with Clay Shirky, Part I: ‘There’s Always a New Luddism
Whenever There’s Change.’” Columbia Journalism Review, Dec. 19. https://
archives.cjr.org/overload/interview_with_clay_shirky_par.php.
Kafka, Peter. 2014. “It’s Over! Viacom and Google Settle YouTube Lawsuit.” Vox, Mar. 18.
https://www.vox.com/2014/3/18/11624656/its-over-viacom-and-google-settle-you
tube-lawsuit.
Kahn, Jeremy. 2003. “The Man Who Would Have Us Bet on Terrorism—Not to
Mention Discard Democracy and Cryogenically Freeze Our Heads—May Have a
Point (About the Betting, We Mean).” CNN Money, Sep. 15. https://money.cnn.com/
magazines/fortune/fortune_archive/2003/09/15/349149/index.htm.
Karr, Tim. 2021. “Net Neutrality Violations: A History of Abuse.” Free Press, Jul. 9.
https://www.freepress.net/blog/net-neutrality-violations-history-abuse.
Katz, Yarden. 2020. Artificial Whiteness: Politics and Ideology in Artificial Intelligence.
New York: Columbia University Press.
Kaye, David. 2017. Report of the Special Rapporteur on the Promotion and Protection of
the Right to Freedom of Opinion and Expression. United Nations Human Rights
Council. https://digitallibrary.un.org/record/1304394?ln=en.
Keating, Joshua E. 2012. “How WikiLeaks Blew It.” Foreign Policy, Aug. 16. https://
foreignpolicy.com/2012/08/16/how-wikileaks-blew-it/.
Kerr, Orin. 2013. “United States v. Auernheimer, and Why I Am Representing Auern-
heimer Pro Bono on Appeal before the Third Circuit.” Volokh Conspiracy, Mar. 21.
https://volokh.com/2013/03/21/united-states-v-auernheimer-and-why-i-am-repre
senting-auernheimer-pro-bono-on-appeal-before-the-third-circuit/.
Khanna, Derek. 2014. “The Conservative Case for Taking on the Copyright Lobby.”
Business Insider, Apr. 30. https://www.businessinsider.com/time-to-confront-the-copy
right-lobby-2014-4.
Kharif, Olga. 2017. “The Bitcoin Whales: 1,000 People Who Own 40 Percent of
the Market.” Bloomberg Businessweek, Dec. 8. https://www.bloomberg.com/news/
articles/2017-12-08/the-bitcoin-whales-1-000-people-who-own-40-percent-of-the
-market.
King, Andrew A., and Baljir Baatartogtokh. 2015. “How Useful Is the Theory of Dis-
ruptive Innovation?” MIT Sloan Management Review, Sep. 15. https://sloanreview
.mit.edu/article/how-useful-is-the-theory-of-disruptive-innovation/.
Kinsella, Stephan. 2008. Against Intellectual Property. Auburn, Ala.: Ludwig von Mises
Institute.
Kinsella, Stephan. 2010. “Hoppe on Covenant Communities and Advocates of Alter-
native Lifestyles.” StephanKinsella.com, May 26. https://www.stephankinsella.com/
2010/05/hoppe-on-covenant-communities/.
Klein, Ezra. 2021. “The Way the Senate Melted Down over Crypto Is Very Reveal-
ing.” New York Times, Aug. 12. https://www.nytimes.com/2021/08/12/opinion/senate
-cryptocurrency.html.
Kobek, Jarret. 2016. I Hate the Internet. San Francisco: We Heard You Like Books.
Koenig, Bryan. 2013. “Facebook’s Zuckerberg Gives Himself a Label.” CNN, Sep. 18.
Kosseff, Jeff. 2017. “The Judge Who Shaped the Internet.” In Goldman and Kosseff
2020, 57–60.
Kosseff, Jeff. 2019. The Twenty-Six Words That Created the Internet. Ithaca, N.Y.: Cor-
nell University Press.
Kraus, Rachel. 2020. “Once Again, There Is No ‘Anti-Conservative’ Bias on Social
Media.” Mashable, Jul. 28. https://mashable.com/article/anti-conservative-bias-face
book.
Kravets, David. 2014. “Appeals Court Reverses Hacker/Troll ‘Weev’ Conviction and
Sentence.” Ars Technica, Apr. 11. https://arstechnica.com/tech-policy/2014/04/appeals
-court-reverses-hackertroll-weev-conviction-and-sentence/.
Krotoszynski, Ronald J., Jr. 2016. Privacy Revisited: A Global Perspective on the Right to
Be Left Alone. New York: Oxford University Press.
Kurtzleben, Danielle. 2017. “While Trump Touts Stock Market, Many Americans Are
Left Out of the Conversation.” National Public Radio, Mar. 1. https://www.npr.org/
2017/03/01/517975766/while-trump-touts-stock-market-many-americans-left-out
-of-the-conversation.
LaFraniere, Sharon. 2020. “Roger Stone Was in Contact with Julian Assange in 2017,
Documents Show.” New York Times, Apr. 29. https://www.nytimes.com/2020/04/29/
us/politics/roger-stone-julian-assange.html.
Land, Nick. 1996. “Shoot-Out at the Cyber Corral.” New Scientist, Aug. 17: 41.
Land, Nick. 2013. “The Dark Enlightenment.” Self-pub., n.p. https://web.archive.org/
web/20190928101326/http:/www.thedarkenlightenment.com/the-dark-enlighten
ment-by-nick-land/.
Land, Nick. 2014a. “Exterminator.” Outside In, Aug. 8. https://web.archive.org/web/20
141028044642/http:/www.xenosystems.net/exterminator/.
Land, Nick. 2014b. “Hyper-Racism.” Outside In, Sep. 29. https://web.archive.org/
web/20190401033904/https:/www.xenosystems.net/hyper-racism/.
Land, Nick. 2018. Crypto-Current: Bitcoin and Philosophy. Self-pub., Oct. 31. https://
etscrivner.github.io/cryptocurrent/.
Landa, Ishay. 2010. The Apprentice’s Sorcerer: Liberal Tradition and Fascism. Boston: Brill.
Landa, Ishay. 2019. Fascism and the Masses: The Revolt against the Last Humans, 1848–
1945. New York: Routledge.
La Roche, Julia. 2014. “Infamous Hacker ‘Weev’ Went on CNBC to Explain the Fas-
cinating Hedge Fund He’s about to Launch.” Business Insider, Apr. 28. https://www
.businessinsider.com/andrew-weev-auernheimer-hedge-fund-2014-4.
Larson, Max. Forthcoming. “Computer Center Sabotage: Luddism, Black Studies,
and the Diversion of Technological Progress.” boundary 2.
Leahy, Patrick. 2011. “Senate Judiciary Committee Unanimously Approves Bipartisan
Bill to Crack Down on Rogue Websites.” Press release, May 26. Last modified Dec.
2022. https://www.leahy.senate.gov/press/senate-judiciary-committee-unanimously
-approves-bipartisan-bill-to-crack-down-on-rogue-websites.
Leahy, Patrick. 2012. “Senate Should Focus on Stopping Online Theft That Undercuts
Economic Recovery.” Press release, Jan. 23. Last modified Dec. 26, 2022. https://
www.leahy.senate.gov/press/leahy-senate-should-focus-on-stopping-online-theft
-that-undercuts-economic-recovery.
Lee, Seung. 2016. “Is Facebook—and Zuckerberg—Liberal or Conservative? It’s Com-
plicated, Data Shows.” Newsweek, May 11. https://www.newsweek.com/facebook
-and-zuckerberg-liberal-or-conservative-its-complicated-data-shows-458823.
Leeper, Sarah Elizabeth. 2000. “The Game of Radiopoly: An Antitrust Perspective of
Consolidation in the Radio Industry.” Federal Communications Law Journal 52(2):
473–96. https://www.repository.law.indiana.edu/fclj/vol52/iss2/9.
Lemieux, Scott. 2021. “Glenn Greenwald Excited by Tucker Carlson’s National Social-
ism.” Lawyers, Guns, and Money, Mar. 4. https://www.lawyersgunsmoneyblog.com/
2021/03/glenn-greenwald-excited-by-tucker-carlsons-national-socialism.
Lepore, Jill. 2010. The Whites of Their Eyes: The Tea Party’s Revolution and the Battle over
American History. Princeton, N.J.: Princeton University Press.
Lepore, Jill. 2014. “The Disruption Machine.” New Yorker, Jun. 23. https://www.new
yorker.com/magazine/2014/06/23/the-disruption-machine.
Lepore, Jill. 2020. If Then: How the Simulmatics Corporation Invented the Future. New
York: Liveright.
Leslie, Stuart W. 1993. The Cold War and American Science: The Military–Industrial–
Academic Complex at MIT and Stanford. New York: Columbia University Press.
“LessWrong.” 2021. RationalWiki. Last modified Jun. 22. https://rationalwiki.org/w/
index.php?title=LessWrong&oldid=2336968.
Lessig, Lawrence. 1999. Code: And Other Laws of Cyberspace. New York: Basic Books.
Lessig, Lawrence. 2004. Free Culture: How Big Media Uses Technology and the Law to
Lock Down Culture and Control Creativity. New York: Penguin.
Lessig, Lawrence. 2006. Code: And Other Laws of Cyberspace, Version 2.0. New York:
Basic Books.
Lessig, Lawrence. 2008. Remix: Making Art and Commerce Thrive in the Hybrid Econ-
omy. New York: Penguin.
Levin, Mark R. 2009. Liberty and Tyranny: A Conservative Manifesto. New York: Simon
& Schuster.
Levine, Alexandra S. 2022. “Suicide Hotline Shares Data with For-Profit Spinoff, Rais-
ing Ethical Questions.” Politico, Jan. 28. https://www.politico.com/news/2022/01/28/
suicide-hotline-silicon-valley-privacy-debates-00002617.
Levine, Yasha. 2018a. “All EFF’d Up.” The Baffler, Jul. https://thebaffler.com/salvos/
all-effd-up-levine.
Levine, Yasha. 2018b. Surveillance Valley: The Secret Military History of the Internet. New
York: PublicAffairs.
Levinson, Paul. 1999. Digital McLuhan: A Guide to the Information Millennium. New
York: Routledge.
Levy, Karen E. C. 2017. “Book-Smart, Not Street-Smart: Blockchain-Based Smart
Contracts and the Social Workings of Law.” Engaging Science, Technology, and Society
3: 1–15. https://doi.org/10.17351/ests2017.107.
Levy, Pema. 2013. “The Woman Who Knows the NSA’s Secrets.” Newsweek, Oct. 4.
https://www.newsweek.com/2013/10/04/woman-who-knows-nsas-secrets-238050
.html.
Levy, Steven. 1994. “Battle of the Clipper Chip.” New York Times, Jun. 12. https://www
.nytimes.com/1994/06/12/magazine/battle-of-the-clipper-chip.html.
Levy, Steven. 2002. Crypto: How the Code Rebels Beat the Government—Saving Privacy
in the Digital Age. New York: Penguin.
Levy, Steven. 2010. Hackers: Heroes of the Computer Revolution. 25th anniversary ed.
Sebastopol, Calif.: O’Reilly Media.
Lewis, Janet I. 2014. “When Decentralization Leads to Recentralization: Subnational
State Transformation in Uganda.” Regional and Federal Studies 24(5): 571–88. https://
doi.org/10.1080/13597566.2014.971771.
Liu, Alan. 2004. The Laws of Cool: Knowledge Work and the Culture of Information.
Chicago: University of Chicago Press.
Loeb, Zachary. 2018a. “From Megatechnic Bribe to Megatechnic Blackmail: Mum-
ford’s ‘Megamachine’ after the Digital Turn.” b2o: An Online Journal 3(3) (Aug.).
https://www.boundary2.org/2018/07/loeb/.
Loeb, Zachary [Librarian Shipwreck]. 2018b. “Why the Luddites Matter.” Librarian
Shipwreck, Jan. 18. https://librarianshipwreck.wordpress.com/2018/01/18/why-the-lud
dites-matter/.
Loeb, Zachary. 2021a. “The Magnificent Bribe.” Real Life, Oct. 25. https://reallifemag
.com/the-magnificent-bribe/.
Loeb, Zachary. 2021b. “Specters of Ludd (Review of Gavin Mueller, Breaking Things at
Work).” b2o Review, Sep. 28. https://www.boundary2.org/2021/09/zachary-loeb-spec
ters-of-ludd-review-of-gavin-mueller-breaking-things-at-work/.
Loomis, Erik. 2018. “Today on Tucker’s White Power Hour. . . . Mr. Glenn Greenwald,
Again!” Lawyers, Guns, and Money, Jun. 14. https://www.lawyersgunsmoneyblog.com/
2018/06/today-tuckers-white-power-hour-mr-glenn-greenwald.
Lovink, Geert. 2006. “Trial and Error in Internet Governance: ICANN, the WSIS,
and the Making of a Global Civil Society.” In Jodi Dean, Jon W. Anderson, and
Geert Lovink, eds., Reformatting Politics: Information Technology and Global Civil
Society. New York: Routledge, 205–19.
Ludlow, Peter, ed. 1996. High Noon on the Electronic Frontier: Conceptual Issues in
Cyberspace. Cambridge, Mass.: MIT Press.
Ludlow, Peter, ed. 2001. Crypto Anarchy, Cyberstates, and Pirate Utopias. Cambridge,
Mass.: MIT Press.
Ludlow, Peter. 2013. “Hacktivists as Gadflies.” New York Times, Apr. 13. https://archive
.nytimes.com/opinionator.blogs.nytimes.com/2013/04/13/hacktivists-as-gadflies/.
Lyons, Matthew. 2017. Ctrl-Alt-Delete: The Origins and Ideology of the Alternative Right.
Somerville, Mass.: Political Research Associates. https://politicalresearch.org/2017/
01/20/ctrl-alt-delete-report-on-the-alternative-right.
Lytvynenko, Jane, Craig Silverman, and Alex Boutilier. 2019. “White Nationalist
Groups Banned by Facebook Are Still on the Platform.” BuzzFeed News, May 30.
https://www.buzzfeednews.com/article/janelytvynenko/facebook-white-national
ist-ban-evaded.
Mac, Ryan. 2017. “Y Combinator Cuts Ties with Peter Thiel after Ending Part-Time
Partner Program.” BuzzFeed News, Nov. 17. https://www.buzzfeednews.com/article/
ryanmac/y-combinator-cuts-ties-with-peter-thiel-ends-part-time.
Mac, Ryan, and Craig Silverman. 2021. “‘Mark Changed the Rules’: How Facebook
Went Easy on Alex Jones and Other Right-Wing Figures.” BuzzFeed News, Feb. 22.
https://www.buzzfeednews.com/article/ryanmac/mark-zuckerberg-joel-kaplan
-facebook-alex-jones.
“Machine Intelligence Research Institute.” 2021. Wikipedia. Last modified Jun. 28.
https://en.wikipedia.org/w/index.php?title=Machine_Intelligence_Research_
Institute&oldid=1030850724.
Mack, Zachary. 2019. “Net Neutrality Was Repealed a Year Ago—What’s Happened
Since?” The Verge, Jul. 9. https://www.theverge.com/2019/7/9/20687903/net-neutral
ity-was-repealed-a-year-ago-whats-happened-since.
MacLean, Nancy. 2017. Democracy in Chains: The Deep History of the Radical Right’s
Stealth Plan for America. New York: Random House.
Madigan, Kevin. 2016. “Librarians’ Contradictory Letter Reveals an Alarming Igno-
rance of the Copyright System.” George Mason University Center for Intellectual
Property x Innovation, Dec. 19. https://cip2.gmu.edu/2016/12/19/librarians-contra
dictory-letter-reveals-an-alarming-ignorance-of-the-copyright-system/.
Madore, P. H. 2017. “How Votem Intends to Democratize Democracy through Block-
chain Technology.” CryptoCoins News, Dec. 2. https://web.archive.org/web/201702
12204949/https:/www.cryptocoinsnews.com/votem-blockchain-democracy-vot
ing/.
Madrigal, Alexis C. 2017. “The Dumb Fact of Google Money.” The Atlantic, Aug. 30.
https://www.theatlantic.com/technology/archive/2017/08/the-dumb-fact-of
-google-money/538458/.
Madrigal, Alexis C., and Adrienne LaFrance. 2014. “Net Neutrality: A Guide to (and
History of ) a Contested Idea.” The Atlantic, Apr. 25. https://www.theatlantic.com/
technology/archive/2014/04/the-best-writing-on-net-neutrality/361237/.
Malcolm, Jeremy. 2008. Multi-Stakeholder Governance and the Internet Governance
Forum. Perth, Aust.: Terminus Press.
Manancourt, Vincent. 2022. “What’s Wrong with the GDPR?” Politico, Jun. 15. https://
www.politico.eu/article/wojciech-wiewiorowski-gdpr-brussels-eu-data-protection
-regulation-privacy/.
“Manila Principles on Intermediary Liability.” 2015. Electronic Frontier Foundation,
Mar. 24. https://www.eff.org/files/2015/10/31/manila_principles_1.0.pdf.
Mann, Michael. 2004. Fascists. New York: Cambridge University Press.
Mantelero, Alessandro. 2013. “The EU Proposal for a General Data Protection Regula-
tion and the Roots of the ‘Right to Be Forgotten.’” Computer Law and Security Report
29(3): 229–35. https://dx.doi.org/10.1016/j.clsr.2013.03.010.
Marantz, Andrew. 2019a. Antisocial: Online Extremists, Techno-Utopians, and the Hijack-
ing of the American Conversation. New York: Viking.
Marantz, Andrew. 2019b. “The Dark Side of Techno-Utopianism.” New Yorker, Sep. 23.
https://www.newyorker.com/magazine/2019/09/30/the-dark-side-of-techno-utopi
anism.
Markoff, John. 2005. What the Dormouse Said: How the Sixties Counterculture Shaped
the Personal Computing Industry. New York: Penguin.
Markoff, John. 2011. “MIT Media Lab Names New Director.” New York Times, Apr. 25.
https://www.nytimes.com/2011/04/26/science/26lab.html.
Marlow, Chad. 2021. “Why Net Neutrality Can’t Wait.” ACLU News and Commentary,
Jul. 9. https://www.aclu.org/news/free-speech/why-net-neutrality-cant-wait.
“Martin Luther and Antisemitism.” 2022. Wikipedia. Last modified Jul. 8. https://en
.wikipedia.org/w/index.php?title=Martin_Luther_and_antisemitism&oldid=1097
006280.
Masnick, Mike. 2006. “The Importance of Zero in Destroying the Scarcity Myth of
Economics.” TechDirt, Nov. 8. https://www.techdirt.com/2006/11/08/the-importance
-of-zero-in-destroying-the-scarcity-myth-of-economics/.
Masnick, Mike. 2007a. “Saying You Can’t Compete with Free Is Saying You Can’t
Compete Period.” TechDirt, Feb. 15. https://www.techdirt.com/2007/02/15/saying
-you-cant-compete-with-free-is-saying-you-cant-compete-period/.
Masnick, Mike. 2007b. “The Grand Unified Theory on the Economics of Free.” Tech-
Dirt, May 3. https://www.techdirt.com/2007/05/03/grand-unified-theory-econom
ics-free/.
Masnick, Mike. 2011a. “A Fifteenth Century Technopanic about the Horrors of the
Printing Press.” TechDirt, Feb. 25. https://www.techdirt.com/2011/02/25/fifteenth
-century-technopanic-about-horrors-printing-press/.
Masnick, Mike. 2011b. “E-PARASITE Bill: ‘The End of the Internet as We Know It.’”
TechDirt, Oct. 27. https://www.techdirt.com/2011/10/27/e-parasites-bill-end-internet
-as-we-know-it/.
Masnick, Mike. 2011c. “Yes, SOPA Breaks the Internet: By Breaking the Belief in Trust
and Sharing That Is the Internet.” TechDirt, Nov. 13. https://www.techdirt.com/2011/
11/16/yes-sopa-breaks-internet-breaking-belief-trust-sharing-that-is-internet/.
Masnick, Mike. 2013a. “Copyright Lobby: The Public Has ‘No Place in Policy Dis-
cussions.’” TechDirt, Mar. 25. https://www.techdirt.com/2013/03/25/copyright-lobby
-public-has-no-place-policy-discussions/.
Masnick, Mike. 2013b. “HBO: The Key to Combating Piracy Is to Make Game of
Thrones More Available . . . Except Here.” TechDirt, Mar. 7. https://www.techdirt
.com/2013/03/07/hbo-key-to-combating-piracy-is-to-make-game-thrones-more-avail
able-except-here/.
Masnick, Mike. 2013c. “Mike Masnick’s Favorite TechDirt Posts of the Week.” Tech-
Dirt, May 4. https://www.techdirt.com/2013/05/04/mike-masnicks-favorite-techdirt
-posts-week-2/.
Masnick, Mike. 2013d. “NSA Gave Employees Ridiculous ‘Talking Points’ to Spread
among Friends and Family over the Holidays.” TechDirt, Dec. 3. https://www.tech
dirt.com/2013/12/03/nsa-gave-employees-ridiculous-talking-points-to-spread-among
-friends-family-over-holidays/.
Masnick, Mike. 2013e. “The War on Computing: What Happens When Authorities
Don’t Understand Technology.” TechDirt, Jan. 23. https://www.techdirt.com/2013/01/
23/war-computing-what-happens-when-authorities-dont-understand-technology/.
Masnick, Mike. 2019. “How a Right to Be Forgotten Stifles a Free Press and Free Ex-
pression.” TechDirt, Oct. 11. https://www.techdirt.com/2019/10/11/how-right-to-be
-forgotten-stifles-free-press-free-expression/.
Masnick, Mike. 2020a. “20 Years Ago Today: The Most Important Law on the Inter-
net Was Signed, Almost by Accident.” TechDirt, Feb. 8. https://www.techdirt.com/
2016/02/08/20-years-ago-today-most-important-law-internet-was-signed-almost
-accident/.
Masnick, Mike. 2020b. “Hello! You’ve Been Referred Here Because You’re Wrong
about Section 230 of the Communications Decency Act.” TechDirt, Jun. 23. https://
www.techdirt.com/2020/06/23/hello-youve-been-referred-here-because-youre-wrong
-about-section-230-communications-decency-act/.
Masnick, Mike. 2022. “EU Officials Finally Coming to Terms with the Fact That the
GDPR Failed; but Now They Want to Make It Worse.” TechDirt, Jun. 28. https://
www.techdirt.com/2022/06/28/eu-officials-finally-coming-to-terms-with-the-fact
-that-the-gdpr-failed-but-now-they-want-to-make-it-worse/.
Mason, Paul. 2012. Why It’s Kicking Off Everywhere: The New Global Revolutions. New
York: Verso.
Mathew, Ashwin J. 2016. “The Myth of the Decentralized Internet.” Internet Policy
Review 5(3) (Sep.). https://doi.org/10.14763/2016.3.425.
Mathews, Daniel. 2013. “Why I Resigned from the WikiLeaks Party.” The Guardian,
Aug. 21. https://www.theguardian.com/commentisfree/2013/aug/22/wikileaks-julian
-assange.
May, Timothy C. 1992. “The Crypto Anarchist Manifesto.” Activism.net, Nov. 22. http://
www.activism.net/cypherpunk/crypto-anarchy.html. Reprinted in Ludlow 2001,
61–63. [Talk written in 1988 and read at hacker and cypherpunk conferences in
1988, 1989, 1990, and 1992, and distributed to Cypherpunk mailing list.]
May, Timothy C. 1994. “The Cyphernomicon: Cypherpunks FAQ and More, Version
0.666.” https://web.archive.org/web/20110922120111/http:/www.cypherpunks.to/
faq/cyphernomicron/cyphernomicon.txt. [Post to Cypherpunk mailing list.]
May, Timothy C. 2005. “Commie Rag Praises MLK.” Jan. 18. https://scruz.general.nar
kive.com/29QgNUds/commie-rag-praises-mlk. [Post to scruz.general Usenet mail-
ing list.]
Mayer, Jane. 2017. Dark Money: The Hidden History of the Billionaires behind the Rise
of the Radical Right. New York: Anchor.
Mayer-Schönberger, Viktor. 2008. “Demystifying Lessig.” Wisconsin Law Review 4:
713–46.
Mazmanian, Adam, and Lauren C. Williams. 2021. “The US Senate Joined the House
of Representatives in Overturning President Trump’s Veto of the Annual Defense
Policy Bill in a Rare New Year’s Day Vote.” Business of Federal Technology, Jan. 1.
https://www.defenseone.com/defense-systems/2021/01/defense-bill-prevails-over
-trump-veto/195098/.
McCarthy, Ryan. 2020. “‘Outright Lies’: Voting Misinformation Flourishes on Face-
book.” ProPublica, Jul. 16. https://www.propublica.org/article/outright-lies-voting
-misinformation-flourishes-on-facebook.
McCormack, John. 2011. “Ron Paul Praises Occupy Wall Street.” Washington Exam-
iner, Dec. 29. https://www.washingtonexaminer.com/?p=1254933.
McCullagh, Declan. 2000. “Crypto-Convict Won’t Recant.” Wired, Apr. 14. https://
nettime.org/Lists-Archives/nettime-l-0004/msg00109.html.
McCullagh, Declan. 2001. “Cypherpunk’s Free Speech Defense.” Wired, Apr. 9. https://
www.wired.com/2001/04/cypherpunks-free-speech-defense/.
McCullagh, Declan. 2018. “Markets in Assassination? Everybody Panic!” Reason, Jul.
27. https://reason.com/2018/07/27/markets-in-assassination-everybody-panic/.
McCulloch, Craig. 2019. “Christchurch Call: Tech Companies Overhaul Organization
to Stop Terrorists Online.” Radio New Zealand, Sep. 24. https://www.rnz.co.nz/news/
political/399468/christchurch-call-tech-companies-overhaul-organisation-to-stop
-terrorists-online.
McElroy, Wendy. 2017. “An Introduction to ‘The Satoshi Revolution.’” FEE Stories,
Oct. 12. https://fee.org/articles/an-introduction-to-the-satoshi-revolution/.
McGonigal, Jane. 2011. Reality Is Broken: Why Games Make Us Better and How They
Can Change the World. New York: Penguin.
McKinnon, John D., and Ryan Knutson. 2017. “Want to See a World Without Net
Neutrality? Look at These Old Cellphone Plans.” Wall Street Journal, Dec. 11.
https://www.wsj.com/articles/mobile-wireless-market-might-be-our-post-net-neu
trality-world-1512988200.
McKinnon, John D., and Brody Mullins. 2019. “Nancy Pelosi Pushes to Remove Legal
Protections for Online Content in Trade Pact.” Wall Street Journal, Dec. 4. https://
www.wsj.com/articles/nancy-pelosi-pushes-to-remove-legal-protections-for-online
-content-in-trade-pact-11575503157?.
McLuhan, Marshall. 1962. The Gutenberg Galaxy: The Making of Typographic Man.
Toronto: University of Toronto Press.
McLuhan, Marshall. (1964) 1994. Understanding Media: The Extensions of Man. Cam-
bridge, Mass.: MIT Press.
McNeil, Joanne. 2022. “Crisis Text Line and the Silicon Valleyfication of Everything.”
Motherboard, Feb. 10. https://www.vice.com/en/article/wxdpym/crisis-text-line-and
-the-silicon-valleyfication-of-everything.
Miller, Cassie. 2022. “SPLC Poll Finds Substantial Support for ‘Great Replacement’
Theory and Other Hard-Right Ideas.” Southern Poverty Law Center, Jun. 1. https://
www.splcenter.org/news/2022/06/01/poll-finds-support-great-replacement-hard
-right-ideas.
Mirowski, Philip. 2002. Machine Dreams: Economics Becomes a Cyborg Science. New
York: Cambridge University Press.
Mirowski, Philip. 2009. “Defining Neoliberalism.” In Mirowski and Plehwe 2009,
417–55.
Mirowski, Philip. 2013. Never Let a Serious Crisis Go to Waste: How Neoliberalism Sur-
vived the Financial Meltdown. New York: Verso.
Mirowski, Philip. 2017. “What Is Science Critique? Lessig, Latour.” In David Tyfield,
Rebecca Lave, Samuel Randalls, and Charles Thorpe, eds., The Routledge Handbook
of the Political Economy of Science. New York: Routledge, 429–50.
Mirowski, Philip. 2018. “The Future(s) of Open Science.” Social Studies of Science 48(2):
171–203. https://doi.org/10.1177/0306312718772086.
Mirowski, Philip. 2019. “Hell Is Truth Seen Too Late.” boundary 2 46(1) (Feb.): 1–53.
https://doi.org/10.1215/01903659-7271327.
Mirowski, Philip, and Dieter Plehwe, eds. 2009. The Road from Mont Pèlerin: The
Making of the Neoliberal Thought Collective. Cambridge, Mass.: Harvard University
Press.
Mirowski, Philip, Jeremy Walker, and Antoinette Abboud. 2013. “Beyond Denial.” Over-
land Literary Journal 210 (Autumn). https://overland.org.au/previous-issues/issue
-210/feature-philip-mirowski-jeremy-walker-antoinette-abboud/.
Moffitt, Mike. 2018. “How a Racist Genius Created Silicon Valley by Being a Terrible
Boss.” SFGate, Aug. 21. https://www.sfgate.com/tech/article/Silicon-Valley-Shockley
-racist-semiconductor-lab-13164228.php.
Moglen, Eben. 2011. “Liberation by Software.” The Guardian, Feb. 24. https://www
.theguardian.com/commentisfree/cifamerica/2011/feb/24/internet-freedomofinfor
mation.
Moglen, Eben. 2013. “Snowden and the Future.” Four-part lecture series delivered at
Columbia Law School, Oct.–Dec. http://snowdenandthefuture.info/index.html.
Moon, David, Patrick Ruffini, and David Segal, eds. 2013. Hacking Politics: How Geeks,
Progressives, the Tea Party, Gamers, Anarchists and Suits Teamed Up to Defeat SOPA
and Save the Internet. New York: OR Books.
Morar, David, and Bruna Martins dos Santos. 2020. “Online Content Moderation
Lessons from Outside the US.” Brookings Institution, Jun. 17. https://www.brook
ings.edu/articles/online-content-moderation-lessons-from-outside-the-u-s/.
Morozov, Evgeny. 2011a. “Don’t Be Evil.” New Republic, Jul. 13. https://newrepublic
.com/article/91916/google-schmidt-obama-gates-technocrats.
Morozov, Evgeny. 2011b. The Net Delusion: The Dark Side of Internet Freedom. New
York: PublicAffairs.
Morozov, Evgeny. 2013a. “Ghosts in the Machine.” Der Feuilleton (blog), Oct. 10. Last
modified Sep. 21, 2023. https://blogs.sueddeutsche.de/feuilletonist/2013/10/10/
ghosts-in-the-machines/.
Morozov, Evgeny. 2013b. “How to Stop a Sharknado.” Die Zeit, Oct. 2. https://www
.zeit.de/digital/internet/2013-10/morozov-sharknado-chomsky-foucault.
Morozov, Evgeny. 2013c. “Open and Closed.” New York Times, Mar. 16. https://www
.nytimes.com/2013/03/17/opinion/sunday/morozov-open-and-closed.html.
Morozov, Evgeny. 2013d. To Save Everything, Click Here: The Folly of Technological
Solutionism. New York: PublicAffairs.
Morrison, Aimée Hope. 2009. “An Impossible Future: John Perry Barlow’s ‘Declara-
tion of the Independence of Cyberspace.’” New Media and Society 11(1–2): 53–72.
MSI Integrity. 2020. Not Fit-for-Purpose: The Grand Experiment of Multi-stakeholder
Initiatives in Corporate Accountability, Human Rights and Global Governance. Berke-
ley, Calif.: Institute for Multi-Stakeholder Initiative Integrity. https://www.msi-in
tegrity.org/wp-content/uploads/2020/07/MSI_Not_Fit_For_Purpose_FORWEB
SITE.FINAL_.pdf.
Mudde, Cas, and Cristóbal Rovira Kaltwasser. 2017. Populism: A Very Short Introduc-
tion. New York: Oxford University Press.
Mueller, Gavin. 2015. “Trickster Makes This Web: The Ambiguous Politics of Anony-
mous.” b2o Review, Feb. 11. https://www.boundary2.org/2015/02/trickster-makes-this
-web-the-ambiguous-politics-of-anonymous/.
Mueller, Gavin. 2019. Media Piracy in the Cultural Economy: Intellectual Property and
Labor under Neoliberal Restructuring. New York: Routledge.
Mueller, Gavin. 2021. Breaking Things at Work: The Luddites Are Right about Why You
Hate Your Job. New York: Verso.
Mueller, Milton. 2010. Networks and States: The Global Politics of Internet Governance.
Cambridge, Mass.: MIT Press.
Mumford, Lewis. (1934) 2010. Technics and Civilization. Chicago: University of Chi-
cago Press.
Mumford, Lewis. 1971. Technics and Human Development: The Myth of the Machine,
Vol. I. New York: Harcourt Brace Jovanovich.
Mumford, Lewis. 1974. Pentagon of Power: The Myth of the Machine, Vol. II. New York:
Harcourt Brace Jovanovich.
Murphy, Laura W., et al. 2020. “Facebook Civil Rights Audit.” Facebook, Jul. 8. https://
about.fb.com/wp-content/uploads/2020/07/Civil-Rights-Audit-Final-Report.pdf.
Murse, Tom. 2020. “Is Mark Zuckerberg a Democrat or a Republican?” ThoughtCo,
Jul. 4. https://www.thoughtco.com/members-of-congress-supported-by-facebook-33
67615.
Nakamoto, Satoshi. 2009. “Bitcoin: A Peer-to-Peer Electronic Cash System.” Bitcoin
.org, May 24. https://bitcoin.org/bitcoin.pdf.
Negroponte, Nicholas. 1996. Being Digital. New York: Vintage.
Neiwert, David. 2009. The Eliminationists: How Hate Talk Radicalized the American
Right. New York: Routledge.
Neiwert, David. 2018. Alt-America: The Rise of the Radical Right in the Age of Trump.
New York: Verso.
“Net Neutrality.” 2022. Wikipedia. Last modified Aug. 8. https://en.wikipedia.org/w/
index.php?title=Net_neutrality&oldid=1103227281.
Nevett, Joshua. 2021. “Nevada Smart City: A Millionaire’s Plan to Create a Local Gov-
ernment.” BBC News, Mar. 18. https://www.bbc.com/news/world-us-canada-56409
924.
Newfield, Christopher. 2013. “Corporate Open Source: Intellectual Property and the
Struggle over Value.” Radical Philosophy 181 (Sep./Oct.): 6–11.
Newhoff, David. 2012. “Anti-Piracy Battle Reveals Dysfunctional Thinking.” The Hill,
Jan. 18. https://thehill.com/blogs/congress-blog/technology/103104-anti-piracy-bat
tle-reveals-dysfunctional-thinking/.
Newhoff, David. 2020. “Internet Archive Uses Pandemic to Justify Looting.” Illusion
of More, Mar. 29. https://illusionofmore.com/internet-archive-uses-pandemic-to-jus
tify-looting/.
Newhoff, David. 2021. “Why Is the Press So Bumfuzzled about Copyright Issues?”
Illusion of More, Dec. 27. https://illusionofmore.com/why-is-the-press-so-bumfuzzled
-about-copyright-issues/.
Newman, Russell A. 2019. The Paradoxes of Network Neutralities. Cambridge, Mass.:
MIT Press.
Newton, Casey. 2020. “Mark in the Middle.” The Verge, Sep. 23. https://www.theverge
.com/c/21444203/facebook-leaked-audio-zuckerberg-trump-pandemic-blm.
New Zealand Ministry of Foreign Affairs and Trade. 2019. “Christchurch Call to Elim-
inate Terrorist and Violent Extremist Content Online.” https://www.christchurch
call.com/.
Noys, Benjamin. 2014. Malign Velocities: Accelerationism and Capitalism. Winchester,
UK: Zero Books.
O’Brien, Danny. 2018. “The Year of the GDPR: 2018’s Most Famous Privacy Regula-
tion in Review.” Electronic Frontier Foundation, Dec. 28. https://www.eff.org/es/
deeplinks/2018/12/year-gdpr-2018s-most-famous-privacy-regulation-review.
O’Brien, Luke. 2017. “The Making of an American Nazi.” The Atlantic, Dec. https://
www.theatlantic.com/magazine/archive/2017/12/the-making-of-an-american-nazi/
544119/.
O’Hagan, Andrew. 2016. “The Satoshi Affair.” London Review of Books 38(13) (Jun. 30).
https://www.lrb.co.uk/the-paper/v38/n13/andrew-o-hagan/the-satoshi-affair.
O’Neill, Patrick Howell. 2017. “Tor’s Ex-Director: ‘The Criminal Use of Tor Has
Become Overwhelming.’” CyberScoop, May 22. https://cyberscoop.com/tor-dark
-web-andrew-lewman-securedrop/.
Oberhaus, Daniel. 2017. “Nearly All of Wikipedia Is Written by Just 1 Percent of Its
Editors.” Vice, Nov. 7. https://www.vice.com/en/article/7x47bb/wikipedia-editors
-elite-diversity-foundation.
Ochigame, Rodrigo. 2019. “The Invention of ‘Ethical AI’: How Big Tech Manipulates
Academia to Avoid Regulation.” The Intercept, Dec. 20. https://theintercept.com/
2019/12/20/mit-ethical-ai-artificial-intelligence/.
Ogbunu, C. Brandon. 2020. “Don’t Be Fooled by Covid-19 Carpetbaggers.” Wired,
Apr. 5. https://www.wired.com/story/opinion-dont-be-fooled-by-covid-19-carpetbag
gers/.
Ogundeji, Olusegun. 2017. “ETH Proponents: Ethereum Will Democratize, Build
Trust, and Make Governments Transparent.” Cointelegraph, Jan. 20. Last modified
May 28, 2023. https://cointelegraph.com/news/eth-proponents-ethereum-will-dem
ocratize-build-trust-and-make-governments-transparent.
Ong, Walter J. 1982. Orality and Literacy: The Technologizing of the Word. New York:
Routledge.
Oreskes, Naomi, and Erik M. Conway. 2010. Merchants of Doubt: How a Handful of
Scientists Obscured the Truth on Issues from Tobacco Smoke to Global Warming. New
York: Bloomsbury.
Oreskovic, Alexei. 2019. “Martin Luther King’s Daughter Slams Mark Zuckerberg for
Invoking the Civil Rights Movement and Said ‘Disinformation Campaigns’ Led to
MLK’s Killing.” Business Insider, Oct. 17. https://www.businessinsider.com/bernice
-king-daughter-mlk-criticizes-mark-zuckerberg-2019-10.
Packer, George. 2013. “Change the World.” New Yorker, May 20. https://www.new
yorker.com/magazine/2013/05/27/change-the-world.
Packer, George. 2014. “The Errors of Edward Snowden and Glenn Greenwald.” Prospect,
May 22. https://www.prospectmagazine.co.uk/essays/46323/the-errors-of-edward
-snowden-and-glenn-greenwald.
Parker, Ian. 2018. “The Bane of Their Resistance.” New Yorker, Aug. 27. https://www
.newyorker.com/magazine/2018/09/03/glenn-greenwald-the-bane-of-their-resis
tance.
Parks, Miles. 2021. “Outrage as a Business Model: How Ben Shapiro Is Using Face-
book to Build an Empire.” All Things Considered, Jul. 19. https://www.npr.org/2021/
07/19/1013793067/outrage-as-a-business-model-how-ben-shapiro-is-using-facebook
-to-build-an-empire.
Pasquale, Frank. 2014. “The Dark Market for Personal Data.” New York Times, Oct. 16.
https://www.nytimes.com/2014/10/17/opinion/the-dark-market-for-personal-data
.html.
Pasquale, Frank. 2015. “Reforming the Law of Reputation.” Loyola University Chicago
Law Journal 47(2): 515–30. https://lawecommons.luc.edu/luclj/vol47/iss2/6.
Pasquale, Frank. 2016. The Black Box Society: The Secret Algorithms That Control Money
and Information. Cambridge, Mass.: Harvard University Press.
Pasquale, Frank. 2020. New Laws of Robotics: Defending Human Expertise in the Age of
AI. Cambridge, Mass.: Harvard University Press.
Poulsen, Kevin. 2018. “Defector: WikiLeaks ‘Will Lie to Your Face.’” Daily Beast, May
9. https://www.thedailybeast.com/defector-wikileaks-will-lie-to-your-face.
Poulsen, Kevin. 2019. “Mueller Report: Assange Smeared Seth Rich to Cover for Rus-
sians.” Daily Beast, Apr. 18. https://www.thedailybeast.com/mueller-report-julian
-assange-smeared-seth-rich-to-cover-for-russians.
Powers, Shawn M., and Michael Jablonski. 2015. The Real Cyber War: The Political
Economy of Internet Freedom. Urbana: University of Illinois Press.
Powles, Julia. 2014. “Jimmy Wales Is Wrong: We Do Have a Personal Right to Be For-
gotten.” The Guardian, Aug. 8. https://www.theguardian.com/technology/2014/aug/
08/jimmy-wales-right-to-be-forgotten-wikipedia.
Powles, Julia. 2015. “Results May Vary: Border Disputes on the Frontlines of the ‘Right
to Be Forgotten.’” Slate, Feb. 25. https://www.slate.com/articles/technology/future_
tense/2015/02/google_and_the_right_to_be_forgotten_should_delisting_be_global
_or_local.html.
Powles, Julia, and Enrique Chaparro. 2015. “How Google Determined Our Right to
Be Forgotten.” The Guardian, Feb. 18. https://www.theguardian.com/technology/
2015/feb/18/the-right-be-forgotten-google-search.
Prasanna. 2022. “What Is Monero (XMR) Crypto? Is Edward Snowden behind This
Project Too?” CryptoTicker, Jul. 28. https://cryptoticker.io/en/what-is-monero-xmr
-edward-snowden/.
“PROTECT IP Act.” 2020. Wikipedia. Last modified Sep. 11. https://en.wikipedia.org/
w/index.php?title=PROTECT_IP_Act&oldid=977889274.
“Protests against SOPA and PIPA.” 2020. Wikipedia. Last modified Sep. 29. https://
en.wikipedia.org/w/index.php?title=Protests_against_SOPA_and_PIPA&oldid=9
81004629.
Prud’homme, Rémy. 1995. “The Dangers of Decentralization.” World Bank Research
Observer 10(2) (Aug.): 201–20.
Purnell, Newley, and Jeff Horwitz. 2021. “Facebook Services Are Used to Spread Reli-
gious Hatred in India, Internal Documents Show.” Wall Street Journal, Oct. 23.
https://www.wsj.com/articles/facebook-services-are-used-to-spread-religious-hatred
-in-india-internal-documents-show-11635016354?mod=article_inline.
Purtill, James. 2019. “Fuelled by a Toxic, Alt-Right Echo Chamber, Christchurch
Shooter’s Views Were Celebrated Online.” TripleJ Hack, Mar. 15. https://www.abc
.net.au/triplej/programs/hack/christchurch-shooters-views-were-celebrated-online/
10907056.
“Quick Facts.” n.d. United States Census Bureau. Last modified Jan. 23, 2024. https://
www.census.gov/quickfacts/fact/table/US/HSD410219.
Rao, Leena. 2011. “Wael Ghonim: If You Want to Liberate a Government, Give Them
the Internet.” TechCrunch, Feb. 11. https://techcrunch.com/2011/02/11/wael-ghonim
-if-you-want-to-liberate-a-government-give-them-the-internet/.
Rasmussen, Scott, and Douglas Schoen. 2010. Mad as Hell: How the Tea Party Move-
ment Is Fundamentally Remaking Our Two-Party System. New York: HarperCollins.
Ratcliffe, Jonathan. 2020. “Rebooting the Leviathan: NRx and the Millennium.” b2o:
An Online Journal 4(2) (Apr. 2). https://www.boundary2.org/2020/04/jonathan-rat
cliffe-rebooting-the-leviathan-nrx-and-the-millennium/.
Raymond, Eric S. 1999. The Cathedral and the Bazaar: Musings on Linux and Open
Source by an Accidental Revolutionary. Sebastopol, Calif.: O’Reilly Media.
Raymond, Eric S. 2012. “Why I Think RMS Is a Fanatic, and Why That Matters.”
Armed and Dangerous, Jun. 11. http://esr.ibiblio.org/?p=4386.
Read, Max. 2011. “What Happened to Encyclopedia Dramatica?” Gawker, Apr. 16.
https://web.archive.org/web/20110514145854/http:/gawker.com/5792738/what-hap
pened-to-encyclopedia-dramatica.
Redman, Jamie. 2020. “$2 Trillion for Surveillance Capitalism: US Government
Promises $1,200 to Every American.” Bitcoin.com, Mar. 25. https://web.archive.org/
web/20200329001553/https:/news.bitcoin.com/2-trillion-surveillance-capitalism
-government-promises-1200/.
Reitman, Rainey. 2011. “Bitcoin: A Step toward Censorship-Resistant Digital Cur-
rency.” Electronic Frontier Foundation, Jan. 20. https://www.eff.org/deeplinks/2011/
01/bitcoin-step-toward-censorship-resistant.
Reitman, Rainey. 2021. “The Cryptocurrency Surveillance Provision Buried in the
Infrastructure Bill Is a Disaster for Digital Privacy.” Electronic Frontier Foundation,
Aug. 2. https://www.eff.org/deeplinks/2021/08/cryptocurrency-surveillance-provision
-buried-infrastructure-bill-disaster-digital.
Renda, Matthew. 2021. “Ninth Circuit Deals 3rd Blow to NSA Spying Case.” Court-
house News, Aug. 17. https://www.courthousenews.com/nsa-spying-case-is-dismissed
-again/.
Reynolds, Glenn. 2006. An Army of Davids: How Markets and Technology Empower
Ordinary People to Beat Big Media, Big Government, and Other Goliaths. New York:
Thomas Nelson.
Reynolds, Simon. 2005. “Simon’s Interview with CCRU (1998).” K-punk, Jan. 20.
http://k-punk.abstractdynamics.org/archives/004807.html.
Riccardi, Nicholas. 2021. “Zuckerberg’s Cash Fuels GOP Suspicion and New Election
Rules.” AP News, Aug. 8. https://apnews.com/article/elections-facebook-mark-zuck
erberg-d034c4c1f5a9fa3fb02aa9898493c708.
Rider, Karina, and David Murakami Wood. 2019. “Condemned to Connection? Net-
work Communitarianism in Mark Zuckerberg’s ‘Facebook Manifesto.’” New Media
and Society 21(3) (Mar.): 639–54. https://doi.org/10.1177/1461444818804772.
Riley, Duncan. 2015. “Edward Snowden: Bitcoin Is Flawed but the Basic Principles
Combined with Tokenization Are Interesting.” SiliconANGLE, Aug. 16. https://sili
conangle.com/2015/08/16/edward-snowden-bitcoin-is-flawed-but-the-basic-princi
ples-combined-with-tokenization-are-interesting/.
Rivlin, Gary. 2002. “The Madness of King George.” Wired, Jul. 1. https://www.wired
.com/2002/07/gilder-6/.
Roberts, Russ. 2009. “Jimmy Wales on Wikipedia.” EconTalk, Mar. 9. https://www
.econtalk.org/wales-on-wikipedia/.
Roberts, Sarah. 2019. Behind the Screen: Content Moderation in the Shadows of Social
Media. New Haven, Conn.: Yale University Press.
Robinson, Nathan J. 2021. “How to End Up Serving the Right.” Current Affairs, Jun.
17. https://www.currentaffairs.org/2021/06/how-to-end-up-serving-the-right.
“Roko’s Basilisk.” 2021. RationalWiki. Last modified May 10. https://rationalwiki.org/
w/index.php?title=Roko%27s_basilisk&oldid=2323690.
Romano, Aja. 2018. “A New Law Intended to Curb Sex Trafficking Threatens the
Future of the Internet as We Know It.” Vox, Jul. 2. https://www.vox.com/culture/
2018/4/13/17172762/fosta-sesta-backpage-230-internet-freedom.
Romm, Tony. 2019. “Zuckerberg: Standing for Voice and Free Expression.” Washing-
ton Post, Oct. 17. https://www.washingtonpost.com/technology/2019/10/17/zucker
berg-standing-voice-free-expression/.
Rosenbach, Marcel, and Holger Stark. 2010. “‘The Only Option Left for Me Is an
Orderly Departure’: Interview with ‘Daniel Schmitt.’” Der Spiegel, Sep. 27. https://
www.spiegel.de/international/germany/wikileaks-spokesman-quits-the-only-option
-left-for-me-is-an-orderly-departure-a-719619.html.
Ross, Alexander Reid. 2017. Against the Fascist Creep. Chico, Calif.: AK Press.
Ross, Janell. 2019. “Civil Rights Leaders Criticize Zuckerberg’s Free Speech Address.”
NBC News, Oct. 17. https://www.nbcnews.com/news/nbcblk/civil-rights-leaders-re
buke-zuckerberg-s-free-speech-address-n1068461.
Roszak, Theodore. (1969) 1995. The Making of a Counter Culture: Reflections on the
Technocratic Society and Its Youthful Opposition. 2nd ed. Berkeley: University of
California Press.
Roszak, Theodore. (1986a) 1994. The Cult of Information: A Neo-Luddite Treatise on
High-Tech, Artificial Intelligence, and the True Art of Thinking. 2nd ed. Berkeley:
University of California Press.
Roszak, Theodore. 1986b. From Satori to Silicon Valley: San Francisco and the American
Counterculture. San Francisco: Don’t Call It Frisco Press.
Rothbard, Murray. 2000. Egalitarianism as a Revolt against Nature, and Other Essays.
Auburn, Ala.: Ludwig von Mises Institute.
Rozen, Jacob. 2021. “Code Isn’t Law—Law Is Law.” CoinGeek, Mar. 24. https://coin
geek.com/code-isnt-law-law-is-law/.
Rozsa, Matthew. 2017. “Is Julian Assange a Misogynist, or Just Seething with Rage
against Hillary Clinton? We Wonder.” Salon, Apr. 16. https://www.salon.com/2017/
04/16/is-julian-assange-a-misogynist-or-just-seething-with-rage-against-hillary-clin
ton-we-wonder/.
Ruane, Kathleen Anne. 2014. Freedom of Speech and Press: Exceptions to the First Amend-
ment. Washington, D.C.: Congressional Research Service. https://sgp.fas.org/crs/
misc/95-815.pdf.
Sacasas, L. M. 2013a. The Borg Complex (Tumblr blog). Last modified Sep. 11. https://
borgcomplex.tumblr.com/.
Sacasas, L. M. 2013b. “Borg Complex: A Primer.” Frailest Thing, Mar. 1. https://the
frailestthing.com/2013/03/01/borg-complex-a-primer/.
Sandifer, Elizabeth. 2017. Neoreaction A Basilisk: Essays on and around the Alt Right.
N.p.: Eruditorum Press.
Sankin, Aaron. 2021. “What Does Facebook Mean When It Says It Supports ‘Internet
Regulations’?” The Markup, Sep. 16. https://themarkup.org/the-breakdown/2021/
09/16/what-does-facebook-mean-when-it-says-it-supports-internet-regulations.
Sauerberg, Lars Ole. 2009. “The Encyclopedia and the Gutenberg Parenthesis.” Paper
presented at “Stone and Papyrus: Storage and Transmission,” Media in Transition
6, Cambridge, Mass., Apr. 24–26. https://web.mit.edu/comm-forum/legacy/mit6/
papers/sauerberg.pdf.
“Save the Internet: Join Us.” 2007. SavetheInternet.com Coalition. Last modified Oct.
12. https://web.archive.org/web/20071012025915/http:/www.savetheinternet.com/=
coalition.
“Save the Internet: Members.” 2007. SavetheInternet.com Coalition. Last modified
Oct. 12. https://web.archive.org/web/20071011164621/http:/savetheinternet.com/=
members.
Schmidt, Eric, and Jared Cohen. 2013. The New Digital Age: Reshaping the Future of
People, Nations, and Business. New York: Knopf.
Schneider, Nathan. 2019. “Decentralization: An Incomplete Ambition.” Journal of
Cultural Economy 12(4): 265–85. https://doi.org/10.1080/17530350.2019.1589553.
Schneiderman, Eric T. 2017. “A.G. Schneiderman: I Will Sue to Stop Illegal Rollback
of Net Neutrality.” Press release, New York Office of the Attorney General, Dec. 14.
https://ag.ny.gov/press-release/2017/ag-schneiderman-i-will-sue-stop-illegal-roll
back-net-neutrality.
Schradie, Jen. 2019. The Revolution That Wasn’t: How Digital Activism Favors Conser-
vatives. Cambridge, Mass.: Harvard University Press.
Schrager, Nick. 2017. “WikiLeaks Founder Julian Assange Is an Egomaniacal, Sex-
ist Creep in Risk.” Daily Beast, May 6. https://www.thedailybeast.com/wikileaks
-founder-julian-assange-is-an-egomaniacal-sexist-creep-in-risk.
Schriever, Leigh Anne. 2018. “Uber and Lyft Lobby Their Way to Deregulation and
Preemption.” Regulatory Review, Jun. 28. https://www.theregreview.org/2018/06/28/
schriever-uber-lyft-lobby-deregulation-preemption/.
Schrodt, Paul. 2016. “Edward Snowden Just Made an Impassioned Argument for Why
Privacy Is the Most Important Right.” Business Insider, Sep. 15. https://www.business
insider.com/edward-snowden-privacy-argument-2016-9.
Schwarz, Mattathias. 2008. “The Trolls among Us.” New York Times, Aug. 3. https://
www.nytimes.com/2008/08/03/magazine/03trolls-t.html.
Scott, Allen. 2018. “Vitalik Buterin: I Quite Regret Adopting the Term ‘Smart Con-
tracts’ for Ethereum.” Bitcoinist, Oct. 14. https://bitcoinist.com/vitalik-buterin-ethe
reum-regret-smart-contracts/.
Seetharaman, Deepa, and Emily Glazer. 2020. “How Mark Zuckerberg Learned Poli-
tics.” Wall Street Journal, Oct. 16. https://www.wsj.com/articles/how-mark-zucker
berg-learned-politics-11602853200.
Segal, David. 2013. “A Moment for Aaron.” In Moon, Ruffini, and Segal 2013, vii–xiv.
Selinger, Evan, and Woodrow Hartzog. 2019. “What Happens When Employers Can
Read Your Facial Expressions?” New York Times, Oct. 17. https://www.nytimes.com/
2019/10/17/opinion/facial-recognition-ban.html.
Selwyn, Neil. 2013. Education in a Digital World: Global Perspectives on Technology and
Education. New York: Routledge.
Shapiro, Gary. 2011. “The Copyright Lobby Comeuppance.” The Hill, Dec. 11. https://
thehill.com/blogs/congress-blog/technology/100024-the-copyright-lobby-come
uppance/.
Shaw, Aaron. 2012. “Centralised and Decentralised Gatekeeping in an Open Online
Collective.” Politics and Society 40: 349–88. https://doi.org/10.1177/0032329212449009.
Shaw, Aaron, and Benjamin Mako Hill. 2014. “Laboratories of Oligarchy? How the
Iron Law Extends to Peer Production.” Journal of Communication 64(2) (Apr.):
215–38. https://doi.org/10.1111/jcom.12082.
Shaw, Tamsin. 2018. “Edward Snowden Reconsidered.” New York Review of Books, Sep.
13. https://www.nybooks.com/online/2018/09/13/edward-snowden-reconsidered/.
Sherman, Cary. 2012. “What Wikipedia Won’t Tell You.” New York Times, Feb. 7.
https://www.nytimes.com/2012/02/08/opinion/what-wikipedia-wont-tell-you.html.
Shillingsburg, Peter L. 2006. From Gutenberg to Google: Electronic Representations of
Literary Texts. New York: Cambridge University Press.
Shirky, Clay. 2008a. Here Comes Everybody: The Power of Organizing without Organiza-
tions. New York: Penguin.
Shirky, Clay. 2008b. “Why Abundance Is Good: A Reply to Nick Carr.” Encyclopedia
Britannica Blog, Jul. 17. https://web.archive.org/web/20150227155605/http:/blogs.bri
tannica.com/2008/07/why-abundance-is-good-a-reply-to-nick-carr/.
Silverman, Jacob. 2021. “What if the idyllic, before-the-fall internet . . .” X, May 29.
https://twitter.com/SilvermanJacob/status/1398836476109000713. [Tweet since
deleted.]
Skocpol, Theda, and Vanessa Williamson. 2012. The Tea Party and the Remaking of
Republican Conservatism. New York: Oxford University Press.
Slobodian, Quinn. 2019. “Anti-’68ers and the Racist-Libertarian Alliance: How a
Schism among Austrian School Neoliberals Helped Spawn the Alt Right.” Cultural
Politics 15(3) (Nov. 1): 372–86. https://doi.org/10.1215/17432197-7725521.
Smith, Harrison, and Roger Burrows. 2021. “Software, Sovereignty, and the Post-
Neoliberal Politics of Exit.” Theory, Culture, and Society 38(6): 143–66. https://doi
.org/10.1177/0263276421999439.
“Snowden Revelations.” 2014. Lawfare, Jan. 22. https://www.lawfaremedia.org/article/
catalog-snowden-revelations.
Snyder, Timothy. 2017. On Tyranny: Twenty Lessons from the Twentieth Century. New
York: Crown.
Soave, Robby. 2021. Tech Panic: Why We Shouldn’t Fear Facebook and the Future. New
York: Simon & Schuster.
Soldatov, Andrei, and Irina Borogan. 2015. The Red Web: The Kremlin’s War on the
Internet. New York: PublicAffairs.
Solon, Olivia. 2020. “Child Sexual Abuse Images and Online Exploitation Surge Dur-
ing Pandemic.” NBC News, Apr. 23. https://www.nbcnews.com/tech/tech-news/child
-sexual-abuse-images-online-exploitation-surge-during-pandemic-n1190506.
Southern Poverty Law Center. n.d. “Andrew ‘Weev’ Auernheimer.” SPLC Extremist
Files. https://www.splcenter.org/fighting-hate/extremist-files/individual/andrew-
%E2%80%9Cweev%E2%80%9D-auernheimer.
Southern Poverty Law Center. 2015. “Third Positionism on the Web.” SPLC Intelli-
gence Report. https://www.splcenter.org/fighting-hate/intelligence-report/2015/third
-position-web.
Srinivasan, Balaji S. 2013. “Balaji Srinivasan at Startup School 2013.” YouTube, Oct. 25.
https://www.youtube.com/watch?v=cOubCHLXT6A.
Srinivasan, Balaji S. 2020a. “Guy who has built nothing . . .” X, Apr. 20. https://twitter
.com/balajis/status/1252276198983385090.
Szóka, Berin. 2020. “Bill Barr Declares War on the Internet as We Know It.” Morning
Consult, Feb. 19. https://morningconsult.com/opinions/bill-barr-declares-war-on-the
-internet-as-we-know-it/.
Szóka, Berin, and Ashkhen Kazaryan. 2020. “Section 230: An Introduction for Anti-
trust and Consumer Protection Practitioners.” Global Antitrust Institute, Nov. 11.
https://dx.doi.org/10.2139/ssrn.3733746.
Tabarrok, Alexander, and Tyler Cowen. 1992. “The Public Choice Theory of John C.
Calhoun.” Journal of Institutional and Theoretical Economics / Zeitschrift für die
gesamte Staatswissenschaft 148(4) (Dec.): 655–74. https://www.jstor.org/stable/40
71557.
Taleb, Nassim Nicholas. 2022. “A Clash of Two Systems.” Medium, Apr. 19. https://
medium.com/incerto/a-clash-of-two-systems-47009e9715e2.
Tapscott, Don, and Alex Tapscott. 2018. Blockchain Revolution: How the Technology be-
hind Bitcoin and Other Cryptocurrencies Is Changing the World. New York: Portfolio.
Tapscott, Don, and Anthony D. Williams. 2013. Radical Openness: Four Unexpected
Principles for Success. New York: TED Conferences.
Tech Transparency Project. 2017a. “Google Academics Inc.” Jul. 11. https://www.tech
transparencyproject.org/articles/google-academics-inc.
Tech Transparency Project. 2017b. “Google Funds Dozens of Groups Fighting Sex
Trafficking Bill.” Sep. 27. https://www.techtransparencyproject.org/articles/google
-funds-dozens-groups-fighting-sex-trafficking-bill.
Tech Transparency Project. 2020. “White Supremacist Groups Are Thriving on Face-
book.” May 21. https://www.techtransparencyproject.org/articles/white-suprema
cist-groups-are-thriving-on-facebook.
Tech Transparency Project. 2022. “Funding the Fight against Antitrust: How Face-
book’s Antiregulatory Attack Dog Spends Its Millions.” May 17. https://www.tech
transparencyproject.org/articles/funding-fight-against-antitrust-how-facebooks
-antiregulatory-attack-dog-spends-its-millions.
Tenney, Claudia. 2021. “New Information Confirms Zuckerberg-Connected Group
Funneled Majority of Election Payments to Democrat-Leaning Counties.” Press
release, Dec. 20. https://tenney.house.gov/media/press-releases/new-information-con
firms-zuckerberg-connected-group-funneled-majority-election.
Thiel, Peter. 2009. “The Education of a Libertarian.” Cato Unbound, Apr. 13. https://
www.cato-unbound.org/2009/04/13/peter-thiel/education-libertarian/.
Thierer, Adam. 2014. “Embracing a Culture of Permissionless Innovation.” Cato
Online Forum, Nov. 17. https://www.cato.org/cato-online-forum/embracing-culture
-permissionless-innovation.
Thierer, Adam. 2016. Permissionless Innovation: The Continuing Case for Comprehensive
Technological Freedom. Arlington, Va.: Mercatus Center.
Thierer, Adam. 2018. “GDPR Compliance: The Price of Privacy Protections.” Technol-
ogy Liberation Front, Jul. 9. https://techliberation.com/2018/07/09/gdpr-compliance
-the-price-of-privacy-protections/.
Thierer, Adam. 2019. “The Great Facial Recognition Technopanic of 2019.” The
Bridge, May 17. https://www.mercatus.org/economic-insights/expert-commentary/
great-facial-recognition-technopanic-2019.
Thierer, Adam, and Berin Szóka. 2009. “Cyber-Libertarianism: The Case for Real
Internet Freedom.” Technology Liberation Front, Aug. 12. https://techliberation
.com/2009/08/12/cyber-libertarianism-the-case-for-real-internet-freedom/.
Thompson, Alex. 2020. “Why the Right Wing Has a Massive Advantage on Face-
book.” Politico, Sep. 26. https://www.politico.com/news/2020/09/26/facebook-con
servatives-2020-421146.
Tkacz, Nathaniel. 2012. “From Open Source to Open Government: A Critique of
Open Politics.” Ephemera 12(4): 386–405.
Tkacz, Nathaniel. 2015. Wikipedia and the Politics of Openness. Chicago: University of
Chicago Press.
Topinka, Robert. 2019. “‘Back to a Past That Was Futuristic’: The Alt-Right and the
Uncanny Form of Racism.” b2o: An Online Journal 4(2) (Oct. 14). https://www
.boundary2.org/2019/10/robert-topinka-back-to-a-past-that-was-futuristic-the-alt
-right-and-the-uncanny-form-of-racism/.
“Tor Project: Overview.” 2019. Tor Project. https://2019.www.torproject.org/about/
overview.html.en.
Turkewitz, Neil. 2018. “Freedom House Report on Internet Freedom: How Can You
Rank What You Don’t Understand?” Medium, Nov. 6. https://medium.com/@ntur
kewitz_56674/freedom-house-report-on-internet-freedom-how-can-you-rank-what
-you-dont-understand-74495f1d5e6f.
Turner, Fred. 2006a. From Counterculture to Cyberculture: Stewart Brand, the Whole
Earth Network, and the Rise of Digital Utopianism. Chicago: University of Chicago
Press.
Turner, Fred. 2006b. “How Digital Technology Found Utopian Ideology: Lessons
from the First Hackers Conference.” In David Silver and Adrienne Massanari, eds.,
Critical Cyberculture Studies: Current Terrains, Future Directions. New York: NYU
Press, 345–61.
Turner, Fred. 2018. “Trump on Twitter: How a Medium Designed for Democracy
Became an Authoritarian’s Mouthpiece.” In Boczkowski and Papacharissi 2018,
143–50.
Turner, Fred. 2019. “Machine Politics: The Rise of the Internet and a New Age of
Authoritarianism.” Harper’s Magazine, Jan. https://harpers.org/archive/2019/01/
machine-politics-facebook-political-polarization/.
United Nations. 2022. “Social Media Poses ‘Existential Threat’ to Traditional, Trust-
worthy News: UNESCO.” UN News, Mar. 10. https://news.un.org/en/story/2022/
03/1113702.
UnKoch My Campus. n.d. “Austrian Economics: A Gateway to Extremism.” Part of
Advancing White Supremacy through Academic Strategy. https://static1.squarespace
.com/static/5400da69e4b0cb1fd47c9077/t/636934fb4d078058a8292ee3/1667839
235779/Academic+White+Supremacy+Report.pdf.
UK Ofcom. 2010. “Ofcom’s Approach to Net Neutrality.” Nov. 24. https://www.ofcom
.org.uk/__data/assets/pdf_file/0011/50510/statement.pdf.
U.S. House Committee on Energy and Commerce. 2019. “Fostering a Healthier Inter-
net to Protect Consumers.” Hearing of the Subcommittee on Consumer Protection
Warner, Michael. 1990. The Letters of the Republic: Publication and the Public Sphere in
Eighteenth-Century America. Cambridge, Mass.: Harvard University Press.
Warner, Michael. 2002. Publics and Counterpublics. New York: Zone.
Watson, Libby. 2017. “Group That Takes Money from Tech Industry Complains That
Tech Coverage Is Too Negative.” Gizmodo, Feb. 23. https://gizmodo.com/tech-think
-tank-whines-that-journalists-are-too-mean-1792673883.
Watters, Audrey. 2014. “From ‘Open’ to Justice.” Hack Education, Nov. 16. https://
hackeducation.com/2014/11/16/from-open-to-justice.
Weber, Max. 2004. The Vocation Lectures: “Science as a Vocation”; “Politics as a Voca-
tion.” Edited by David Owen and Tracy B. Strong. Translated by Rodney Living-
stone. Indianapolis: Hackett.
Weber, Steven. 2004. The Success of Open Source. Cambridge, Mass.: Harvard Univer-
sity Press.
Weber, Tripp. 2014. “How the Internet and Advertising Technology Destroyed News-
papers.” Leader’s Edge, Jan. 27. https://jhucle.wordpress.com/2014/01/27/how-the
-internet-and-advertising-technology-destroyed-newspapers/.
The Week Staff. 2016. “Peter Thiel’s 6 Favorite Books That Predict the Future.” The
Week, May 2. https://theweek.com/articles/443683/peter-thiels-6-favorite-books-that
-predict-future.
Weyl, E. Glen. 2022. “Sovereign Nonsense: A Review of The Sovereign Individual by
James Dale Davidson and Lord William Rees-Mogg.” RadicalxChange, Jan. 18.
https://www.radicalxchange.org/media/blog/sovereign-nonsense/.
Wheeler, Marcy. 2021a. “Insurance File: Glenn Greenwald’s Anger Is of More Use to
Vladimir Putin Than Edward Snowden’s Freedom.” Emptywheel, May 21. https://
www.emptywheel.net/2021/05/21/insurance-file-glenn-greenwald-is-of-more-use
-to-vladimir-putin-than-edward-snowden/.
Wheeler, Marcy. 2021b. “Liar’s Poker: The Complexity of Julian Assange’s Extradi-
tion.” Emptywheel, Dec. 10. https://www.emptywheel.net/2021/12/10/liars-poker-the
-complexity-of-julian-assanges-extradition/.
Wheeler, Marcy. 2021c. “WikiLeaks and Edward Snowden Champion Sociopathic
Liars and Sloppy Thinking.” Emptywheel, Jun. 27. https://www.emptywheel.net/20
21/06/27/wikileaks-and-edward-snowden-champion-sociopathic-liars-and-sloppy
-thinking/.
White, Nathan. 2017. “The Internet as We Know It Is at Risk.” AccessNow, Jul. 12.
https://www.accessnow.org/internet-know-risk/.
Whitehead, Laurence. 2002. Democratization: Theory and Experience. New York: Oxford
University Press.
Wilentz, Sean. 2014. “Would You Feel Differently about Snowden, Greenwald, and
Assange If You Knew What They Really Thought?” New Republic, Jan. 19. https://
newrepublic.com/article/116253/edward-snowden-glenn-greenwald-julian-assange
-what-they-believe.
Wille, Matt. 2022. “Facebook Might Stop Removing So Much COVID-19 Misin
formation.” Input, Jul. 26. https://www.inverse.com/input/tech/facebook-covid19-mis
information-oversight-board.
Williams, Sam. 2002. Free as in Freedom: Richard Stallman’s Crusade for Free Software.
Sebastopol, Calif.: O’Reilly Media.
Williamson, Kevin D. 2021. “Mark Zuckerberg’s Facebook Fight Is Really about Silenc-
ing Right-Wing Voices.” New York Post, Oct. 30. https://nypost.com/2021/10/30/
facebooks-fight-is-really-about-silencing-right-wing-voices/.
Wilson, Cody. 2016. Come and Take It: The Gun Printer’s Guide to Thinking Free. New
York: Gallery Books.
Winn, Joss. 2012. “Open Education: From the Freedom of Things to the Freedom of
People.” In Michael Neary, Howard Stevenson, and Les Bell, eds., Towards Teaching
in Public: Reshaping the Modern University. London: Continuum, 133–47.
Winner, Langdon. 1986. The Whale and the Reactor: A Search for Limits in an Age of
High Technology. Chicago: University of Chicago Press.
Winner, Langdon. 1997. “Cyberlibertarian Myths and the Prospects for Community.”
ACM SIGCAS Computers and Society 27(3) (Sep.): 14–19.
Wolfe, Liz. 2021. “Elon Musk: Government Is ‘The Biggest Corporation, with a
Monopoly on Violence, Where You Have No Recourse.’” Reason, Dec. 8. https://
reason.com/2021/12/08/elon-musk-government-is-the-biggest-corporation-with-a-
monopoly-on-violence-where-you-have-no-recourse/.
Wong, Julia Carrie. 2021. “Revealed: The Facebook Loophole That Lets World Leaders
Deceive and Harass Their Citizens.” The Guardian, Apr. 12. https://www.theguard
ian.com/technology/2021/apr/12/facebook-loophole-state-backed-manipulation.
Wozniak, Steve, and Michael Copps. 2017. “Ending Net Neutrality Will End the
Internet as We Know It.” USA Today, Sep. 29. https://www.usatoday.com/story/
opinion/2017/09/29/ending-net-neutrality-will-end-internet-we-know-steve-woz
niak-michael-copps-column/704861001/.
Wu, Tim. 2002. “A Proposal for Network Neutrality.” University of Virginia, Jun.
http://www.timwu.org/OriginalNNProposal.pdf.
Wu, Tim. 2003. “Network Neutrality, Broadband Discrimination.” Journal of Telecom-
munications and High Technology Law 2: 141–79. https://scholarship.law.columbia
.edu/faculty_scholarship/1281/.
Wu, Tim. 2019. “Beyond First Amendment Lochnerism.” Knight First Amendment
Institute at Columbia University, Aug. 21. https://knightcolumbia.org/content/
beyond-first-amendment-lochnerism-a-political-process-approach.
Wyett, Todd A. 1991. “State Lotteries: Regressive Taxes in Disguise.” Tax Lawyer 44(3)
(Spring): 867–83. https://www.jstor.org/stable/20771362.
Wylie, Christopher. 2019. Mindf*ck: Inside Cambridge Analytica’s Plot to Break the
World. New York: Random House.
Yates, Andy. 2020. “Liberal Mark Zuckerberg’s Facebook Platform Silences Trump and
Republican Candidates.” Washington Times, Sep. 16. https://www.washingtontimes
.com/news/2020/sep/16/liberal-mark-zuckerbergs-facebook-platform-silence/.
York, Jillian C. 2021. Silicon Values: The Future of Free Speech under Surveillance Capi-
talism. New York: Verso.
Zara, Christopher. 2017. “The Most Important Law in Tech Has a Problem.” Wired, Jan.
3. https://www.wired.com/2017/01/the-most-important-law-in-tech-has-a-problem/.
Zuboff, Shoshana. 2019. The Age of Surveillance Capitalism: The Fight for a Human
Future at the New Frontier of Power. New York: PublicAffairs.
Zuboff, Shoshana. 2021. “Shoshana Zuboff Speaks on Big Tech Regulation: Who
Knows? Who Decides Who Decides?” Observer Research Foundation America,
May 21, YouTube video, 00:42:46. https://youtu.be/7W9Teyj_yF0.
Zucchi, Kristina. 2020. “Is Bitcoin Mining Profitable?” Investopedia, Jun. 30. https://
www.investopedia.com/articles/forex/051115/bitcoin-mining-still-profitable.asp.
Zwolinski, Matt. 2016. “The Libertarian Non-Aggression Principle.” Social Philosophy
and Policy 32(2) (Spring): 62–90. https://doi.org/10.1017/S026505251600011X.
Index
blockchain, 85, 114, 194, 281, 288–89, cable television packages, NN and,
293, 295, 300, 313, 314, 333, 357; appli- 264–65
cations, 278; communities, 84, 86; CALEA. See Communications Law
decentralization in, 291; development, Enforcement Assistance Act
294; discourse, 278; law, 290; permis- Calhoun, John C., 352, 353
sionless, 290; software, 114, 288, 289, California ideology, 7, 66
293, 294, 295, 314 “California Ideology, The” (Barbrook
Blockchain Revolution, The (Tapscott and and Cameron), 3, 238
Tapscott), 23 California Privacy Protection Act, 184
Blockchains (company), 278 Cambridge Analytica, 51, 96–97, 133
Blue, Violet, 23 Cameron, Andy, 3, 7, 59, 70; anti-
Book of the Dead (Lovecraft), 111 statism and, 16
Bookchin, Murray, 282 Canadian Centre for International
Boorstin, Bob, 34 Governance Innovation, 315–16
Booz Allen Hamilton, 117 capital: accumulation of, 52; business
Border Gateway Protocol (BGP), 286, power and, 218; dark will of, 384;
287 democratizing, 278
Borgman, Christine, 192 capitalism, 20, 66, 67; accelerating, 385;
Borogan, Irina, 119, 120 communicative, 387; digital tran
Borsook, Paulina, 3; cyberlibertarianism scendence of, 239; free-market, 49,
and, 11; on Gilder, 13; philosophical 142, 241; frontier zones of, 384; hege-
cyberlibertarianism and, 14; on philo- mony of, 387; liberal, 21; runaway,
sophical libertarianism, 10–11; ravers 383; unregulated, 234, 249. See also
and, 12, 14 surveillance capitalism
boyd, danah, 104, 106, 107–8, 318 Carlson, Tucker, 51, 115, 116
Boyle, James, 242 Carolan, Jennifer, 193
Brand, Stewart, 58, 60, 61, 62, 63, 67, Carr, Nicholas, 194, 276, 211, 275
101, 200; computers and, 66; cyber Carusone, Angelo, 51
libertarianism and, 59; Media Lab Casey, Michael, 85, 86
and, 100; “new communalist” vision Castro, Daniel, 162, 184
of, 100; political libertarianism of, 66; Cathedral, 373, 385, 386
Turner on, 65–66 Cathedral and the Bazaar, The
Brandom, Russell, 140 (Raymond), 241
Breitbart, 50, 52 Catholic Church, 194, 195, 204, 205, 214,
Brexit, 18, 40, 51, 83, 387 216, 354; colonialism/violence and,
Brin, Sergey, 220 208; Reformation and, 208
Brooke, Heather, 23 Catholicism, Protestantism and, 204,
Brown v. Board of Education (1954), 5, 206
351 Cato Institute, 27, 130, 144, 371, 374
Buffalo, mass murder in, 180 CCDH. See Center for Countering Digi-
Burning Man festival, 14 tal Hate
Burrows v. Superior Court (1974), 301 CCRU. See Cybernetic Culture Research
Buterin, Vitalik, 289, 290, 333, 377 Unit
BuzzFeed, Squire and, 44 CDA. See Communications Decency
BuzzFeed News, 44 Act
CDT. See Center for Democracy and Christchurch Call to Eliminate Terrorist
Technology and Violent Extremist Content
censorship, xix, 35, 36, 39, 90, 91, 184, Online, 178
214, 216, 217, 224, 258, 265–66, 318, Christchurch massacre, 44, 179
325, 363, 364, 369, 393; anti-corporate, Christensen, Clayton, 72, 80, 143; dis-
267; circumventing, 257; defining, ruptive innovation and, 144, 192, 210
312, 313, 314, 315; free speech and, Christian Coalition of America, 137
302–17; history of, 315; internet and, Christian Identity, 18
34, 312, 313 Christianity, 10, 13, 14, 208, 214, 216, 354
Census Bureau, 295 Chrome, 96
Center for American Progress, 50 Chu, Arthur, 375–76
Center for Countering Digital Hate Chun, Wendy, 356
(CCDH), 41, 44, 180 Church Committee, 122
Center for Data Innovation, 184 CIA, 70, 358
Center for Democracy and Technology Cisco, 53, 59
(CDT), 9, 35, 52, 53, 89, 92, 94, 109, Citizens United case, 97, 308, 314
148, 253, 318, 319 citizenship: modified form of, 355; obso-
Center for Right-Wing Studies, 344 lescence of, 354; property ownership
Central Hudson v. Public Service Commis- and, 94; responsible, 31
sion (1980), 307, 308 Citron, Danielle Keats, 149, 154, 160,
centralization, 141, 281, 285, 286, 287; 163
advocates of, 282; decentralization City in History, The (Mumford), 199
and, 282, 284, 291; disdain for, 283 civil liberties, 33, 70, 98, 125, 156, 164,
Cerf, Vint, 145 366, 367; protecting, 72; rhetoric for,
CERN, 141 41
CFAA. See Computer Fraud and Abuse civil rights, 7, 8, 39, 90, 98, 134, 141, 156,
Act 173, 179, 283, 296, 297, 398, 399;
channel 2, licensing of, 159 defenders of, 147; democracy and, 40,
chans, 359–63 249; discourse of, 127–28; initiatives,
Chaos Communication Congress, 110 60; language of, 266; movement, 5,
Charles Schwab, 87 37, 38, 43, 44, 47; NN and, 261, 263,
Chee, Alexander, 248 264, 265, 267; organizations, 40, 45,
Chenou, Jean-Marie, 173 94, 99; privacy and, 97; setbacks for,
Chicago School of Economics, 30, 58, 41; struggles with, 71, 118; supporting,
326, 332, 351 59; threats to, 41
child abuse, 135, 160, 163, 304 civil servants, 367, 390
Chilling Effects Project, The, 304 civil society, 21, 161, 176, 178–79, 250,
Chollet, François, 274, 275 273; multistakeholder, 180
Christchurch Call, 179–80 civil war, 121, 352, 386
“Christchurch Call: Are We Multistake- class warfare, 31
holder Yet?, The” (Badii), 179–80 Clegg, Nick, 47, 48, 49
Christchurch Call Advisory Network, climate change, 64, 81, 108, 130, 369;
179 denial, 27, 46, 49, 80, 82, 116
“Christchurch Call to Action Summit,” Clinton, Bill, 152, 355
178 Clinton, Hillary, 91, 323, 355
80, 81; spread of, 140; strategies for, of, 89; cryptocurrency and, 88; human
141–42 rights and, 9; promotion of, 32; sur-
Denning, Dorothy, 249–50, 385 veillance capitalism and, 84
deregulation, 142, 147, 197, 229, 403 Digital Services Act (DSA) (2022),
determinism: generalized, 4; technologi- 184–85
cal, 7, 198, 201, 221–22 digital spaces, 287, 347
development, 345; commercial, 218; digital technology, 7, 29, 31, 32, 57, 58,
cultural, 207; historical, 225; housing, 90, 91, 113, 114, 138, 151, 152, 156, 173,
341, 342; methodology, 230; political, 177, 192, 201, 207, 256, 257, 261, 272,
222; social, 207, 222; software, 238; 277; advent of, 22, 74, 195, 236, 251;
technological, 59, 185, 224, 383 advocating for, 130, 144; anonymized/
Dewar, James, 202, 219 encrypted, 135; apocalyptic vision and,
Diamond and Silk, 50 382; assertions about, 23; behavioral
Die Zeit, 366 management and, 306–7; benefits of,
digital advocacy, xix, 84, 104, 108, 205, 203; criticism of, 129, 202; cyberliber-
227, 228, 358; rhetorical moves and, tarianism and, 29, 91, 160, 227;
82 debates surrounding, 52; democratiza-
digital culture, 59, 65, 101, 197, 230, 249, tion and, 22–23, 227, 273, 279, 338;
360, 382; development of, 112; fascist development of, 4, 66, 70, 92, 189,
currents in, 343–44; free and, 237; 195, 255; digital utopians and, 271;
primacy for, 209; print culture and, discussions of, xxiv, 99; embracing,
209–14, 221; studying, 45 57, 213; encryption issue and, 253;
Digital Currency Initiative (MIT), 85 engagement with, 360; ethical issues
digital enthusiasts, xxii, 6, 54, 55, 218 and, 100; evaluation of, 220–26; expe-
digital industry, growth/development of, riencing, 263; far-right politics of,
53 xxiv, 338, 339; fascism and, 342–59;
Digital Markets Act (DMA) (2022), free speech and, 302, 307; identifica-
184–85 tion with, 395; impact of, 58, 84, 102,
digital media, 18, 19, 75, 79, 212, 327, 108, 196, 210–11, 213; introduction of,
345, 347, 348; political work and, 340; 202; legality of, 154; multistakehold-
Renaissance and, 207; revolution, 24 erism and, 165; political libertarianism
Digital Millennium Copyright Act and, 26; politics and, 48, 101, 102, 338;
(1998), 93 power of, 115, 308, 381; print and, 82,
digital revolution, 54, 196, 198, 199–200, 203, 213, 226; proliferation of, 53, 130,
210; computers and, 338; impact of, 216, 401–2; propagandistic framing of,
212; transforming, 201 160; purpose/function of, 83; regulat-
digital rights, 9, 89–109, 128, 137, 144, ing, xxi, 35, 95, 148, 207; rightward
185, 280, 303, 306, 312, 315, 339, 362; extreme of, 338–39; spread of, 302,
advocating, 131, 363; defenders of, 104; 338–39; walls of separation and, 143;
free speech and, 303; lobbying for, world governance and, 164
164 Digital Technology and Democratic Theory
digital rights management (DRM), 92, (Bernholz, Landemore, and Reich),
244 280
digital rights organizations, 8, 35, 88–89, digital theory movement, 283, 382
110, 228, 297–98, 304, 339; criticism digital tools, 58, 219, 339, 340
digital utopians, 101, 202, 222, 271, 272; Dungeon Master, 372
term, 54 Dyson, Esther, 23, 66
digital wallets, 278 dystopianism, 6, 52, 358; digital, 280;
digitization, 205, 220, 272 fascist, 393
Dingledine, Roger, 249, 259, 260, 261
Directors Guild of America, 32 East Coast Code, 327, 331, 332
Discord, 286 eBay, 35, 157
discourse: cyberlibertarian, 205; digital, e-commerce, 239
31, 279, 286, 287; management, 81, 86; Eco, Umberto, 343, 344
political, 104, 225, 238, 284, 314; pro- ecofascism, 178
digital, 349; public, 81, 104, 148 economic activity, xxiii, 308, 314; unreg-
discrimination, 264, 268; gender/racial, ulated, 99
263; term, 262 economic growth, 15, 59, 142, 179;
disinformation, 33, 38, 190, 280, 369; internet-based, 145; technology and,
Covid-19, 47, 138; cyberlibertarianism 145; undermining, 132
as, 72–80; industry-based, 316 economic issues, 15, 248
DNS. See Domain Name System economic planning, 75, 76, 383
Dr. Dre, 8 economic theory, 13
Doctorow, Cory, 8, 13, 30, 135, 136, 242, economics, 3, 240, 353; neoliberal, 323–
318; reasonable discourse and, 84 24; open-market, 241
dogma: anarcho-capitalist, 389; anti e-democracy, 243
government, 298; cyberlibertarian, xx, education, 178, 220; digital replacements
xxi, xxii–xxiii, xxiv, 21–25, 25–26, 33, for, 80; disparaging, 80; higher, 80;
58, 68, 71, 90, 99, 111, 115, 186, 189, public, 7, 68, 351
221–22, 224, 236, 243, 258, 315, 324, Edwards, Paul, 57
339, 348, 359; cypherpunk, 88; inno EFF. See Electronic Frontier Foundation
vation, 143 egalitarianism, 5, 6, 292, 296, 386
Domain Name System (DNS), 257, 285 Egalitarianism as a Revolt against Nature
Domscheit-Berg, Daniel, 365, 366, 368 (Rothbard), 17
Donohue, Tom, 133 Eisenhower, Dwight, 59
DontKillCrypto.com, 301 Eisenstat, Yaël, 144
Dorgan, Byron, 391 Eisenstein, Elizabeth, 190, 193, 204,
Dorsey, Jack, 300 206–7, 211, 213, 222, 225; Jarvis and,
dot-com bubble, 13 212; McLuhan and, 200, 201; politics/
“double truth” doctrine, 78, 233 human agency and, 223; scholarship
Douglass, Frederick, 37 of, 205
Drexler, Eric, 374 Ekeland, Tor, 398
Driscoll, Kevin, 261 Electronic Frontier Foundation (EFF),
DRM. See digital rights management 9, 11, 12, 35, 52, 53, 88, 89, 90, 91, 98,
Drudge, Matt, 231 99, 109, 123, 136, 148, 156, 161, 163,
drug abuse, 382 185, 253, 268, 269, 298, 304, 307, 310,
drug dealers, 134, 135 316, 317, 318, 319; activism of, 155;
drug markets, 136 attack on democracy and, 96; Big
due process, xxiii, 170, 253, 307 Tech and, 302; boyd and, 105; cen
Dulong de Rosenay, Melanie, 287 sorship and, 315; code is speech and,
308–9; cryptocurrency explainers by, European Union (EU), 47, 149, 402,
314; digital privacy and, 141; digital 403; Internet governance in, 180–86;
rights and, 93; First Amendment and, technology regulation by, 183
311–12; future of, 94; lobbying by, 164; exceptionalism: digital, 314; internet, 28,
position paper by, 182; Privacy Badger 173, 174
of, 96; privatization and, 94, 95; regu- Exon, James, 149
lation and, 184; tech corporations and, extremism, 249, 374, 395; antidemo-
97; Twitter and, 303; worldview of, 155 cratic, 51; far-right, 18, 44, 390
Electronic Privacy and Information Extropians, 372
Center, 52, 90 ExxonMobil, 371
elites, 246, 276, 277, 344
Ellison, Larry, 36 Facebook, 33, 36–52, 68, 85, 103, 109,
Ellsberg, Daniel, 119 127, 133, 137, 139, 144, 148, 179, 183,
Ellul, Jacques, 340 184, 186, 190, 218, 235, 259, 265; analy-
“Embracing a Culture of Permissionless sis by, 41; bias of, 51; breaking up, 83;
Innovation” (Thierer), 144 business model of, 97; campaigns
empowerment, 5, 72; technological, 338 related to, 96; censorship and, 91; civil
encryption, 91, 130, 134, 363, 385, 389; liberties and, 41; civil rights groups
advocates of, xx; anarcho-capitalist, and, 41, 43; Community Standards
364; anonymization and, 249–61; and, 42, 44; criticism of, 37, 132;
antidemocratic politics and, 250–51; democracy suppression and, 44–45;
end-to-end, xxiii; implementing, 250– fact-checking and, 50; free expression
51; perfect, 253; source code, 310; and, 38, 69; hate promotion and,
using, 254, 255 44–45, 49, 50; influence of, 285–86;
Encyclopedia Dramatica, 394 integrity of, 47; privacy on, 96;
engagement, xxii, 46, 51, 73, 96, 165, Related Pages, 43; resources of, 286;
276, 280; citizen, 25; corporate, 168; Section 230 and, 157; SOPA/PIPA
intellectual, 79; lack of, 5; policy, 109; and, 35; voter suppression and, 40
political, 25, 224, 277 Facebook’s Community Standards, 44
Engelbart, Douglas, 61, 63 facial recognition, 298–99
English Civil Wars, 222–23 Fairchild Semiconductor, 59
Enlightenment, 19, 54, 206, 212, 213, fairness doctrine, 196
214–15, 216, 217, 220, 367, 386; politi- Fake Matt, 160–61
cal theories of, 20; promise of, 215 Faludi, Susan, 13
entrepreneurship, 67, 145, 153, 237 far right, 343, 345, 376, 394; anarcho-
Epstein, Jeffrey, 28, 102, 103, 104, 105 capitalism and, 337–42; rationalism/
equality, 29, 31, 220, 402; democracy AI and, 376; rhetoric of, 349
and, 385; justice and, 229; social, 5 far-right politics, 344, 345, 357, 363, 368,
Erhard Seminars Training, 60 371; digital technology and, xxiv, 281,
Ethereum, 293 295, 333, 377, 392; block- 338, 339
chain, 288–89, 294 Farage, Nigel, 368
ethics, 100, 102, 103, 107, 109, 362; AI, Farid, Hany, 163
104; cypherpunk, 367; digital, 109, 147 fascism, 20, 113, 208, 232, 233, 280, 337,
eugenics, 59, 385, 386 361, 362, 366, 367, 371; big government
European Data Protection, 185 and, 393; causes of, 374; communism
Free Press organization, 45, 318; saving From Gutenberg to Google (Shillings-
the internet and, 137 burg), 192
free software (FS), xix, xx, 7, 24, 217, From Gutenberg to the Global Informa-
218, 219, 237, 238, 240–41, 244, 359 tion Infrastructure (Borgman), 192
Free Software Foundation, 91, 240 FS. See free software
Free Software, Free Society (Stallman), Future Imperfect: Technology and Freedom
237 in an Uncertain World (Friedman),
free speech, xix, xxiii, 35, 90, 93, 112, 128, 374
129, 158–59, 161, 162, 238, 297–98, 321,
332, 359–63, 369; absolute, xx, 69, 111, Gab, 363
116, 361, 363; censorship and, 302–17; Gaiman, Neil, 248
decentralization and, 291; democracy Galloway, Alexander, 285
and, 302; digital rights and, 303; digi- Game of Thrones (HBO), 245
tal technology and, 307; doctrine of, GamerGate, 18, 345, 362
308; jurisprudence, 304, 305; libre Garcia, Jerry, x
and, 237; loss of, 140; online, 307; García-Martínez, Antonio, 348
privacy and, 296, 297; protection of, Garza, Alicia, 39
305; restraint on, 308; software and, gatekeeping, 25, 192, 247, 276, 277
218; technology and, 302, 307; term, Gates, Bill, 6, 63
xx; tools/organizations and, 272; vio- GCHQ, 124
lation of, 302, 312; walls, 306 Gellman, Barton, 117
Free Speech Party, 303 Generalized Data Protection Regulation
freedom, xix, xx, 3, 29, 68, 93, 130, 139, (GDPR), 184, 185
235, 244, 312, 350–51; choice and, 332; George Mason University, 27, 144, 352,
consumer, 131; control and, 90; 353, 371, 388
democracy and, 92; digital, 129; eco- Georgetown University, Zuckerberg at,
nomic, 220, 355; human, 321, 323, 355; 37, 41
individual, 28, 220; language of, 119, Gettysburg Address (Lincoln), 30
333; liberty and, 236, 238; negative, gift economy, 239, 241
220, 238; notion of, 228, 272; political, Gilder, George, xxi, 14, 28, 93, 349, 353,
23, 220, 353; principles of, 252; pursuit 357; cyberspace and, 12–13
of, 220, 296; rhetoric of, 129; social, 4, Gilder Technology Report, 13
23; specialized conception of, 236. See gilders, 12–13; described, 14–15
also internet freedom Gilman, Nils, 144
Freedom House, 324 Gilmore, John, 93, 312, 313, 314
freedom to connect, 319, 323 Gingrich, Newt, xxi, 93
Freedom2Contact Foundation, 91 Ginn, Aaron, 277
FreedomBoxes, 218, 219 Gitlin, Todd, 60
French Revolution, 215, 216, 217 Glaser, April, 96, 97
Friedman, David, 29, 110, 241, 374 Global Business Network (GBN), 67
Friedman, Milton, 26, 28, 30, 110, 326 Global Sustainable Tourism Council, 168
Friedman, Patri, 110, 374 global village, 197, 199
Friedman, Thomas, 357 global warming, 81, 382
From Counterculture to Cyberculture Gmail, 94, 95
(Turner), 4, 57 GNXP, 372
Hayek, Friedrich von, 12, 26, 28, 30, 31, HTTP, 285
72, 73, 76, 77, 195, 233, 235, 326; HUD, 328
antidemocratic extremism of, 38; Huffman, Steve, 163
centralization and, 283; collectivist Hughes, Eric, 111, 250
values and, 234; ignorance/political Hulu, 268
theory and, 74; on rational economic human rights, 8, 90, 121, 127, 134, 141,
order, 75 162, 168, 173, 174, 182, 183, 256, 259,
HBO, 245–46 264, 283, 297, 301, 312, 322, 404; con-
health care, 178, 229, 251, 403 tempt for, 178; digital rights and, 9;
Hegel, Georg Wilhelm Friedrich, 232 guarantors of, 26; law, 181; rhetoric of,
Heinlein, Robert, 374 296; trampling, 145; values, 186
Hemmati, Minu, 165, 166, 171 Human Rights Commission, 120
Hepting v. AT&T (2008), 123, 124 Human Rights Watch, 53
Here Comes Everybody (Shirky), 191 humanism, 101, 383
Heritage Center, 144
Heritage Foundation, 27, 148 I Hate the Internet (Kobek), 248–49
Herman, Bill D., 90 IA. See Internet Archive
Herrick, Matthew, 160, 161 IANA. See Internet Assigned Numbers
Herrick v. Grindr (2018), 161 Authority
Hidden Services, 256–57, 258 IBM, 103
High Noon on the Electronic Frontier ICANN. See Internet Corporation for
(Ludlow), 385, 396 Assigned Names and Numbers
Hindman, Matthew, 281 Icke, David, 116
history, 19, 196, 229; deterministic view identity, 233, 360; digital, 139; in-group,
of, 222; rewriting of, 270; social, 332; 49; political, 338; sexual, 46; social,
technological, 332 344
Hitler, Adolf, 342, 343, 356 ideology, 317; cyberlibertarian, 302;
HIV, 183, 382, 383 right-wing, 29; totalitarian, 358
Hobby Lobby decision, 312 IEEE, 285
Hofstader, Richard, 115 Ifill, Sherrilyn, 38, 39
Holder, Eric, 135 Ignatius, David, 139, 140
Holocaust, 135, 359; denial, 111, 369 ignorance, 73, 76, 77, 141, 215, 276;
Hoofnagle, Chris, 141; denialism and, knowledge and, 75; political theory
130–31 and, 74; promoting, 80; smooth wall
Hoover, J. Edgar, 122 of, 78; veil of, 160
Hoppe, Hans-Hermann, 26, 110, 128, Industrial Revolution, 210
341, 342, 374; libertarianism and, 16 information: communication and, 170;
Hoppe Caucus, 341 compiling, 252; controlling, 210, 250;
Horowitz, Ben, 348 democratization of, xxiii, 157; free, 80;
Horowitz, Steve, 341 government, 231–32; material appear-
How to Be a Digital Revolutionary (Blue), ance of, 193; personal, 251; private,
23 203; provision of, 181; sharing, 240;
How to Destroy Surveillance Capitalism spread of, 202; transnational flows of,
(Doctorow), 84 323
HP, 59 Information Age, 202, 354
information technology, 11, 86, 109, 208, of, 225; operational management of,
217 176; original, 247, 332; power of,
Information Technology and Innovation 376; remoteness and, 223; saving,
Foundation (ITIF), 132, 162; techlash 136–39, 139–41; as second coming of
and, 133 print, 210; unregulated, 219; visions
InfoWars, 370 of, 239
infrastructure, 172; bills, 88; critical, 349; Internet Archive (IA), 9, 246–47
digital, 166, 290; recovery, 252; tech- Internet Assigned Numbers Authority
nological, 257 (IANA), 166, 167
Infrastructure Investment and Jobs Act Internet Association, 154–55, 164
(2021), 300 Internet Corporation for Assigned
Infrastructure Transparency Initiative, Names and Numbers (ICANN), 166,
168 167, 168, 174, 176
Inglis, Chris, 140 Internet Engineering Task Force (IETF),
Innis, Harold, 200, 202, 225 287
innovation, 31, 87, 141–45, 162, 186, 229, internet freedom, xix, 11, 68, 84, 90, 91,
301, 319, 320, 392; disruptive, 144, 192, 120, 140, 155, 161, 238, 258, 272, 317–
220; economic, 109; permissionless, 25, 339; discourse of, 324; term, 30;
144, 306, 402, 403; stifling, 141, 143; threat to, 324
technological, 98–99; term, 92; zones, Internet Freedom Coalition, 91
278 Internet Governance Lab, 168
Innovation Delusion: How Our Obsession Internet Histories: Digital Technology,
with the New Has Disrupted the Work Culture and Society, 112
That Matters Most, The (Vinsel and Internet Observatory, 9, 99, 109, 140
Russell), 142 Internet Registries, 176
Instagram, 41, 45, 48, 59 internet service providers (ISPs), 124,
Institute for Democracy and Coopera- 262, 263, 264, 270; censorship powers
tion, 120 for, 265–66; NN and, 267, 268
Intel, 52, 388 Internet Society, 316, 325
intellectual property, 24, 242, 320, 325, Investopedia, 294
332; ownership of, 8; protecting, 32 iOS, 48
intelligence, 25, 284, 378; collective, 231 iProphet, 395. See also Auernheimer,
intelligence services, 122, 231 Andrew
interactive computer service (ICS), 151 IQ tests, 379
Intercept, The, 129 IQ theories, 59
Internal Revenue Service, 87 IRA, 301
International Students for Liberty Iraq War, 116
Conference, 341 Islamic State of Iraq, 49, 124
internet: as civilization threat, 364; con- ISPs. See internet service providers
trol over, 167–68, 249; creating, 154; It Came from Something Awful (Beran),
democratic, 216; freedom space, 33; 359
future of, 365; magical powers of, 240; ITIF. See Information Technology and
martyrdom of, 331–32; multinational Innovation Foundation
corporations and, 215–16; national Ito, Joi, 101, 102
sovereignty and, 169; negative effects It’s Complicated (boyd), 105
Landa, Ishay, 20–21, 342, 350, 351, 356, political, 7, 12, 15, 16, 17, 18, 26, 66,
386, 387; protofascism and, 353 111, 113, 114, 141, 245, 359; promotion
Lapham, Lewis, 372 of, 28; radical, 110; right-wing, 113,
law enforcement, xx, 136, 253, 255; 117, 363, 364, 370, 388
biometrics and, 299, 300 liberty, 125, 126, 127; computer-enabled,
Lawfare, 124 129; economic, 352, 353; freedom and,
Leahy, Patrick, 32, 34 236; political, 351, 356; privacy and,
Leary, Timothy, 14, 59, 60 128; term, 236–37
Ledyard, John, 391 Liberty and Tyranny: A Conservative
left wing, 19, 352 Manifesto (Levin), 236
Lemley, Mark, 262 Liberty League, 383
Lenin, Vladimir, 239, 383 Liberty under Attack Publications, 391
Lepore, Jill, 71, 108, 143 Licklider, J. C. R., 140
Lessig, Lawrence, 8, 28, 137, 231, 242, 331, Liebling, A. J., 215
332, 333; code and, 327, 328, 330; code Lighthizer, Robert, 164
is law and, 325; on cyberspace, 329; Lincoln, Abraham, 30
internet law and, 262; rhetoric of, 329 Linux Foundation, 9
LessWrong, 372, 376, 377, 379, 380 LiveJournal blog, 395
Letter from Birmingham Jail (King), 37 Lochnerism, 307, 308, 312, 315
Letters of the Republic (Warner), 201, 222 Locke, John, 20, 21, 72, 237, 356
Leviathan, 386 Loris.ai, 106, 107
Levin, Mark, 65, 236 Lotus Corp., 93
Levine, Alexandra S., 106 Lovecraft, H. P., 111, 381
Levine, Yasha, 35–36, 108; on EFF, 93, Lovink, Geert, 174
95, 97; Lowery and, 36; privacy and, Lowery, David, 35–36
94; on SOPA/PIPA, 35 Lublin, Nancy, 106
Levy, Pierre, 63, 231 Luddites, 131–34, 401
Lewinsky, Monica, 231 Ludlow, Peter, 250, 385, 396
Lewman, Andrew, 136, 257, 258 Ludwig von Mises Institute, 341
LGBTQ literature, 395 Ludwin, Adam, 314
Liberal Democrat Party, 72 Lukin, Vladimir, 120
liberalism, 20, 269; democratic gover- Luther, Martin, 194, 208, 201, 204, 210
nance and, 172; denationalized, 172; Lyft, 31, 54, 228
economic, 7, 20, 21, 170, 171–72; lan-
guage of, 356; market, 170; political, Machine Dreams (Mirowski), 57
20, 170; term, 170 Machine Intelligence Research Institute
libertarian, xx, 279, 351; doctrine, 15, 326; (MIRI), 372, 376, 377–78, 380, 381
term, 10 machine learning (ML), 274
Libertarian Party, 388 Machinery of Freedom, The (Friedman),
libertarianism, 23, 59, 115, 390; alt-right 374
and, 341; characterization of, 16; as “Machinic Desire” (Land), 384
conspiracy theory, 18; contemporary, Mack, Zachary, 266
77; cyberlibertarianism and, 9–16, 18; MacLean, Nancy, 349, 351
digital technology and, 26; neolib Mad As Hell: How the Tea Party Move-
eralism and, 31; philosophical, 15; ment Is Fundamentally Remaking Our
Mill, John Stuart, 20, 133, 237, 314, 327, Mother of All Demos, 61
351 Motherboard, 106
Milton, John, 314 Mozilla, 9, 35, 52
Minsky, Marvin, 382 MPS. See Mont Pelerin Society
MIRI. See Machine Intelligence MSI Integrity, research methodology
Research Institute and, 168
Mirowski, Philip, 28, 57, 73, 76, 77, 81, Mueller, Milton, 174, 179; analysis by,
104, 284, 324, 326; climate change 169; anarcho-capitalism and, 172;
and, 80; climate denialists and, 131; far-right ideas and, 169–70; globalism
“double truth” doctrines and, 233; on and, 170; on ICANN, 167, 168; liber-
Hayek, 74; on marketplace of ideas, alism and, 170, 171–72; on multi
74; murketing by, 234–35; neoliberal- stakeholderism, 173; Russia hoax and,
ism and, 54, 72, 74; on neoliberals/ 115
double truth, 77; open-source/free Multi-stakeholder Initiative Integrity,
software and, 235–36; on populist 168
philosophy, 74; on technology, 54; multistakeholder initiatives (MSIs), 168
Wikipedia and, 77, 241 multistakeholder processes (MSPs), 166
Mises, Ludwig von, 26, 28, 30, 73, 326 multistakeholderism, 149, 155, 164, 165,
misinformation, 36, 39, 40, 47–48, 174; criticism of, 170–71; democratic,
50–51, 181; public, 269 177; digital roots of, 166–67; internet
MIT. See Massachusetts Institute of governance and, 168; language, 166;
Technology policymaking and, 173; politics and,
Mitchell, William, 329 169; promoting, 175; vision of, 166
modernization theory, 323 “Multistakeholderism vs. Democracy”
Moglen, Eben, 214, 215, 216–17, 218, 243; (Gurstein), 177
corporate power and, 219; print- Multnomah County Common Law
digital parallel and, 219 Court, 399
Moldbug, Mencius, 357, 371, 372, 373, Mumford, Lewis, 199, 200, 340
374, 375, 385, 386, 387 murketing, concept of, 234–35
money: dark, 104; democratizing, 278 Musiani, Francesca, 287
money is speech, 308 Musk, Elon, 17, 29–30, 36
money laundering, 134, 135 Mussolini, Benito, 342, 343, 383
Mont Pelerin Society (MPS), 20, 30, 35, Myth of Digital Democracy, The
73, 77, 233, 326, 351 (Hindman), 281
moral panic, 131–34
Moravec, Hans, 382 NAACP, 41, 45
Morozov, Evgeny, 53, 205, 228–29, 325; NAACP Legal Defense Fund, 38, 40
law/economics tradition and, 326; on Nakamoto, Satoshi, 194, 195, 296;
open government/accountability, 231; Bitcoin and, 291, 293
solutionism and, 52; on technology/ Nameless World, 360
anti-technology, 52; on technology/ Nanashii Warudo, 360
neoliberalism, 54 Napoleon Bonaparte, 217
Morrison, Aimée Hope, 12 Narula, Neha, 293
Morrison, Toni, 218 National Center for Supercomputing
Mosaic web browser, 349 Applications, 349
National Defense Authorization Act Netflix, 263, 264, 266, 267, 268
(2020), 163 “Network Neutrality, Broadband Dis-
National Economic Council, 300 crimination” (Wu), 262
National Emergency Library (NEL), networks, 169, 263, 292, 293; digital, 262;
246, 247, 248 distributed, 285; insecure, 251; public,
National Health Service, 283 251
National Review, 266 Networks and States (Mueller), 169
National Rifle Association, 35 Never Let a Serious Crisis Go to Waste
National Security Agency (NSA), 114, (Mirowski), 234
117, 123, 124, 126, 219, 255 New America, 28, 96
nationalism, 28, 198, 343; fascist, 347 New America Foundation, 148, 318
nationalists, 394; Hindu, 49 New Communalists, 12, 61
nativism, 347, 359 New Deal, 236, 307
Nature of the Book, The (Johns), 201 New Digital Age, The (Schmidt and
Nature of the Future, The (Gorbis), 23 Cohen), 23, 53, 323
Nazis, 339, 398; Jews and, 367 New Economy, 14, 67
Nazism, 111, 112, 208, 359, 397, 399; New Germany, 343
cyberlibertarianism and, 337; survival New Information Economy, 241
and, 356 New Left, 60
NBA, 32 New Media, 376
NBC News, 45 New Republic, 139
Necronomicon (Lovecraft), 111 New Scientist, 384
Negroponte, Nicholas, 23, 28, 79; cyber- New York Public Library, 150
libertarian politics and, 100; One New York Times, 262, 264, 265, 349, 366,
Laptop per Child program and, 101 396
NEL. See National Emergency Library New York v. Lochner (1905), 307
Nelson, Thomas, 310 New Yorker, 115
Neoclassical Price Theory, 326 Newfield, Christopher, 240
neofascism, 346 Newhoff, David, 35, 137, 248
Neoliberal Thought Collective (NTC), Newman, Russell A., 263, 269, 270
20, 58, 73, 74, 77, 233, 234, 332 Newmark, Craig, 137
neoliberalism, 23, 72, 232, 236, 241, 325– Newsmax, 70
26, 351; contemporary, 332; double NGOs, 52, 156, 164, 167, 287; digital
truth of, 74, 77; libertarianism and, rights, 99, 100; internet freedom and,
31; as populist philosophy, 74; right- 318; rise of, 89–90; Tor and, 257
wing, 283; technology and, 54; under- Ni channeru (2channel), 360
standing, 77 Nike, 32
neoliberals, described, 28–29 Ninth Circuit of Appeals, 123, 124
neo-Nazis, 44, 394, 395 Nishimura, Hiroyuki, 360
neoreaction (NRx), 370–81, 385, 387 NN. See net neutrality
Net Exchange, 391 nonaggression principle (NAP), 389
net neutrality (NN), 91, 261–70, 298, neo-fascist movement, 339, 351
363; cellular networks and, 264; nonprofit organizations, 8, 92
debate over, 263; ending, 137; service Norquist, Grover, 399
blocking and, 264; support for, 307 North, Oliver, 35
238, 273, 284, 285, 368, 380; social, concerns about, 83; corporate abuses
190; technology, 136, 179 of, 127; digital, 97, 141, 315; free speech
Powers, Shawn, 323 and, 296, 297; legislation, 131; liberty
PragerU, 50 and, 128; notion of, 128, 298; princi-
press freedom, 37, 128, 216, 370 ples of, 252; promoting of, 81; protect-
price theory, 326 ing, 181, 318; right to, 98, 259, 260,
print, 223; control of, 214; cultural sig 296; surveillance and, 94, 296–302
nificance of, 193; democracy and, 221, Privacy Badger, 96
224; digital technology and, 213; “Privacy Week,” 301
esteem/distinction and, 225; negative privatization, 31, 94, 167, 178
effects of, 225; politeness and, 225; Prodigy, 150
positive effects of, 214; republican producerism, 347, 348, 359
ideology of, 224; world before, Program on Platform Regulation, 147
203–5 Progress and Freedom Foundation, 144
print culture, 201, 225; digital culture progressivism, 20–21, 200, 340, 364, 375
and, 209–14, 221; scholarship on, 202, propaganda, 120, 275, 365; America First,
210; school system and, 198 325; anarcho-capitalist, 350; anti-
print-digital analogy, 219, 221–22, 226; Catholic, 355; antidemocratic, 18, 46,
deployment of, 209–14; meaning of, 119; anti-left, 359; anti-science, 65;
220 anti-vaccine, 46; cyberlibertarian,
print technology, 194, 214; democratiza- 34, 339; cypherpunk, 367; digital
tion and, 223; social change and, 222 technology, 58; fascist, 340; industry-
printing press, 201–2, 206, 208, 211, 213; supporting, 58, 64; neoliberal, 80;
analogy with, 190–95, 214–20, 220– protofascist, 114; rightist, 242;
26; benefits of, 221; digital technology tobacco, 64
and, 82; impact of, 207; invention of, property, 94, 323, 364; citizenship, 94;
198, 203, 206; metaphor with, 191; private, 127, 128; right to, 356. See also
myth, 214–20, 220–26; regulating, intellectual property
219–20; second coming of, 221 “Proposal for Network Neutrality, A”
Printing Press as an Agent of Change, The (Wu), 262
(Eisenstein), 200, 201 ProPublica, 40
printing revolution, 190, 192, 193, 196, PROTECT IP (Preventing Real Online
204, 225; analysis of, 222; computers Threats to Economic Creativity and
and, 189; defining, 212; democratic Theft of Intellectual Property) Act, 91,
governments and, 213–14; digital tech- 95, 109, 129, 137, 138, 139, 181, 261;
nology and, 203; impact of, 205–9, attacks on, 36; blackouts and, 36;
224; power of, 226; second coming of, campaign against, 32–36; censorship
189, 210, 221; social/political conse- and, 34; opposition to, 32–33, 35; pro-
quences of, 202; as technological/ testing, 33, 34–36; protectionism, 21,
social change, 221 105
privacy, xx, xxi–xxii, 28, 70, 94, 95, 96, Protestantism, 209; Catholicism and,
107, 125, 129, 256, 320, 322, 332; abso- 204, 206; colonial history and, 208
lute, 128; advocating for, 203; Protocol: How Control Exists after Decen-
anonymity and, 134, 260; civil rights tralization (Galloway), 285
and, 97; conceptualization of, 298; Protocols of the Elders of Zion, The, 111
323; responsibilities and, 237; tradi- Schneiderman, Eric T., 137, 265
tional, 311. See also digital rights; Schoen, Douglas, 70, 71
human rights Schumpeter, Joseph, 142, 143–44, 210
Rinehart, Will, 184 Schwarzman College of Computing, 104
Rivlin, Gary, 14 science: development of, 82; real, 19;
Road to Serfdom, The (Hayek), 283 tobacco, 49, 63, 108, 130
Roark, Howard, 72 scientific revolution, 202, 206, 212, 222
Robinson, Rashad, 39, 45 search engines, 154, 182
Rockwell, Lew, 341 seasteading, 356
Roko’s basilisk, 379, 380, 381 Second Amendment, 393–94
Romero, Anthony, 99 Secrets of Silicon Valley (Bartlett), 155
Roose, Kevin, 349 Section 230 (CDA), xxiv, 28, 68, 81, 82,
Roosevelt, Franklin D., 383 84, 109, 129, 147, 228, 325; blanket
Rosenworcel, Jessica, 265–66 immunity for, 155; criticism of, 162;
Ross, Alexander Reid, 337 cyberlibertarian aims of, 156; dealing
Rossetto, Louis, 13 with, 230; defenses of, 156; enactment
Roszak, Theodore, 60 of, 153, 158; expansion of, 150, 164;
Rothbard, Murray, 16, 17, 26, 28, 72, 110, First Amendment and, 160; free
113, 195, 214, 241, 340, 341, 374, 389 speech/censorship and, 307; function
Rove, Karl, 27, 29 of, 291; global frame for, 161–64; his-
RTBF. See right to be forgotten tory of, 149–54, 157–58; ignorance
Ruane, Kathleen Anne, 304–5 around, 160; immunity from, 154;
Rubin, Jerry, 60 interpreting, 154–61; passage of, 152;
rule-making, 76 protections from, 157, 159–60, 163;
rule of law, 92, 118, 119, 276, 324, 325; repeal of, 138, 149, 163, 164; revising,
cynicism of, 117; democratic, 370 162; social media and, 155–56; support
runaway tendencies, 384 for, 163
Rusbridger, Alan, 118 Segway, 142
Russell, Andrew, 142, 143 self-interest, 53, 174–75, 276, 358
Selkis, Ryan, 88
SA. See Something Awful SemiDisk Systems, 388
St. Laurent, Andrew M., 230 Sequoia Partners, 53
Salon, 71 SESTA, 138, 139, 148
San Bernardino terror attack, 308 sex trafficking, 109, 138
Sandberg, Sheryl, 44, 133 sexual abuse, 43
Sandifer, Elizabeth, 380, 388 Shapiro, Ben, 51
Santa Clara Fire Department, 266 sharing economy, 15, 31
Saturnalia, 360 Shaw, Tamsin, 78, 117, 288; on legality/
Sauerberg, Lars Ole, 192–93 legitimacy, 118; on Snowden/Putin,
SavetheInternet.com, 137 120
Scahill, Jeremy, 129 Shillingsburg, Peter L., 192
Scaife, Richard Mellon, 27 Shirky, Clay, 4, 13, 23, 29, 30, 54, 67, 72,
Schmidt, Eric, 23, 53, 54, 183, 220, 375 79, 80, 191, 192, 200, 201, 205, 213,
Schmitt, Carl, 118, 171, 350 218, 226, 231, 271; on books, 210;
Schneider, Nathan, 281 Trithemius and, 194
transparency, xxi, 162, 177, 319, 369; Understanding Media (McLuhan), 197,
conspiracy and, 366; government, 199
278; partisan gain and, 368; radical, Understanding Open Source and Free
365 Software Licensing (St. Laurent), 230
Trap: What Happened to Our Dream of Uniswap, 87
Freedom, The (Curtis), 4 Unite the Right, 44
Trithemius, Johannes, 191, 192, 194 United Nations, 161, 164, 165, 166, 175,
Trudeau, Justin, 316 176, 322; multistakeholderism and,
Trump, Donald, 18, 49, 51, 83, 118, 129, 173; technology and, 174
156, 162–63, 179, 370, 373–74, 387, United Nations Commission on Science
392, 394; Assange and, 369; election and Technology for Development, 175
of, 40, 99, 340–41; fascism and, 347, United States v. Moalin (2020), 123
368; hate campaigns/conspiracy theo- Universal Declaration of Human Rights
ries of, 362; populism and, 71; rhetoric (UDHR) (1948), xxiii, 90, 91, 93, 237,
of, 115; secrecy of, 369; Section 230 258, 297, 318, 321, 323, 324; Article 12
and, 138, 149, 163, 164; third position- of, 322; Article 19 of, 322
ism and, 70 University of Chicago Law School, 325,
Trump and the Media (Boczkowski and 333
Papacharissi), 276 Unmanned Aircraft Systems (UASs), 145
Truth Social, 363 U.S. Capitol, insurrection at, 46
TTP. See Tech Transparency Project U.S. Chamber of Commerce, 32, 133
Tucker Carlson Tonight, 116 U.S. Congress, 300, 304
Tunney, Justine, 374, 375 U.S. Constitution, 5, 92, 93, 256, 296,
Turkewitz, Neil, 324 301, 304, 329
Turner Diaries, The, 393 U.S. Defense and Advanced Research
Turner, Fred, 4, 57–58, 59, 60, 61, 62, 67, Projects Agency (DARPA), 391
108, 112, 276; on Brand, 65–66, 100; U.S. Defense Department, 391
New Communalists and, 12 U.S. Federal Reserve System, 296
Tushner, Rebecca, 150 U.S. House Energy and Commerce
21 Digital Myths: Reality Distortion Committee, 163
Antidote (Strömbäck), 189 U.S. House Intelligence Committee, 125
Twenty-Six Words That Created the Inter- U.S. House of Representatives, 299
net, The (Kosseff), 157 U.S.-Mexico-Canada Agreement
Twitter, 30, 33, 52, 129, 150, 179, 202, (USMCA), 162, 163, 164
259, 265, 272, 300, 302, 392; EFF and, U.S. National Security Agency, 116
303; Section 230 and, 157; SOPA/PIPA U.S. Naval Research Lab (NRL), 256
and, 35. See also X (Twitter) U.S. State Department, 11, 272, 368
Twitter Revolution, 271 U.S. Supreme Court, 5, 38, 69, 97, 152,
2Blowhards, 372 301, 312, 353; free speech and, 305
2channel, 360, 361 U.S. Treasury Department, 300
“Use of Knowledge in Society, The”
Uber, 15, 31, 54, 156, 183, 228 (Hayek), 75, 76, 233
UDHR. See Universal Declaration of Usenet, 158, 313, 361
Human Rights USMCA. See U.S.-Mexico-Canada
Ulbricht, Ross, 257 Agreement
Williams, Alex, 194 and, 91; digital rights and, 137; free
Williamson, Vanessa, 71 expression and, 69; peer-to-peer and,
Wilson, Cody, 191, 393, 394 293. See also Twitter
Winn, Joss, 321
Winner, Langdon, xxi, 13, 28, 59, 70, Y Combinator, 8, 99, 348, 373
281; centralization/decentralization Yahoo, 59; SOPA/PIPA and, 35
and, 282; cyberlibertarianism and, 3; Yarvin, Curtis, 357, 371, 372. See also
decentralization and, 282, 283 Moldbug, Mencius
Wire, The, 245 Yellen, Janet, 88
Wired, 3, 13, 93, 198, 243, 277, 357, 360, York, Jillian C., 315, 318
385; neoliberal fantasies of, 240; NN YouTube, 9, 33, 68, 179, 247, 267; PIPA
and, 268 and, 35; Section 230 and, 157
Wittes, Benjamin, 154, 160 Yudkowsky, Eliezer, 376, 379, 381, 387;
Wolff, Edward, 295 MIRI and, 377, 378; rationalism and,
work-as-commodity, 239 371
World Economic Forum, 133
World of Warcraft, 263 zcash, 128
World War II, 18, 62, 232, 324, 342, 367 Zeran v. America Online (1997), 151, 152–
World Wide Web, 157, 158, 167, 340; 53, 154, 159
Berners-Lee and, 137; centralization Zhang, Sophie, 47
of, 141; failure of, 141 Zittrain, Jonathan, 28, 318
World Wide Web Consortium, 92 zoning, 341
“Would You Feel Differently about Zuboff, Shoshana, 306, 402; analysis by,
Snowden, Greenwald, and Assange If 85, 86; criticism by, 87; decentraliza-
You Knew What They Really tion and, 89; democracy and, 85;
Thought?” (Wilentz), 114–15 surveillance capitalism and, 83, 84
Wozniak, Steve, 6, 137 Zuckerberg, Mark, 44, 46, 51, 220; civil
Wu, Tim, 28, 137, 262 rights and, 38, 39, 40; criticism of, 38,
Wyden, Ron, 150, 152, 391; CDA and, 40; freedom of expression and, 37;
149, 157; digital technology and, 151 hate groups and, 45; politics and, 36;
testimony by, 96
X (Twitter), 8, 68, 148, 235, 304, 305, Zuckerman, Ethan, 293
306; accounts on, 292; censorship Zwolinski, Matt, 389
DAVID GOLUMBIA (1963–2023) was associate professor in the English
department and the Media, Art, and Text PhD program at Virginia
Commonwealth University. He was author of The Cultural Logic of
Computation and The Politics of Bitcoin: Software as Right-Wing Extremism
(Minnesota, 2016).