Affsprung 1
DIGITAL ANALOGY
By Daniel Affsprung
The book of nature open lies,
With much instruction stored;
But ’till the Lord anoints our eyes,
We cannot read a word…
John Newton 1
We are being asked to teach our phones and computers about us, to report when they get
things wrong, or simply to grant permission for them to learn without our explicit help,
intentions, or awareness. From a user’s perspective, their abilities creep slowly ahead,
interrupting the bland authority of a textbook with the simple errors of a preschooler. We can
often understand when things go wrong, feeling amusement at the simple mistakes made by
smart technologies. Somewhere, remotely connected to this phenomenon in our minds, is a
nebulous and concerning set of institutions, gathering and listening, working to enable the
computers to do better, trying to describe the world in definite terms to things we say are like
brains in locked rooms. They remember everything they are told, never make anything up, and
follow any instructions. But they cannot meet us entirely on our terms, so we must speak in a
language they can understand. This is to say, if we want them to imagine our world for us, to
offer their incredible powers of recollection, prediction, and creation to the things in our own
lives, we must recreate it for them. We love its incredible memory, its obedience, its simulations,
and we fear its inscrutability, its indisputability, its hold over us. Closer than the sides of a coin,
these all emerge from the same nature. It plays the games we ask it to, and we are getting better
at expressing the world in terms it can understand.
Ubiquitous recommendations and interventions created by simulations of human
attentional and emotional fluxes have created the sense that we too are being shaped by the demands of digitization.

1 John Newton, “The Book of Nature Open Lies.” Quoted in Peter Harrison, “The Book of Nature and Early Modern Science,” in The Book of Nature in Early Modern and Modern History, ed. Klaas van Berkel and Arjo Vanderjagt (Leuven: Peeters, 2006), 1-26.

Often, this takes the common form of recognizing that phones and
computers are doing much more than users ask them to, mixed with the sense that this is mostly
in our best interest. They are offering us things and denying us others; we feel annoyed when
they get it wrong but also uneasy when they get it just right. I suggest that the feeling of being
misunderstood holds as much critical potential as that of being mistreated. We should ask if the
translations we are making to accommodate the needs of digitization are true to what we are or
want to be. We should ask if, in their dark studios, the simulators are creating ever more accurate
depictions of the world or if we are fogging their windows, taking off our glasses, and
applauding the results.
All rules theorize about relationships and essences in their governed objects; thus we
critique laws on the basis of unconstitutional treatments of humans. When a rule is seen to ignore
something essential about its object, we have grounds to reject it. Programs do something even
more dramatic than law, which approximates through language. Digital rules discretize: they
create unambiguous categories and hierarchies. Translation from a real, imaginary, or symbolic
phenomenon to a digital one is defined and limited not only by tools that digitize but also by our
willingness to accept the analogies, selections, and simplifications they necessarily employ. In
the simulation of both mechanical and human events, believable and functional analogies are the
results of histories of observation, informing the design of the mathematical models and
algorithms that will replace human deliberation.2 For cases concerning voluntary behavior, the
relevant questions are what was previously being considered and sought and what kinds of
observations can be made that capture these fluxes and ideals.
Popular concerns and hesitations surrounding new computational processes of human
prediction are most articulate when they cohere around notions of privacy, exploitation, and
injustice; in short, tool criticism. We seem less able to define an alternative perspective from
which predicting, recommending, or automating morally consequential, political, and emotional
affairs looks questionable due to the analogies it necessarily employs. In the creation of a
mathematical model of an isolated natural process, a scientist seeks to determine laws by which
the process operates. These models can be tested as technical events: the process simulated must really behave in an isometric way with the real or be improved to do so.3

2 See Matteo Pasquinelli, “Three Thousand Years of Algorithmic Rituals: The Emergence of AI from the Computation of Space,” e-flux 101 (June 2019), https://www.e-flux.com/journal/101/273221/three-thousand-years-of-algorithmic-rituals-the-emergence-of-ai-from-the-computation-of-space/.

But simulation can also
be an ideological event for which there is no test, merely acceptance or rejection: if the analogies
employed in simulations that predict our actions are believed, the mechanisms can ‘work on us’
no matter the model they employ. It has of course been asked before whether what these programs simulate, namely human behavior, is governed by immutable laws. We have established programs that assign sentiments to words in a text, likelihoods of bail violation to prisoners, or metrics to a good employee, a good credit risk, or a good partner. Does this mean
we believe that sentiment, emotion, or personality abides by laws adequately determined through
correlation to similar observations of superficial tracked behavior? Or does our confidence that
some order is there, beneath the noise and chaos of the real world, allow automation too readily?
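The kind of program described above can be made concrete with a deliberately minimal sketch of lexicon-based sentiment scoring. Everything here is hypothetical: the word list and weights are invented for illustration, and real systems induce such weights from tracked behavior rather than fixing them by hand. The structure, however, is the point: sentiment is treated as a fixed, law-abiding property of individual words.

```python
# A crude illustration: sentiment treated as a fixed, law-abiding
# property of individual words. The lexicon below is hypothetical;
# real systems learn comparable weights from observed behavior.
SENTIMENT_LEXICON = {
    "good": 1.0, "great": 2.0, "love": 2.0,
    "bad": -1.0, "terrible": -2.0, "hate": -2.0,
}

def score_sentiment(text: str) -> float:
    """Sum per-word scores; words outside the lexicon count for nothing."""
    words = text.lower().split()
    return sum(SENTIMENT_LEXICON.get(w, 0.0) for w in words)

# The ambiguity of real language simply disappears: irony, negation,
# and context are all invisible to the rule.
print(score_sentiment("not a good day"))  # 1.0; the negation is ignored
```

Whatever its adequacy, once such a mechanism is accepted it can ‘work on us’ exactly as described above, no matter the model it employs.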
When what is simulated has no single procedure, norm, or essence, simulation is
simultaneously an act of creation and of redefinition. Because of their undemocratic creation,
opacity, and resilience to critique, the redefinitions effected by today’s simulations of human
thought, attention, and desire are perhaps more consequential than the unavoidable changes in
social understandings of these throughout history, even those which are authoritative but not
reified as automatic procedures. But the ways in which our thinking has changed in the last
century, especially in relation to our bodies, minds and language, make for a difficult starting
point to reject what we rightfully recognize as the treatment of computational capitalism and
government by algorithms. Shaped by the information and communication technosciences of the
20th century, the tandem development of current concepts of human and artificial intelligence has
tended to sideline differences between the two as particularities of a single general phenomenon.4
It is up for debate whether decisions which are still the purview of human intelligence remain so
due to a lack of data and tools or something more substantive. The right redefinitions of
processes that currently resist digital simulation could render them amenable to computational
management and prediction, so long as believable grammatizations and analogies are found.
Consider the interrelated concerns of health and work. As the body is increasingly understood as
a readable and insightful text for the technoscientific gaze, definitions of health and embodiment
become both consolidated and dependent on external authorities for legitimacy.

3 Wilden, System and Structure, 157.
4 See N. Katherine Hayles, How We Became Posthuman (Chicago: University of Chicago Press, 1999), and Jean-Pierre Dupuy, The Mechanization of the Mind (Princeton: Princeton UP, 2000).

The pre-subjective basis apparently provided by body data opens the door to smart interventions in any
domain where body data is found to influence or predict a measurable norm. It is easy to
understand why the body would become the site of increased surveillance by both authorities and
users themselves, as faith has been lost in the ability of the individual to discern, report, record,
and assess the fluxes relevant to various norms in a manner amenable to computation. As these
norms become defined as the products of procedures to be optimized, programs differing only in
the details, it no longer makes sense to encourage or allow the individual to maintain control
over the relevant inputs. In the era of big data analytics, the deficiency of the self in managing its
own needs and improving its own performance is hardly different from the deficiency of the
library in the age of Google. By measuring a growing domain of human life in reference to a
finite set of options or metrics, we clear the ground for automation. Capital, in unifying health-related costs, performance-related earnings, and much else besides, provides a crucial basis for
these redefinitions. The close affinity of technologically supported body tracking and corporate
insurance practices indicates the flexibility and capacity of the concepts of performance
optimization and risk-management to describe and order human endeavor in a financialized
society. When actions are accompanied by probable future costs, savings, earnings, or other
quantitative measures, smart recommendations which claim to optimize these actions appear as
the sensible and responsible approach to life. The value of self-determination is perhaps
becoming overshadowed by the anxiety of competition, precarity, and the injunctions of
neoliberal self-management. What we might manage according to this general formula is limited
only by our willingness to accept the analogies, simulations, and intrusions of smart
technologies.
In embracing the dazzling functionality and efficiency of digital technology, we have
found that a secularized, informational, and relativistic world nicely suits its limitations. In a world lacking anything transcendental, unquantifiable, or incomparable, digitally representing human concerns appears merely as a task of measurement. Material things can be simulated,
quantitative things calculated, relative values understood through competition and comparison.
Even following the groundbreaking writings of Alan Turing and Claude Shannon, great
reimaginings of cognition, information, and intelligence generally stopped short of approaching
political and ethical problems; better for the machines to learn chess and mazes than statecraft
and morality. But in the age of machine learning, the boundary between logic problems and the
challenges of our world, between what is appropriately approached computationally and what is
not, is blurry and moving fast. We celebrate novel programs that answer complex questions with
ever-decreasing input data. We struggle to reject even obviously reductive computational
replacements for previous methods that are thought to be less objective, standardized, or
predictable. We might reach out to ideals, only to find that we had given up justifying technology
by the aims of justice, truth, and beauty long ago.5 The postmodern turn away from such
metanarratives was neatly contemporaneous with the increasing use of systems that cannot
comprehend them, leaving the humanities with less ground on which to critique technological
practices than ever before.
At the boundaries of the digital, there is always analogy created by the programmer, “the
necessary analog component to complement” the computer, enabling the digital representation of
surveilled objects and the simulation of real processes. 6 Just as different literal codes grammatize
intonation, pace, and other elements of speech to greater or lesser degrees, digital codes
representing real phenomena always select and analogize. But unlike literal information, which is fundamentally ambiguous and admits no exact science of interpretation, digital information is only functional when given unambiguous significance. The bit’s abstraction requires programs to
determine its meaning totally.7 Even when the surveilled object or simulated process of digital
computation does not abide by a known law, computation treats it strictly according to the
program used to handle the data. The total abstraction of digital computing collapses the
distinction between the significations of arbitrary or incompletely known models and those of
proven functions.
Analog output, in contrast, signifies based on fixed material relationships. We trust the
output of a film camera on the basis of our knowledge of light particle interactions with certain
chemicals; we trust a thermometer’s output according to our knowledge of the density of
mercury at different temperatures. Because the design of an analog computer is informed by
fixed material properties, its output signifies on the basis of the laws governing their relationship.
But the laws that govern digital information are authored, even when they are mathematically consistent with real processes.

5 See Gary Hall on Lyotard and the increasing justification of science and technological development in individual, instrumental terms, in Pirate Philosophy (Cambridge: MIT Press, 2016), 27-28.
6 Wilden, System and Structure, 157.
7 Aden Evens, Logic of the Digital (London: Bloomsbury Academic, 2017), 9.

Bits are used to represent both symbols meaningful on the basis of
convention and signals meaningful on the basis of fixed laws. This introduces an ideological
element to digital models: the ways in which data can signify depends not on material properties
but the way in which that data is read. While the significance of an analog computer’s output is
limited by the relationships one can physically construct, the digital can signify anything we
believe it is accurately modeling. The boundary of the digital is a hermeneutic one, where
surveilled objects are abstracted into symbolic representation, read and translated according to
preprogrammed and consequential analogies. The power and threat of digital abstraction lies in
part in making these analogies disappear: once the program is written or the sensor built, human
knowledge is no longer needed; hammer and anvil become a mold.
One way to illustrate the ideological element of simulation would be with the fascinating
example of the MONIAC, a fluidic analog macroeconomic simulator built by Bill Phillips in the
1940s. The MONIAC is a simulator of human choices, allowing experimentation on variables
like income, employment, and interest rates, and yielding predictions of economic behavior. The
design of the MONIAC was informed by classical and Keynesian economic theories and
principles. 8 By employing these principles in building the machine, its output signifies on their
basis, just as the output of an analog gunfire calculator signifies on the basis of the laws of
physics. In other words, the structure to which the output of the MONIAC refers is ideological
by merit of its design, even if its simulation is governed by laws. To dispute the predictions of
the MONIAC is not to argue with hydrodynamics, but with economic theory.
This encoding of principles characterizes any kind of signifying computation: as it
dispenses with the need for human deliberation, simulation begins with decisions of principles,
equivalences, and relations which create the structure allowing the system’s output to signify.
The MONIAC is a unique case because it equates the determinism of fluid movement to the
results of human decisions, starkly revealing a theoretical equivalence that is less obvious in
digital modeling. Provided a model, a digital computer is equally capable of simulating the
behavior of a machine part and the behavior of a group or individual.9

8 Tim Ng and Matthew Wright, “Introducing the MONIAC, an Early and Innovative Economic Model,” The Reserve Bank of New Zealand Bulletin 70, no. 4 (Dec. 2007).
9 On the historical relevance of physics models and ‘social physics’ to big data, see Trevor J. Barnes and Matthew W. Wilson, “Big Data, Social Physics, and Spatial Analysis: The Early Years,” Big Data and Society 1, no. 1 (2014): 1-14.

In digital simulation, all
functions are mathematical and logical, unambiguous, and law-governed, no matter what they
are thought to be simulating. Today, big data modelling can create convincing and functional
models of individual and group behavior: simulations of real processes of choice, deliberation,
planning, and reaction. So, should we be convinced by their analogies? Just because a simulation
does not distinguish between the uncertainty of an uninitiated chemistry experiment and the
uncertainty of an undecided human action, does that mean these are the same? And what should
we do if they are not?
For natural processes which we are ready to call law-abiding, where models can be
tested, it is difficult to see the cost of trading digital simulation for analog. The algorithm
however presents a unique case when it preempts human deliberation and decisions. The
algorithm not only operates on a simulation of real objects but also as a substitution for human
processes which would otherwise perform these operations. There is a first order on which the
algorithm signifies – what I have been describing above – namely, the set of laws of its
functioning. Just as any computation signifies on the basis of its own rules, an algorithm signifies
on the basis of the decisions of correspondence, equivalence, relation, and reference inherent in
its design. But as an analogy of human decision making, there is a second order by which the
algorithm signifies. The content of an algorithm’s output is significant on the basis of its design,
but the form of this output is taken as a replacement for human deliberation by the ideological
equivalence (or superiority) its uses assert. If the output of the algorithm is thought to be
analogous to a decision, or knowledge, it suggests that what humans are doing when we reach
conclusions is essentially a series of logical and arithmetical steps, referring to objects, criteria,
and relationships which are precisely defined in the mind. If there were referents informing
human action that cannot be precisely defined or experienced, or functions that cannot be
programmatically reproduced, the algorithm would be a questionable substitute.
To dispute not only the content of simulations but their forms we must articulate exactly
what concessions and accommodations are made to enable simulations, not only their outcomes
but the definitions they make of objects and aims. Criticism of programs as tools invites the
techno-utopian defense: ‘Thank you for revealing this flaw, it’s very regrettable. We will be sure
to improve the dataset and add more diversity or supervised ethical training to the next iteration
of this program; we will get it right next time.’ To dispute the form of any technology which
claims analogy to something incompletely known or understood, we can look to the analogy
itself. Should we define everything the way digital computers demand? Anything we would
know through them, predict through them, control through them, must follow rules. Many things
in our world already do, but new rules are being written every day, to turn the realities of our
lives into the scenes of the computers’ simulations. The molds that ease and accelerate the
functioning of our world should be true to what we are or want to be.
It may sound as though the issue I am raising with digital computing’s indiscriminate
law-application is one of overconfidence – that creating laws from correlations is too permissive.
In fact, my concern is the opposite. Treating the signification of language and the potential of
human choice as law-abiding is potentially a diminishment. Anything not easily quantifiable
raises challenges for our digital analogies: health becomes an analogy for character, stress
becomes an analogy for moral harm done by others or by a system. Health and stress in turn are
captured not holistically but in superficial, behaviorist, and metonymic forms. Ideals and ends
become variables to maximize; nothing can be referred to that is transcendent or outside the
system. It is not an essential weakness that human language is not digital code: in signifying
language we can speak of things we understand only obliquely and through reference and
relation. Are all human aims rightly pursued through the optimization of a metric, as progress
along a trajectory? Or are some only pursued through the realization of ideals, as progress
towards a telos?
Automatic processes that themselves derive norms for the sake of mechanical objectivity
raise this question in an especially vivid sense. Recommendations and prompts informed by
correlation prescribe us explicitly normative options and avenues when they offer performance
optimization, health improvement, and risk minimization. Bernard Stiegler calls the automatic
normativity shaping these offerings “‘a-normative’ in the sense that it is never debated, which
means that it is immanent” to metonymic traces, their denumerable nature, and the prior actions
of users.10 This is not to say that these norms somehow evade political and moral consequences.
There is always a ‘crowd’ that informs the theoretico-experimental assemblage used to make
new traces meaningful, or rather actionable: the platform where the subject has been traced, pre-existing databases of similar traces, statistical correlative models related to that of the data
double or profile generating the trace, and all other categorization tools at work in the process by which real-time interventions are made in a user’s experience.11

10 Bernard Stiegler, Automatic Society: The Future of Work (Malden: Polity Press, 2016), 110, emphasis in original.

So-called ‘unsupervised
learning’ models create an even more profound a-normativity; anything that has not already been
demonstrated by users and captured as a numerical variable cannot be sought.
What does this limitation imply? How does the issue of a-normativity differ from the is-ought problem? How does raising the old questions of autonomy, determinism, and behaviorism
come to bear on the ostensible subject of this article, the simulation of human intelligence and
prediction of our action? I recently learned that the word intelligence derives etymologically
from the Latin inter-, still familiar to us as meaning ‘between’, and legere, meaning ‘to choose’,
but also ‘to read.’12 To pursue these two threads for a moment brings two kinds of intelligence
into focus. Intelligence as choosing-between would aim at closure and decision; ambiguous or
incalculable matters are impediments creating uncertainty or distractions to be ignored. This is
the kind of intelligence which could be automated without any kind of loss or essential change.
Intelligence as reading-between, however, suggests process, suspension, deliberation, coping
with and acknowledging things necessarily beyond its grasp, about which there are competing
theories or accounts. A reading is situated, which means it is never final and can always be
challenged and improved. It is never finished because it aims at truth rather than functionality. In
concluding this work, I would like to dwell for a moment on the second meaning. My interest in
the differences between the two will be made clear through the insights of James E. Dobson, who
examines the digital humanities approach alongside Northrop Frye’s Anatomy of Criticism and
Sigurd Burckhardt’s theory of intrinsic interpretation.
In Anatomy of Criticism, Northrop Frye advocated what is essentially a scientific
approach to hermeneutics, assuming “an order of meanings that lies behind the enterprise known
as literature and exists as a coherent whole,” which is discovered and articulated by the critic
through wide and diverse reading, and then deductively applied to any work to find its meaning
according to laws and norms developed beforehand.13 Frye thus thought in a manner characteristic of
structuralism, which Dobson reveals to be closely imitated by the computational ‘readings’ of the digital humanities.14

11 Gary Genosko, “A-signifying Semiotics,” Public Journal of Semiotics 2, no. 1 (January 2008): 17.
12 “lection, n.” OED Online, March 2020, Oxford University Press, accessed March 26, 2019, https://www-oed-com.dartmouth.idm.oclc.org/view/Entry/106853?.
13 James E. Dobson, “Can An Algorithm Be Disturbed?: Machine Learning, Intrinsic Criticism, and the Digital Humanities,” College Literature 42, no. 4 (2015): 554.

Frye’s approach to interpretation employs a model comparable to machine learning processes, which use training datasets to develop rules for categorization that are then
applied to new text corpora. Both processes function by induction of laws from a limited set of
works, followed by deductive application of those laws to a new ‘test’ work, justified by the
assumption of shared structures underlying the texts.
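The induction-then-deduction pattern described above can be sketched in miniature. The ‘training’ corpus and genre labels below are invented for illustration, and the procedure is a bare caricature of real machine learning: laws are induced from a limited set of works as per-category word counts, then deductively applied to a new ‘test’ work assumed to share the same underlying structure.

```python
from collections import Counter

# Hypothetical training corpus: a limited set of works from which
# 'laws' of each category are induced.
TRAINING = [
    ("comedy", "wedding feast dance joy reunion"),
    ("comedy", "mistaken identity feast marriage joy"),
    ("tragedy", "death betrayal grief fall ruin"),
    ("tragedy", "grief death revenge fall betrayal"),
]

def induce_rules(corpus):
    """Induction: per-category word counts stand in for laws of the genre."""
    rules = {}
    for category, text in corpus:
        rules.setdefault(category, Counter()).update(text.split())
    return rules

def apply_rules(rules, text):
    """Deduction: the new work is read only through the induced counts;
    anything unseen in training contributes nothing to the reading."""
    words = text.split()
    scores = {cat: sum(counts[w] for w in words) for cat, counts in rules.items()}
    return max(scores, key=scores.get)

rules = induce_rules(TRAINING)
print(apply_rules(rules, "a feast after grief death and betrayal"))
```

Note what the deductive step cannot do: a word outside the training vocabulary is simply invisible, and a work that resists the induced categories is still forced into one of them.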
Some would say that even those readers and critics who do not consciously attempt to
construct a schema like Frye’s are still essentially doing the same thing by discerning meaning
based on more or less well-founded sets of rules on how to ‘read’ artifacts, not least of all a
knowledge of the language of the work. Perhaps literature is not a law-abiding phenomenon, they
might say, but we treat it that way because this pattern of generalization and application is the
only way we can understand new information. To imagine an alternative, I will follow Dobson as
he indicates a different direction for the digital humanities that is applicable to multiple sites
where processes of interpretation are being treated scientifically, and automated. Dobson draws
on a fascinating appendix to Burckhardt’s 1968 Shakespearean Meanings, wherein Burckhardt
introduces a distinction that will be useful in articulating the difference between reading-between
and choosing-between. Burckhardt identifies two approaches to the task of hermeneutics. We can
attempt to understand a work either intrinsically, examining objects as infallibly and
autonomously crafted artifacts, or extrinsically, examining objects as creations not only of
authorial intention but also of external laws which apply more broadly to creative work and
human thought.15 Intrinsic treatment yields an interpretation, which understands a work on the
basis of only its own properties, while extrinsic treatment yields an explanation by applying rules
drawn from elsewhere.
Procedures like Frye’s and those of digital readings create explanations: laws extrinsic to
the object in question can neutralize apparent inconsistencies or ‘disturbing’ elements. Those
elements are figured as idiosyncrasies that, far from drawing the critic’s attention and
challenging an initial reading, simply indicate that the object is a unique instantiation of a rule-abiding process. Burckhardt points out the ease with which literary scholars can explain difficult
passages if they are willing: through appeals to past versions, zeitgeist, or other influences from which only towering figures escape on account of their “inherent lawfulness and autonomy.”16

14 See also Northrop Frye, “Literary and Mechanical Models,” in Research in Humanities Computing 1, ed. Ian Lancashire (New York: Oxford University Press, 1991).
15 Sigurd Burckhardt, “Appendix: Notes on a Theory of Intrinsic Interpretation,” in Shakespearean Meanings (Princeton: Princeton UP, 1968): 285-313.
One may only explain works of Goethe on the basis of the author’s influences if one can
“demonstrate validly the law of Goethe’s development.”17 In other words, there is no exemption
for literary hypotheses from the rules of infallibility and economy faced by scientific hypotheses:
interpreters of the book of nature and the books of man cannot make aberrations disappear from
a system without appealing to a larger, infallible one that governs it. The “deep dreams of
structuralism” and hopes for data-driven interpretation of humans and our affairs hinge on these
questions of infallibility and autonomy in creative and moral acts, on whether there can be a
“science of interpretation” for all we do.18
Burckhardt’s intrinsic approach might have to end at the level of the individual, as with
his imagined Law of Goethe, but it seems like good form to insist on the rigor of interpretation
rather than the flexibility of explanation, especially when the laws of larger literary ‘systems,’ if
they exist at all, enjoy so little consensus. Frye’s schema goes further: it posits something akin to
a grand unified theory of literature. While Burckhardt insists upon infallibility in literary work
and perhaps even entire oeuvres, he is clear-eyed about the impossibility of systematizing
beyond this, as is suggested by Frye, or by automated digital ‘readings.’ Taking the notion of
perfect consistency, or infallibility, from natural science is for Burckhardt an impossible ideal,
whereas it is the structuralist’s enabling assumption.
For Dobson, the problem posed by machine learning’s explanatory conclusions is not
simply their crypto-structuralism, or that they may fail to determine the true meaning of a poem,
but that they ignore the essential undecidability of their task. He writes: “Algorithms, of all
kinds, are recipes for success. They are a description, an ordering of operations, which can be
iteratively executed to produce a ‘correct’ result.”19 This means appealing to all kinds of external
reference including the training sets used to teach the program what to look for, what
significance or sentiment to assign to different words, and what to ignore. Explanation via digital
reading must avoid the singular literary object’s challenge to be interpreted on its own terms
because intrinsic interpretation as defined above cannot be pursued as an automated process;
there can be no ‘training set’ for that which is singular.

16 Burckhardt, “Appendix,” 294, 296.
17 Burckhardt, “Appendix,” 296.
18 Dobson, “Can An Algorithm Be Disturbed?”, 546.
19 Dobson, “Can An Algorithm Be Disturbed?”, 560.

While both intrinsic and extrinsic readings have the potential to locate disturbances, only an intrinsic reading is forced to make
something of them. This is part of what I mean by characterizing machinic intelligence as
choosing-between; to it, outliers are necessarily secondary.20 Intrinsic interpretation attends
especially to the ‘stumbling-blocks,’ elements of a work that challenge categorization, taking
these as indications and opportunities to revise earlier readings which have been disturbed.21
In interpretation, the stumbling-block is the occasion for analysis, the thing that calls on
intelligence as reading-between, rather than the obstacle that hinders intelligence as choosing-between. Interpretation sets for itself an impossible task; by assuming its object is infallible,
interpretation refuses to discard stumbling-blocks as noise or explain them. It remains aware of
the unknowable nature of authorial meaning yet committed to the ideal of understanding. It
admits unavoidable ambiguity and incongruence in its representations or readings, aspiring to the
ideals of economy and consistency without assuming all literary works are somehow ruled by the
same laws. Unfortunately, as we know, the digital can tolerate no ambiguity or sliding in its
information, and the functioning of inductive model building and profile-based experience
automation relies on assumptions of homophily. Automation can only dispense with the need for
human interpretation by applying extrinsic explanation, using the program’s laws, the decisions
of a designed or learned model, to apply pre-existing rules inductively.
All of us are increasingly the ‘texts’ of various modeling and profiling tools, informed
not by careful attention to individual intentions and context but by correlations to the traces of
others, opaque models applied sometimes disastrously to new cases. Our surveilled actions are
extrinsically explained, read by the hidden analogies required to make decisions about and for us.
Dobson’s work focuses on big data computational techniques as practiced in the humanities, but
20. “The digital technology of power appears to be invincible, because the power of the algorithmic system seems to be literally and structurally im-perturbable – imperturbable by the improbable”. Stiegler, Automatic Society, 116. Emphasis in original.
21. Another way to consider the stakes of this ‘curve-fitting’ of singular cases or individuals is that it would seem to render Deleuze’s category of minority obsolete: “What defines the majority is the model you have to conform to […] the minority, on the other hand, has no model. It’s a process, a becoming.” From “Control and Becoming: Gilles Deleuze in Conversation with Antonio Negri,” trans. Martin Joughin, Futur Anterieur 1 (Spring 1990). The automatic explanation of individual behavior renders what might once have been countervailing or critically potent ‘minority becoming’ a particular configuration of values which can be comprehended or discarded as noise; those implacable elements that might have once disturbed the process become denumerable adherence and aberrance.
as we have seen, it opens the door to a wider examination of how radically diverse objects are
treated by automatic digital processes. When Dobson suggests that “failure, rather than
algorithmic success, might be the special providence of humanists,” he points towards the larger
point I am trying to make about the way in which we can think about intelligence.22 The
humanist committed to the rigor of intrinsic interpretation indeed fails in terms of choosing-between, by admitting to analogy. The reading is not perfect, nor is there a larger rule to explain
the difference; “I have progressed to this point, but I can go no further.”23
So where is the line between what we can explain, and what we must interpret? Can we
construct theories of human desire, justice, virtue, beauty, etc. by explaining their ambiguities
and contradictions through appeal to an overarching system founded on correlated behavioral
traces? For all its functionality, big data behavior prediction commits the overweening
structuralist’s error on a scale greater and more consequential than Frye’s dream of literary laws.
Furthermore, it does so with potentially weaker tools, lacking semantic grounding and being
unable to consider that which is not precisely designated in binary data. Whether or not our behavior is in all necessary ways quantifiable, rule-abiding, and explicable, its deployment as an uncertainty-reduction resource affecting the mind, as in data behaviorism and algorithmic governmentality,
treats what is ambiguous and perhaps singular as probabilistic and particular.24 This limitation
indicates a central question raised by the automatic digital simulation and administration of
human life, which is the same question as the one posed above regarding the validity of extrinsic
explanation of individual actions, and the one posed further above regarding the difference
between the uncertainty of an uninitiated experiment and that of an undecided action. What of
life is part of the orderly book of nature and, when it comes to human aims and lives, are we all
in the same book? The lack of an answer to this question is what opens the door to simulations’ reified redefinitions. It does not matter whether the simulation is true: if no one can mount a critique of the definitions it employs and the rules it sets forth, it can stand and solidify.
22. Dobson, “Can An Algorithm Be Disturbed?,” 560.
23. Burckhardt, “Appendix,” 294.
24. See Antoinette Rouvroy, “The end(s) of critique: Data behaviourism versus due process,” in Privacy, Due Process, and the Computational Turn, ed. Mireille Hildebrandt and Katja de Vries (Abingdon; New York: Routledge, 2013), 143-167, and “Algorithmic Governmentality: Radicalisation and Immune Strategy of Capitalism and Neoliberalism?,” La Deleuziana 3, Life and Number (2016): 30–36.
It is easy to see the relevance of Dobson’s argument to quantification of human
experience in audit surveillance, predictive healthcare, or self-quantification, as these are
instances where something we want to call singular and incomparable, namely the individual, is
rendered calculable in parts, and where “each methodological decision in pre-processing
involves some aspect of interpretation.”25 These too make the mistakes of structuralism by
assuming an orderly and universal design steering the surveilled processes. My concern is not
motivated by doubts about the orderly nature of the biological body, but rather by the increasing
tendency, noted by Nikolas Rose and Minna Ruckenstein, to look to the body as an explanatory
key or even a non-subjective replacement for the polysemic reports, unseen thoughts, and
unknown affects of the subject.26 The question raised by this trend is where to draw the line
between processes and objects demanding interpretation (precluding automation) and those
where explanation is legitimate, and how to draw it.
Not all the dangers of increasingly powerful computation return to this question of the
orderliness, infallibility, and autonomy of individuals. But neither is this only a question for the
individual scale. My above point about the distinction between trajectory and telos is inextricably
tied to this same question of human predictability. Our desire to pursue an increasing set of aims
scientifically, that is, informed by consistent theories about a consistent and infallible system,
intersects problematically with the flexibility of digital modeling and our willingness to accept
its analogies. The ideological basis of digital modeling means we might simply be convinced that the things we want are in fact amenable to this kind of processing, and that the way to achieve them is correspondingly systematic and quantitative. We might thus be led to understand ourselves, and any simulated elements of our worlds and lives, as though they were amenable to a machinic approach, whether they are or not and whether we have the system figured out or not. The craving for a scientific approach and the requisite scientific interpretation
of ourselves and our lives is not new, but the pressure has increased in the era of neoliberalism
and the ever-increasing responsibility, risk, and empowerment at the level of the individual.
If everything humans need is immanent, precisely representable numerically, and
pursuable as a set of logic and arithmetic problems, let automation take over as soon as we have
25. Dobson, “Can An Algorithm Be Disturbed?,” 552. The interpretation in question is of course in service of extrinsic explanations (to use Burckhardt’s vocabulary) or future observations.
26. On the avoidance of the subject in recommendation engines, see also Tyler Reigeluth, “Recommender Systems as Techniques of the Self?,” Le Foucaldien 3, no. 1 (2017): 1-25.
these theories nailed down. In that case, self-determination will likely be looked back upon as a misguided
and irresponsible fiction, a stubborn vestige of the liberal institutions of the Enlightenment. In
the meantime, we ought to be critical of the metonymy and analogy of digital representations for
things we once called nondenumerable and incalculable, undecided or undecidable. A humanist
approach admits that its product is a reading rather than erasing analogies through design,
building theory into models while claiming to do without it. Intrinsic interpretation will never
provide a blueprint for automation, preserving the rigor of infallibility and economy we demand
of theories of the rest of the natural world without assuming the same methods of statistical
discovery can apply. Intelligence as reading-between is situated and in suspension, always open
to further discussion, as opposed to the structural explanation of text, author, or user as
deterministically law-abiding. Humanistic intelligence does not prove its theories the way
machinic intelligence does, but we ought to entertain the possibility that this is a result of the
objects examined and the tests applied.
Table 1

                            Machinic                        Humanistic
                            (choosing-between)              (reading-between)
Approach to understanding   Extrinsic (structuralist,       Intrinsic
                            a-normative)
Relation to outliers        Ossified, generates             Responsive, disturbable,
                            complicity, fitting the curve   dialectical
Essence                     Decision                        Deliberation
Aim                         Automation of process           Continuation of project
Progress as                 Optimization                    Realization
Orientation                 Trajectory                      Telos
The urge to automate our lives is necessarily an urge to be ‘objectified’ in the scientific
book of nature rather than subjectified in the signifying regime of subjective knowledge, in
which we speak for ourselves. This perhaps results from the pressure experienced by subjects
who feel that they ought to be maximally empowered and given the greatest possible number of
choices, but also feel unqualified to make those choices, being too subjective, too situated, too
incompletely informed, too error-prone, too human. The amount of information we need to
operate ‘responsibly’ is its own kind of data deluge, and perhaps the real reason we are
threatened by an end of theory — not because theory is no longer needed but because our
empowerment calls for the black boxes of automated decision-making.27 We accept explanations
of ourselves from beyond ourselves; we crave machinic preemption because we have built a
world that demands it. Becoming explicable is apparently a welcome change as long as it is
accompanied by prompts and guidance to optimize the metrics representing our aims. Health,
earnings, social standing, and other things we are responsible for are being rendered comparable,
quantitative, optimizable, and competitive, and new tools are always being developed to extend
this regime further. Deliberation is not only fallible, idiosyncratic, and difficult to justify
scientifically, it is simply too inefficient to operate without leaving the vast majority of possibly
relevant information unconsidered: just as the search engine narrows our perceptions and decides
for us, smart control mechanisms offer to narrow our protentions even to a single option,
deciding for us.
There are more obvious reasons to assert that human deliberation cannot legitimately be
preempted. Firstly, as noted above, predictive technologies authorize constant risk-avoidance
modulation and intervention, ideologically essential to a robust new generation of control
mechanisms beyond those described by Deleuze in his “Postscript on Societies of Control.” No
longer merely maintaining a condition on the basis of access and denial, smart control
mechanisms assume a primary position. They generate recommendations and interact with us to
optimize or intensify a condition; the technology assigns actions which the user is believed
unable to plan as effectively. These blur the lines between prediction and prescription. A
simulation of a natural process can be flawed, predicting an outcome that does not occur, while
the algorithm, as a simulation of human deliberation, can prescribe its own outcome. By
recommending action to a user, the mechanism enters into the system it is modeling, supplying
probabilistic and extrinsic evidence about an unrealized event that would not otherwise operate
probabilistically. The paths created by recommendation and prediction engines threaten to
become self-fulfilling prophecies as statistical correlation models increase conformity and shape
trends which come to be seen by individuals as natural responses to new discoveries. Successful
products, ideas, or candidates which spread through recommendation systems seem to users
27. See Chris Anderson, “The End of Theory: The Data Deluge Makes the Scientific Method Obsolete,” Wired, June 23, 2008, https://www.wired.com/2008/06/pb-theory/. See also Bruno Latour.
popular due to their qualities, when in fact the ‘destiny’ of such phenomena can be highly
predictable to an outside observer from an early stage.28 When these systems generate norms
automatically, through correlation to other profiles, they increase the influence of individual
decisions on the options available to others and prioritize the predictability of a social system
over the freedom and choice of its members.
I would like to conclude by doing a bit of human prediction of my own. My argument
here is based on the idea that we have a choice in this matter: while no one can prevent the
creation of new sensors or programs, there is always an ideological element to the simulations
these produce. The analogies justifying many of the new mechanisms of control depend on
relatively recent reconceptions of intelligence and the relations between the human body and
mind. While the informatization of the body and the mechanization of the mind have informed
the design of many new technologies, this may not prove the truth of their analogies so much as
their compatibility with a prevailing cultural preference for the quantitative, the statistical, and
the economic rather than the qualitative, the situated, and the political. As new terrains are
captured and digitized, unseen analogies affix value, relevance, relation, and meaning to
elements of the world which then ossify. The traces of the user, when ‘read’ via correlative
explanations, become products of mechanical objectivity rather than contestable interpretations.
These are the impressions we are in danger of internalizing despite the fact that their conclusions
are often founded on theories not so much proven as functional, not so much scientific as
scientistic. Digitization is a fast-drying mold, flexible but uninterruptible, capacious but partially
blind, above all machinic. It is hard to imagine automatic norm optimization having any result
other than the acceleration of existing tendencies and the entrenchment of existing patterns.
Investment will always seek to increase predictability, reducing spontaneity and disruption. If the
sphere of human affairs and potential is illegitimately or prematurely subsumed into that of law-abiding processes, smart mechanisms of prediction and prescription offer industries and
governments a guarantee previously and rightly reserved for classroom science experiments.
Human intelligence is perhaps the only meaningful alternative we can hold up to the
computer to make sense of it, whether we understand ourselves as a model to which it aspires or
28. See David Chavalarias, “The unlikely encounter between von Foerster and Snowden: When second-order cybernetics sheds light on societal impacts of Big Data,” Big Data and Society 3, no. 1 (2016): 1-11. On the assumption of homophily in recommendation engines, see Wendy Chun, Updating to Remain the Same (Cambridge: MIT Press, 2016), 13-15.
its obsolete predecessor. The space rapidly being cleared for algorithmic treatment is thus
rightfully a place for humanist investigation, but we have only begun to assert where this ground
is and what we should do there. We can reject the analogies in the design of technics that speak
and decide for us. But to do this we need notions of what the human is beyond a site for
performance optimization, and what intelligence is beyond information processing capacity. It
should be clear I am not talking about the analogies of computers and brains. Rather, it is the
negotiated and value-dependent question of what in our lives is rightly approached
computationally that sets the stakes here.
The subject who accepts the analogies of scientistic, data-driven decision processes
accepts a position of irredeemable and accelerating marginalization, self-distrust, and precarity.
If we wish to avoid a future where our potential is constrained by relevance, our aims
constrained to optimization, and our selves reduced to particular versions of a grand unified user,
we must begin by acknowledging that these are the risks of the predictive technologies which
comprise our new digital ecosystem, our new control mechanisms, our new smart world. Perhaps
there is no transcendent reference, no way of proceeding towards our highest aims and virtues
with confidence founded in experience, and no way to comprehend and compare ourselves on
common ground. I am merely pointing out that these are open questions and may remain so for a
long time. The more immediate question is why we see a lifeboat in the computer’s formality,
rationality, and objectivity – how we became so certain of the grand analogy between its reality
and ours.
Works Cited
Burckhardt, Sigurd. “Appendix: Notes on a Theory of Intrinsic Interpretation” in Shakespearean
Meanings, 285-313. Princeton: Princeton UP, 1968.
Dobson, James E. “Can An Algorithm Be Disturbed?: Machine Learning, Intrinsic Criticism, and the
Digital Humanities.” College Literature 42, no. 4 (2015): 543–64.
https://doi.org/10.1353/lit.2015.0037.
Evens, Aden. Logic of the Digital. London: Bloomsbury Academic, 2017.
Genosko, Gary. “A-Signifying Semiotics.” The Public Journal of Semiotics 2, no. 1 (January 2008):
11–21.
Ng, Tim, and Matthew Wright. “Introducing the MONIAC, an Early and Innovative Economic
Model.” The Reserve Bank of New Zealand Bulletin 70, no. 4 (December 2007): 46-52,
https://www.rbnz.govt.nz/-/media/ReserveBank/Files/Publications/Bulletins/2007/2007dec704ngwright.pdf.
Stiegler, Bernard. Automatic Society. Cambridge, UK; Malden, MA: Polity Press, 2016.
Wilden, Anthony. System and Structure: Essays in Communication and Exchange. 2nd ed. London; New York: Tavistock, 1980.