In and out of Wonderland: a criti/chromatic stroll across postdigital culture
Stamatia Portanova
AI & SOCIETY: Journal of Knowledge, Culture and Communication
ISSN 0951-5666
DOI 10.1007/s00146-020-01103-x
AI & SOCIETY
https://doi.org/10.1007/s00146-020-01103-x
ORIGINAL ARTICLE
In and out of Wonderland: a criti/chromatic stroll across postdigital culture
Stamatia Portanova1
Received: 12 June 2020 / Accepted: 19 October 2020
© Springer-Verlag London Ltd., part of Springer Nature 2021
Abstract
The contemporary info-proliferation is taking the ideal of a solid technological rationalism to its extreme point: the depletion of all bodies into 'informational cuts', orderable bits and pieces of data fabric. The present contribution discusses this process of datafication while trying to avoid any polarization along the 'pro' or 'anti' dualism, and any consequent excess of enthusiasm or critique. For this purpose, the essay takes the form of a stroll across postdigital culture, alternately under the effects of a 'red and a blue pill', the two main points of view exemplified in 1999 by Morpheus in the famous sci-fi movie The Matrix. The first can be identified with digital critique (going down the deep rabbit hole, and seeing that computers today play a leading role in what Gilles Deleuze and Félix Guattari have called 'capitalist schizophrenia'); the second with digital potential (remaining in a world of numeric dreams, a world populated not only by humans but also by bots, autonomous computer programs that are becoming increasingly able not only to post, but also to understand content, interact with people and, most importantly, take aesth/ethical decisions). To these two points of view a yellow one will be added, recognizable as that of 'hacker culture', suggesting that, instead of a dialectical contraposition between two different perceptual and cognitive modalities, postdigital culture can be more easily discussed through a multiplication of possible perspectives.
Keywords Datafication · Post-digital culture · Capitalist schizophrenia · Algorithmic neurosis · Bots · Hacker culture
"A schizophrenic out for a walk is a better model than a neurotic lying on the analyst's couch."
1 Introduction
The contemporary info-proliferation is taking the ideal of a solid technological rationalism to its extreme point: the depletion of all bodies into 'informational cuts', orderable bits and pieces of data fabric. Mirko Tobias Schäfer and Karin van Es have efficaciously described this ubiquitous 'datafication' as the abstraction "of all things under the sun into a data format" (2017). This process is triggering significant modifications in perceptual and cognitive habits, as it comes to coincide with an epistemological tendency to reduce the knowable physicality of the real to analyzable quantitative indicators, and to resort to a particular kind of images as their 'visual proofs'. Visible examples of this epistemological attitude are the omnipresent bars and pies, the arc and network diagrams that, on a daily basis, allow both the experts and the untrained to take the social and economic pulse of national and corporate bodies. The same attitude can also be thought under the frame of a mutating social relationality: if, as argued by Karin Knorr-Cetina (1997), human beings have long been displaced by objects as the main referents of social relations (a displacement that she defines as 'objectualisation'), it is possible to think of data (and their visualisation tools) as the algorithmic objects we increasingly often relate to, in what could be defined as a new, overflowing 'digitactualisation'.

* Stamatia Portanova, sportanova@unior.it
1 Department of Human and Social Sciences, Università degli Studi di Napoli 'L'Orientale', Largo S. Giovanni Maggiore, 30, 80134 Naples, Italy
The diffused presence of quantitative data in our daily lives immediately suggests a compulsive appetite for numbers: it is the counter placed on top of the TV news screen, or the stream of percentages flowing under the presenter's figure, that transmits to us the impact of phenomena such as violence against women, terrorist attacks, migration. In the same way, it is only the erasure of all metrics from our Facebook account that manages to produce a real change in our social media experience; at least according to Benjamin Grosser, the author of an artistic project called Facebook Demetricator, a web browser extension that hides all numbers from the Facebook interface, and whose aim is thus to "disrupt the prescribed sociality these metrics produce, enabling a network society that isn't dependent on quantification."1 Are we all addicted to quantification then? Reading the project's description on the artist's site is a little bit like hearing good, old Morpheus: do we choose the blue pill and go back to our numerical dreams, watching for the counts of responses, comments and likes, rather than for the 'responses themselves', waiting for the number of friend requests to appear, rather than looking for 'meaningful connections'? Or do we decide to swallow the red pill, install Grosser's software and stay in Wonderland, following the deep rabbit hole of the 'real' (real connections, real responses, real relations)? Struck by the 'matrixial' tone of the Demetricator, we wonder whether it might not be a better idea to instead erase the binary difference (still very active as an anti-capitalist therapeutic strategy in cultural and media theory) between blue pill (a pathological trust in data, a compulsive implanting of algorithms everywhere) and red pill (a healthier attempt at going back to the real). In order for this erasure to happen, we should in fact first try both. The pills, in the end, are nothing less and nothing more than points of view on what we have defined as an increasing digitactualisation.
2 Red
2.1 Economic diagnosis
A world of paradoxes unveils itself in front of our eyes. As soon as we swallow the realist pill, we start to see that computers today play a leading role in what Gilles Deleuze and Félix Guattari have called 'capitalist schizophrenia'. The paradox appears more clearly when the proliferating and overflowing creativity that Deleuze and Guattari attribute to schizophrenia (and to capitalist societies), a creativity characterized by addition and accumulation, is thought of in relation to the rational simplifying capacity of binary algorithms (2000, p 6). From this point of view, the encounter between the sobriety of digital computing machines and the excesses of capitalism can also be defined as 'postdigital culture', understanding the latter as an obsessive repetition and accumulation of informational and algorithmic processes.
The red pill, in other words, is showing us the evolution of capitalist schizophrenia, which anchors itself to the solidity of algorithmic procedures.

1 https://bengrosser.com/projects/facebook-demetricator/
In 1972, Deleuze and Guattari's diagnosis of capitalist schizophrenia already possessed the ambiguity of a paradox. As it was clear to the philosophers, the most insidious adversary in the political struggle of that time was no longer a totalitarian regime (at least not in a pure monolithic form), but its adoption of a subtler strategic schizophrenia, at once sociocultural and psychophysical. The main symptom of the capitalist schizophrenic tendency was its 'deterritorialization', as the system started to avoid laws and limits, and to prefer "difference over uniformity, flows over unities, mobile arrangements over systems," because "what is productive is not sedentary but nomadic" (Foucault in Deleuze and Guattari 2000, p XV).
The very origin of capitalism as an economic system, as Deleuze and Guattari explain, is linked to the simultaneous liberation of different flows in a particular moment: "[f]lows of property that is sold, flows of money that circulates, flows of production and means of production making ready in the shadows, flows of workers becoming deterritorialized" (223). At the same time, the fluid that circulates on the deterritorialized field of industrial capitalism is nothing other than capital as re-investible money. When money begets money, capital becomes an independent flowing substance entering into relation with itself, rather than with its products.2 It is important to note that, for Deleuze and Guattari, the beginning of the capitalist regime cannot yet be defined by the predominance of commercial or financial capital, the latter being merely flows among flows; it is industrial capital that gives the regime its true schizophrenic nature, acting as a decoder and a recombinator of all the other flows (226).3
Bearing in mind Deleuze and Guattari's description of the economic genealogy of capitalism, we can see how the conjunction of different flows that was necessary for the appearance and development of an industrial economy is today gradually fading away from the market's own horizon, while the conditions that link capital to production, and the very meaning of production, have completely changed. To better grasp this shift, let us remember the distinction traced in the Anti-Oedipus between schizophrenia and neurosis as two expressive forms. The schizophrenic is nothing but a universal producer, an assembler of relations.4 Withdrawal from the world, detachment from the materiality of the real, exclusive focus on one's own inner life, and an incapacity to weave relations constitute, on the other hand, typically neurotic symptoms, or symptoms of repression. When looked at from this point of view, the financial market appears less and less schizophrenic, its purely numerical or computational flow incapable of establishing a non-exploitative relation with its own social and physical flows. The breath of living billions is captured by states of anxiety, depression, climate change, reminding us that there indeed is a body (or rather, more than one). But through a constant 'acting as if', the financial machine reaches its absolutization, unsticking itself from productive and consumptive processes, with a consequent decrease in the significance of the 'real' economy, and an increased autonomy of derivatives.

2 And, according to the two philosophers, differentiating itself into an original value (advanced or invested money) and surplus value (profit), a differential relation that is also mirrored by that between exchange value and the power of capital.
3 The difference is between investment that increases capital stock, or production capacity, and an asset simply changing hands, or a simple manipulation of money.
4 Now, the question is: "How is it possible that the schizo was conceived of as the autistic rag—separated from the real and cut off from life—that he is so often thought to be? Worse still: how can psychiatric practice have made him this sort of rag, how can it have reduced him to this state of a body without organs that has become a dead thing—this schizo who sought to remain at that unbearable point where the mind touches matter and lives its every intensity, consumes it?" Or, in other words, "how does psychoanalysis go about reducing a person, who this time is not a schizophrenic but a neurotic…?" (Deleuze and Guattari 2000, p 20).

2.2 Techno-logical diagnosis

Capitalism could in fact only see the light when the flow of capital was allowed to become abstract and more abstractly regulated (which does not in any case mean 'free') by a 'state' (the modern political State as guarantor of the industrial economic status). To understand the digital as a form of further economic abstraction, we should at this point remember that the digital is first of all a mode of thought based on a structuration of reality according to formal oppositions.5 In the seventeenth century, Gottfried Leibniz already thought that everything can be represented as binary mathematical information: by building one of the first computing machines in history, he could therefore deal with themes of complexity and chance through the composition of 0s and 1s. Transducing the Leibnizian vision into the present, James Bridle argues that the digital space of 0s and 1s "is where we do our thinking, this notional space in which we imagine possible visions of the future…".6

5 A code (or a way to think) can be defined as digital when it is constituted by digits, countable units that can be related or opposed to each other (not necessarily in a binary way): a sort of perceptual and cognitive totalitarianism composing the chaos of the real into an ordered formalism. This conception derives from Deleuze's distinction between analog and digital codes, and his definition of the digital from digitum (the counting finger). In the book dedicated to the painter Francis Bacon, Deleuze discusses abstract painting as a form of artistic expression where chaos is eliminated, or reduced to its minimum, together with any form of tactile trace or manual craft: it is an ascetic art, an art of the spirit that goes beyond figuration in order to reveal abstract forms. An art "without hands." These abstract forms, in their turn, belong to a purely optical space. Abstract painting, in this sense, elaborates a symbolic code on the basis of formal oppositions. It is this code that the philosopher defines as digital: the digits, here, are the units which group the terms of the opposition in a visible space. Wassily Kandinsky, with his simple codified oppositions of vertical-white-activity and horizontal-black-inertia, is mentioned as one of the main examples of this code: a conception of art based on binary choices (Deleuze 2005, pp 70–77).
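The claim that everything can be represented as binary mathematical information can be made concrete in a few lines. The following is a purely illustrative sketch (the function names are invented for this example, not taken from the essay or from any software it mentions): a short text is depleted into a string of 0s and 1s, and recovered intact from that string.

```python
# Illustrative sketch: a body of text reduced to 'informational cuts',
# a bare sequence of 0s and 1s, and reassembled from them.
def to_bits(text: str) -> str:
    """Encode a string as the concatenated 8-bit patterns of its bytes."""
    return "".join(format(byte, "08b") for byte in text.encode("utf-8"))

def from_bits(bits: str) -> str:
    """Decode a string of 0s and 1s back into text, 8 bits at a time."""
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
    return data.decode("utf-8")

bits = to_bits("Wonderland")
assert set(bits) <= {"0", "1"}          # nothing remains but 0s and 1s
assert from_bits(bits) == "Wonderland"  # and the 'body' is recoverable
```

The round trip is lossless, which is precisely what licenses the digital vision's obsession: once every datum survives the reduction intact, the reduction can be applied to anything.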
According to Luciana Parisi, digital information (data in the form of 0s and 1s) cannot be easily incorporated or used for functional human purposes, as it manifests a level of autonomy (2017). Nevertheless, despite the inevitable complexity and incompleteness of the computations, in the digital vision the obsession remains the same: everything is reducible to digital information, 0s and 1s. Being identifiable with Guattari's definition of 'coding', the idea behind computer processing is the direct translation of qualitative into quantitative values, an idea which the philosopher has defined as inherently "capitalistic due to the neutralization, the systematic dequalification, of the materials of expression from which [it] proceed[s]—which puts [it] into the orbit of the economic valorization of Capital, …" (1995, p 104).
Furthermore, since most of the economic transactions on the financial market are currently being performed by or through these binary processors, the technological parameters of the machine (from programming languages to Internet protocols) have ended up becoming the true decision-makers in relation to the quantity and velocity of the transactions that can be, and are, actually performed. The main difference from industrial capitalism is, therefore, that, today, the social axiomatic is no longer given (or at least not exclusively given) by a machine of control acting against the schizophrenic anarchy of capital (the State, the Bank, the Church), but by a technical machine of abstraction acting against the residual materiality of money (the processor). Technical machines are, therefore, no longer simply adjacent to bodies in the process of surplus value extraction (or abstraction): they are now also able to take the place of, or collaborate with, the repressive social code of capitalism in enslaving those same bodies. This is a technological parasite that provides contemporary capitalism with forms of repression no longer simply aimed at the elimination of physiological schizophrenia (as in the nineteenth and twentieth centuries). Rather, the 'madness' has now become part of the system, where it has evolved into the shape of a neurosis: data neurosis, algorithmic obsession, a compulsion to create sense (and value) by abstracting, discretizing and quantifying. The diagnosis, therefore, changes: from capitalist schizophrenia to neoliberal neurosis.

6 https://www.tumblr.com/search/james%20bridle. We can describe this process, by using the words of philosopher Alfred N. Whitehead, as "An abstraction arrived at by confining thought to purely formal relations which then masquerade as the final reality. … The concrete world has slipped through the meshes of the scientific net." (Whitehead 1968, p 18).
Under the red pill effect, it has thus become difficult to still agree with Franco Berardi on the disappearance of neurotic pathologies as vestiges of an old capitalism. The liberation of flows of desire and flows of goods, and the replacement of repression by 'simulation' mechanisms, are, for Berardi, what distinguishes the late 'semiocapitalist' phase of immaterial labour and the explosion of the infosphere (the contemporary info-cracy) (Berardi 2007). Yet, the missing point in this theory is the digitalized abstraction of the capitalist productive machine into a form of data processing, and the consequent new form of subjective capture as a kind of quantitative narcissism. Events, in other words, no longer have to carry semiotic signification, but only quantitative significance. This situation is not simply associated with a change of state in the capitalist market (from a flow of productive money to a flow of self-replicating numbers), but is also affecting the epistemological environment of postdigital culture, by triggering new forms of numerical neurosis.
2.3 Technical diagnosis
In Deleuze and Guattari's Anti-Oedipus, we find the definition of technical machines as 'indexes' corresponding to general forms of production: "thus there are manual machines and primitive societies, hydraulic machines and 'Asiatic' forms of society, industrial machines and capitalism" (Deleuze and Guattari 2000, p 32). Every political regime, according to them, is productive not only in a social and economic sense but also in a perceptual and cultural sense, as a way to see the world, a set of conditions through which particular notions of space and time are produced. As a technical dispositif that produces regimes of light (and, later on, of sound), cinema, for example, channels electromagnetic energy in a way that allows it to form images as serial assemblages of frames. Life as perceived and produced industrially, 'in series'.
What about the digital dispositif of the neoliberal, or financial, capitalist phase? Defined by Knorr-Cetina as a 'synthetic situation', the ubiquity of screens can perhaps be considered as one of the main technical assemblages characterizing contemporary postdigital culture. Starting from the financial exchange market, where,

In doing deals, all traders on the floors have technological equipment at their disposal; most conspicuously, up to five or more computer screens displaying the market and allowing trading to be conducted. … In this way, their bodies and the screen world melt together—an apparently total immersion in the actions in which they are participating (Knorr-Cetina 2009).

Beyond the realm of finance, screen ubiquity is a phenomenological affect of all digitalized perceptual and social relations, as much of our interaction with things is via the screen.
Screen ubiquity has not even spared the street and the physical activity of walking, as we continuously consult Google Maps for directions. The Maps software derives its data from the aerial and satellite observation of physical places. But once the data are acquired, they start to function autonomously from the material physicality of the Earth, and are projected using the formulae of the Mercator model, an unrealistic spherical model of the planet from which we end up obtaining our geographical information. The detachment of the software from the world and its bodies is often phenomenally actualized by what we perceive as cartographic mistakes, which we often oppose to the 'healthier' habit of relying on the 'here and now' of real experience. We can imagine a theorist such as Elizabeth Grosz reminding the compulsive Maps user that "Sensation [the here and now] includes not only the perceiving body but also the Umwelt in which the body moves through an ever changing horizon" (Grosz 2008, p 72). The sensation of the 'here and now', according to Grosz's vision, does not coincide with the consciousness of a human or technological subjective perception (which is dependent on the establishment of coordinates and abstract regularities), but is a sort of phenomenological equivalent of Morpheus' red pill.
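The 'unrealism' of the map can be stated exactly. Below is a minimal sketch, offered as an illustration rather than as part of the essay, of the spherical (Web) Mercator projection used by online maps; the sphere radius, and the choice of a sphere over the Earth's actual ellipsoidal shape, are the standard simplifying assumptions of that projection, not details drawn from the text.

```python
import math

# Web Mercator treats the Earth as a perfect sphere of this radius
# (in metres); this model, not any direct physical measurement, is
# where our on-screen directions come from.
R = 6378137.0

def mercator(lat_deg: float, lon_deg: float) -> tuple[float, float]:
    """Project geographic coordinates onto the flat Mercator plane."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    x = R * lon
    y = R * math.log(math.tan(math.pi / 4 + lat / 2))
    return x, y

# The projection's distortion: east-west distances are stretched by
# 1/cos(latitude), so at the latitude of Naples (about 40.85° N) the
# map is roughly 1.3 times wider than the territory, and at the poles
# y diverges to infinity, which is why they never appear on the map.
stretch_naples = 1 / math.cos(math.radians(40.85))
```

The cartographic 'mistakes' we perceive on the street are, from this point of view, simply the moments when the sphere's formulae and the walked terrain fail to coincide.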
2.4 Historical diagnosis
'Dispositifs', as Bernard Stiegler also points out, have a history (2005–2006). Long before the advent of cinema, pictorial representation constituted, for example, a potent machine that allowed people to produce particular visions of space: for archaic Romans, the symbolic representation of a genius loci, an animal or a human figure drawn or sculpted on a wall or a door, was a technical dispositif that allowed the singularity of that place to emerge and to distinguish itself from a neutral space whatever (Dumézil 1977). 'Seeing' the genius meant being able to weave a relation with the environment and its unique properties, before converting it into the object of a subjective perception. It is evident how this worldview, based on the singular-neutral differentiation and on the valorization of the former, stood in opposition to what has subsequently become the main conceptual presupposition of modern science (and of its technological dispositifs): that is, the belief in quantifiable, reproducible, foreseeable realities, and the preference for them over singularities.
What about the history of the various dispositifs attached to the digital worldview? We can go back in time to isolate some germs. We could certainly reach 1000 B.C. China, where the Yijing (or Book of Changes, a classic divination text based on a binary combinatorics of hexagrams) was written by the first kings of the Zhou dynasty (and only in the seventeenth century translated into a European language by Leibniz, who took it as the basis of his binary numerical system). Or we could get to ninth-century Baghdad, where the Persian mathematician, astronomer and astrologer Muhammad ibn Musa al-Khwarizmi published a new system for solving polynomial and quadratic equations: algebra, thus giving his name (Latinized into Algorismus by his translator Gerard of Cremona) to sequences of mathematical instructions. Furthermore, a look at the architectural and urban structures of traditional African settlements would show us that geometrical fractals were known and applied long before the advent of computer graphics. But certain particular tendencies of contemporary postdigital culture also seem to retrospectively point towards another, important predecessor.
As a guide in this temporal journey, we can again follow Deleuze: on 14 and 21 March 1978, the philosopher gave two seminars on Immanuel Kant at the University of Vincennes-St. Denis.7 For Deleuze, the modernity of Kant resided in his conception of space (as well as time) as a noumenon, a knowable and measurable extension composed of parts: what this means is that every spatial quantity is at the same time conceivable as a multiplicity and a unity (a unity of multiplicities), a multiplicity being a gathering of parts into a whole. The concept of extensive magnitude thus appeared as one of the transcendental tools to purify reason from the contaminations of subjectivity, to restore its objective universality. A pure reason which laid down the laws of human knowledge but also, according to theorists such as Achille Mbembe and Denise Ferreira da Silva, the metaphysical basis for the 'modern' exclusion of particular, 'other' subjectivities from the anthropocentric ideal of thought (Mbembe 2017; Ferreira da Silva 2007).

In the Kantian image of a universal human reason, space and time are the objects of a non-empirical intuition (rather than sensation). Insofar as it appears 'in' space and time, the real, or the phenomenon, the experienceable, acquires a measurable extension (for example, the space of a room floor). Since, as demonstrated by many after Kant (and Descartes), no element of our experience can be said to really possess this extensive calculable character in itself, we can only arrive at realities such as precisely located bits of material, or numerically definable instants, through a process of abstraction.8 Empty space and time thus become two abstract neutral extensions purified of the loci's divinity and only graspable by the modern human mind.

The Kantian notion of extension makes us catch a glimpse of a particular conception of space and time: digital geometry as a study of space as composed of discrete information sets, and of time as 'bit time', or 'flick'. It is at this point worth remembering how early sci-fi representations of the 'digital world' were very much in tune with this Kantian conception, when they made the space of computers 'appear' to human eyes for the first time. Cyberpunk writer William Gibson, for example, introduced the notion of cyberspace in 1982, in the short story "Burning Chrome", and then elaborated it in his 1984 novel Neuromancer: "Lines of light ranged in the nonspace of the mind…".9

Kant's space is not an object of experience but an independent a priori that applies to it. It is a condition of possible experience, of the whole of possible experience, or rather the very locus where the conception of a whole becomes possible, despite the fragmentation and difference of experiences, precisely thanks to such a pretense of a universal condition. Before being experienced, this transcendental condition lays down the foundation for a phenomenology of perceptual and motor processes (for example, for seeing and walking), identifying the movement of the human body-subject as an exploration and cognition of that space which, as often argued, has been de-materialized and dis-oriented by technological interventions.

After the initial, disorienting appearance of a digital space, various images and metaphors offering a lexicon of place (verbs such as 'enter', 'exit', 'explore', nouns such as 'space', 'highway', 'channel') were provided, to allow users and readers to visualize an otherwise imperceptible, unlivable electronic reality, and to redefine it in phenomenologically familiar terms: "Here-now. … You can conceive of two objects, whose concept is strictly the same, there are still two objects, for this very reason that the one is here and the other there. … There is a spatio-temporal order irreducible to the conceptual."10 The phenomenological conception of this spatial metaphor is, therefore, founded on the same dualism that animated Kant's transcendentalism: the physical and the digital, as two different and separate kinds of space, in need of a connection.

7 http://deleuzelectures.blogspot.com/2007/02/on-kant.html.
8 The definition of particular entities as calculable units, separate extensions, or spatially locatable points, is in fact what Whitehead defines as abstraction: for the philosopher, the distinction between what is abstract (facts of the mind) and what is concrete (physical reality) is of fundamental importance, and failure to maintain it amounts to a "fallacy of misplaced concreteness." As he argues, "among the primary elements of nature as apprehended in our immediate experience, there is no element whatever which possesses this character of simple location…. I hold that by a process of constructive abstraction we can arrive at abstractions which are the simply located bits of material, and at other abstractions which are the minds included in the scientific scheme. Accordingly, the real error is an example of what I have termed: The Fallacy of Misplaced Concreteness." (Whitehead 1967, p 58). From this point of view, the main question to be explored is to what extent digital technology can also be said to be a mind.
9 As evidenced in Lakoff and Johnson's theorizations on metaphors, the latter are often conceived as spatial physico-conceptual devices (such as the simple association 'up is good, down is bad' and the consequent equation of uprightness with consciousness) (1980).
10 See Deleuze's lectures on Kant.
In our time, the modern 'online/offline' dichotomy seems to have been amply demystified, together with all metaphorical dualism between physical and cyber space, demonstrating how the two spaces are enmeshed, and revealing our own profound sympathy for a class of new objects native to the twenty-first century. So how does seeing, or thinking, like a computer really feel? A tentative answer could be: in spite of everything, it still feels very Kantian, or very anthropomorphic. The software of ARGUS, a video surveillance machine attached to an unmanned aerial vehicle (a software called Persistics, after the concept of persistent ISR—intelligence, surveillance, and reconnaissance), perceives by drawing a colored box around humans, cars, and other objects of interest. With an imaging unit that totals 1.8 billion pixels, the drone captures pictures and video (12 fps) that are "detailed enough to pick out birds flying through the sky, or a lost toddler wandering around".11 In Kantian terminology, we can say that the visual representation constructed by ARGUS is the result of a double process of 'apprehension' (the positioning of reality as occupying a certain space and time) and 'reproduction' (the linking of the preceding parts to the following ones).12 More precisely, the Kantian faculties of apprehension and reproduction directly correspond to the two main features of ARGUS' perception: location, as a precise tracking on a spatiotemporal grid, that is, the transformation of everything into a datum; and the reproducibility of the datum ad infinitum through the causal logic of algorithmic processing, whereby every output depends on a previous input. But even more than apprehension and reproduction as the two perceptual modes delineating the Kantian imaginative synthesis realized by the digital dispositif, it is the unity of this synthesis according to an a priori category that makes of digital perception a transposition of Kant's transcendental humanism (Deleuze 1984, p 15). Seeing pixels, faces, things or, which is the same, seeing objects in coloured boxes: a cultural attitude with Kantian origins.
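The two operations just named, location on a spatiotemporal grid and output chained to prior input, can be caricatured in a few lines of code. The sketch below is purely illustrative and has no relation to the actual Persistics software; every name in it is invented for the example.

```python
# Toy illustration of the two 'Kantian' operations read into machine
# vision: apprehend() assigns each detection a cell on a spatio-temporal
# grid (everything becomes a datum); reproduce() links each frame's data
# to the previous frame's, so every output depends on a prior input.
from dataclasses import dataclass

@dataclass
class Datum:
    frame: int    # temporal coordinate
    cell: tuple   # spatial grid cell
    label: str    # 'object of interest'

def apprehend(frame: int, detections: dict, cell_size: float = 10.0):
    """Reduce raw (x, y) detections to grid-located data."""
    return [Datum(frame, (int(x // cell_size), int(y // cell_size)), label)
            for label, (x, y) in detections.items()]

def reproduce(previous: list, current: list):
    """Pair each current datum with the prior frame's datum of the same label."""
    prior = {d.label: d for d in previous}
    return [(prior.get(d.label), d) for d in current]

frame0 = apprehend(0, {"toddler": (42.0, 17.0)})
frame1 = apprehend(1, {"toddler": (44.5, 18.0)})
tracks = reproduce(frame0, frame1)
```

Nothing in the loop ever meets the toddler; it only ever meets the datum, which is the point the text is making.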
According to Benjamin Noys, recognizing the Kantian
origin of digital perception allows the drone to inhabit a metaphysical field, and to embody the dream of transcendence
that has always haunted the Western imagination (2015).
From this point of view, the abstract objectifying perception of drones comes to coincide with the metaphysical project of Kant. But, Noys suggests, endowing the drone with
11 https://www.extremetech.com/extreme/146909-darpa-shows-off-18-gigapixel-surveillance-drone-can-spot-a-terrorist-from-20000-feet.
12 For a deeper exploration of Kant's notions, see Deleuze (1984, pp 1–17).
ontological agency flatters the device as an object, allowing
it to elevate itself beyond the mere fusion of human flesh,
cybernetic weapon and imperial apparatus that it actually is.
And yet, the human agency that directs the drone’s perceptions, movements and actions is also a peculiar kind of subject, as “these humans are constituted in ways to make them
resist calls on their humanity and … are called to conform
to the drone. (…). The achievement of a final ‘subjectlessness’ of the human is not simply the effect of an automation,
but a labour by the human that operates in the process of
self-automation, or the creation of an ‘automatic self’.” If
it is now possible to identify software with the incarnation
of contemporary reason, this identification simultaneously
pushes the very limits of human subjectivity towards the
most rationally Kantian idea, through its automation and
perceptual abstraction.
3 Blue
3.1 Infinite potential
The blue alternative makes us perceive not the paradoxical but the potential side of digitalization. We will, at this point, draw on the cosmology of Alfred N. Whitehead as one of the sources from which to grasp the virtuality of a computer's algorithmic
pixellated vision: the basic stuff of which the whole universe
is made, Whitehead tells us, is not constituted by isolated
bits and pieces (such as atoms or pixels) but by relations, or
networks of bits and pieces (1985). According to Whitehead,
we often like to talk of ’this’ or ’that’ entity (a computer, a
pixel, an algorithm, a human eye, an image), but we should
remember that everything is ‘relationized’, and each entity
can only be comprehended as a cluster of relations. On one
hand, thus, we have the ontological primacy of relations over
parts; but on the other hand, relations appear as relations
between parts. How can this (onto)logical contradiction be
solved and, more importantly, how can the solution be relevant for a definition of the digital potential?
For Whitehead, it is the infinite potential divisibility
(rather than the actual finite division) of the world into parts
(for example into bits of data) that allows relations to take
place: thanks to this divisibility, the world can become an
extensive continuum, or “a common world with mathematical relations” (Whitehead, 1985, p 62). The infinite divisibility of matter into wholes and parts is like a prepixelation,
a sort of potential digitalization of the world. Only after
we have thought this potentiality as a tendency of matter
towards infinite division (or 'cutting') can we understand
how this potential becomes actualized in the digital cut as
a concrete experience of the technological assemblage. In
fact, the generation of finite actual patterns from a virtual
infinity is, for Whitehead, much more interesting than infinity in itself: experience is all that there is, and it is never
infinite (Whitehead cited in Jones 2010).13 This means that
a conceptual valuation of the digital as a radically empirical potential should take us from its infinite virtuality, to its
capacity to cure the transcendental sense of infinity by
generating experience.
As Lev Manovich already stated in 2012, we can “expect
that the number of photos uploaded to Facebook daily is
larger than all artifacts stored in all world’s museums.”14
How can an image be selected, or even just made out, as a
finite pattern in this infinity? Creative composition has today
come to coincide with curation. The coincidence becomes
mostly evident when artists use mainstream user-generated
content as the primary subject matter of their work (see
Richard Prince’s New Paintings, as a specific chapter in the
history of ’appropriation art’).15 Which means that artists,
increasingly often, create in the same way as social media
users do, by “gleaning interesting images from Big Data, as
algorithms and robot eyes spew out images by rates as high
as 30 frames per second in some cases, which makes images
akin to grains of sand on the aesthetic beach. But the New
Aestheticist strides upon that beach, picking out a sparkly
grain of sand or even the occasional diamond, ready-cut,
and places it in their bucket (Tumblr, Pinterest) to show to
other people on the beach. … Lots of data; lots of sand.”16
The cure cannot but be curation: on the Big Data beach,
according to Patrick Lichty, the image creator or composer
becomes a collector, an aggregator, a fetishizer of autonomous operations performed by someone (or something) else,
a “drone aestheticist.”17 A New Aestheticist.
13 "We cannot understand the flux which constitutes our human experience unless we realize that it is raised above the futility of infinitude by various successive types of mode of emphasis which generate the active energy of a finite assemblage." (Whitehead in Jones 2010).
14 Reading the NYTimes, we see that "Just as access to pens and paper hasn't produced thousands of Shakespeares or Nabokovs, this explosion of camera phones doesn't seem to have led to more Dorothea Langes or Henri Cartier-Bressons. But it has certainly led to many more images of what people ate at lunch" (Estri 2012).
15 http://www.richardprince.com/paintings/.
16 Lichty, Patrick. Article not available anymore.
17 "Curation in the age of social media must be made to include the posting of photos and videos to social media, with the gesture, constituting the greatest number with the least investment (the function of the Long Tail's power curve—# involved vs. degree of investiture). This lower stratum from the pin board to the Like is the beach to which I allude earlier, with New Aestheticists doing slightly more than Liking an image by taking the time to find it and put it on their Tumblr, hoping for a Like. But with the rise of art-based Internet Surfing Clubs like NastyNets and Double Happiness in the 2000's, the aggregation of images of interest have become a function of quantum-level curatorial practice at the base of the saddle of the Long Tail. But perhaps NA is a form of curation for the masses, a folk curatorial practice for cyborg times." (Lichty).
In 2011, London-based designer James Bridle launched "The New Aesthetic," a Tumblr blog. Alongside pics and videos of the latest games used for anti-terrorism and anti-crime training, and smart city visualizations, the first images to be posted were examples of a visual style that expressed the digital through physical materials (such as the printing of pixelated imagery on fabrics).
Bridle’s New Aesthetic Tumblr is, in other words, nothing
more than an incessant flow of images whose only ’curatorial’ logic is that of ‘being digital’: a machinic criterion
that replaces sensation (the aesthetic quality of an image)
with function (its technical digitalization), in the name of a
mutual sympathy between ‘us humans’ and ‘them technologies’. It is now the machine that, with its own categories
and parameters, looks at the world and gives it a sense. Our
initial question about how the virtuality of the digital can
generate experience, has thus flown into a human/machine
decisional alternative.
3.2 Decisional potential
If the role of the human was already put into question by
the first calculating machines (such as Leibniz’s digital
computer), the latter were in fact still seen as mere labor-saving devices replicating complex but tedious computations, while the very production and functioning of these
machines required more specialized, intellectual human
labor (Betancourt 2013). With a significant shift, digital
computers have acquired today the capacity to automate
most cognitive tasks, even aesth/ethic ones. New Aestheticists have turned to various theories to respond positively to this shift. Chief among them is Bruno Latour's Actor-Network Theory (ANT), popular with New Aestheticists particularly for its will to go beyond an anthropocentric vision, and beyond the human individual as the only possible intentional agent of a decision. The decision, from an ANT's point of
view, would actually be neither human nor technological. As
argued by Latour, “Instead of opposing the individual level
to the mass, or the agency to the structure, [ANT] simply
follow[s] how a given element becomes strategic through
the number of connections it commands and how does it
lose its importance when losing its connections” (1996).
Translated into the New Aesthetic’s idiom, this principle
has been transformed into an algorithmic criterion of choice,
where the decision about what to publish is based on the
proliferation of numerical relations (for example, number
of attracted clicks). It is the algorithmic subject that, finally,
takes the decision.
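Reduced to code, this click-based criterion is strikingly small; the sketch below (with invented image data) shows how the curatorial 'decision' collapses into a count of numerical relations:

```python
# Sketch of the New Aesthetic's algorithmic criterion of choice:
# publication is decided by the proliferation of numerical relations
# (here, attracted clicks). Image data are invented for illustration.

def select_image(candidates):
    """Return the candidate commanding the most connections (clicks)."""
    return max(candidates, key=lambda image: image["clicks"])

candidates = [
    {"id": "sunset.jpg", "clicks": 312},
    {"id": "glitch.png", "clicks": 1048},
    {"id": "drone_still.jpg", "clicks": 77},
]

chosen = select_image(candidates)  # the algorithmic subject 'decides'
```

In Latour's terms, the element that commands the most connections becomes strategic; the code merely counts them.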
Algorithmic calculation can also be defined as the most
concrete actualization of that rationally thinking (and deciding) subject that has been haunting Western culture for centuries, at least since the Enlightenment. This subject is not
defined by Brian Massumi in its human but, rather, in its
’machinic’ character, working as a ’live-wire technology’, an
abstract machine for cutting decisions into a continuous flow,
and for making something determinate emerge out of the
potentials mutually included in the oscillations of thought
and sensation (2015). The potentials are always more than
a subject can rationally arrive at thinking, and constitute a
complex virtual field of metastable states to which the subject is (and remains) affectively linked before, throughout,
and after the decision. From this point of view, it becomes
evident how reason owes its positive, or ’decisive’, nature,
precisely to what eludes it: a constitutive immanent field
open to all kinds of oscillatory processes. The nonconscious,
field-induced nature of this cybernetic decisional machine
is certainly not to be interpreted as a mere passivity on the
part of the subject but, more interestingly, as its autonomization or, in other words, a form of nonhuman thinking and
’decisioning’ in autonomy from subjective consciousness.
But what happens when an algorithm, as foreseen by New
Aesthetes, decides? On one hand, the oscillatory character
of the human decisional machine as described by Massumi
seems to be in contradiction with the main capacity of the
technology, which is to cut a decision in the quickest possible time, therefore (apparently), leaving complexity out
of the process. But on the other hand, according to Parisi,
examples such as RankBrain (an algorithm that supports the
Google ranking software in dealing with long and complex
search inputs) show that algorithms are becoming able to
activate a level of inference or complex reasoning, to fabricate long series of hypothetical conjectures, and to learn
from uncertain or incomplete information (Parisi 2017, p.
10). We can also think of Creatism, another deep-learning
Google algorithm that is capable of creating professional
photographic works from StreetView images.18 If, ‘humanly
speaking’, the selection of what is considered to be a good
image usually happens according to aesthetic criteria, Creatism breaks down these criteria into a series of numerable
features to be looked for, each feature to be learned in its
turn from a vast dataset of photographic examples. Transforming aesthetics into a mathematical operation that can be
efficiently optimized (for example, through the ’Dramatic
mask’ tool, an operation that improves dramatic lighting
in a photo), the algorithm is thus compelled to continually
re-learn to choose and adjust, each time from an increased
quantity of data or database images.
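Google's own account of Creatism is far more elaborate, but the reduction of aesthetics to 'a series of numerable features' can be sketched as a weighted score that a system can then optimize. Feature names and weights below are purely illustrative, not Creatism's actual model:

```python
# Illustrative only: aesthetic judgement recast as an optimizable number.
# Feature names and weights are hypothetical, not Creatism's actual model.

FEATURE_WEIGHTS = {
    "saturation": 0.25,
    "dramatic_lighting": 0.45,   # cf. the 'Dramatic mask' operation
    "composition_balance": 0.30,
}

def aesthetic_score(features):
    """Collapse taste into a weighted sum of numerable features."""
    return sum(FEATURE_WEIGHTS[name] * value for name, value in features.items())

# A hypothetical StreetView crop, each feature measured on a 0-1 scale.
crop = {"saturation": 0.6, "dramatic_lighting": 0.9, "composition_balance": 0.5}
score = aesthetic_score(crop)   # a quantity the system can learn to maximize
```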
And yet, despite all this complex deep-learning, and in
spite of the algorithmic capacity to deal with the errors, randomness and imprecision implied by huge amounts of data,
a difference seems to persist: thinking of Massumi’s argument, we see that what algorithms lack is the simultaneous
coexistence of all the potentials that constitute a ‘living’
field of decisional complexity, and that the software can only
process sequentially, or in parallel, due to its discrete nature.
For Parisi, on the other hand, this persisting difference is
not to be read as a lack on the algorithmic side, but as the
sign of a peculiar fact: indeterminacy lies at the core of the
techno-logic not as a kind of irrational (or not yet rational)
immediacy with an outside, but as a superposition of levels
of rationality and mediated thinking, and as the establishment of “new chains of reasoning that draw from the minute
variations of data content.” (2017, p. 8)
At this point, reconnecting the discussion to the problem
of curatorial decision and image selection, Deleuze’s words
come to mind: a selection, or a perceptual cut, in the infinity of the visual field, can only emerge along the lines and
contours of an idea. It is the idea (intended by Deleuze as
a differential of thought) that operates the decision, instead
of a preconstituted human or digital subject: an image is
chosen neither by a human nor by a computer’s intention,
but by an idea affirming itself autonomously.19 Only an idea
can curate the visual infinity of the postdigital neurosis.
This definition of ’idea’ might sound here controversially
transcendental. But it should be remembered that, for both
Whitehead and Deleuze, the notion of ’idea’ does not derive
from the necessity for an extra-experiential concept of the
understanding, and does not imply the imposition of any
pre-existing Category: to the contrary, the idea is like a form,
a model derived from reality itself. For example, according
to Whitehead, red should be defined as that eternal object
(or idea) which allows a series of physical relations between
atoms and molecules to be identified as ONE coloured spot
(a red chair) by a perceiver. Thanks to the idea of red, an
image can stand out as ’this’ and not ’that’. It is the form of
the objectification of an actual entity for another, the form
that a relation between ’two’ can take to generate a ’one’, a
relation between many but also a detachment, a separation
from another many.
The digital certainly has an ‘ideal’ origin, in the sense
that, as Gregory Chaitin reminds us, the computer emerged
first of all as an idea, an eternal object, or a digital differential of thought (2005, p XIII). In this sense, by following the
logic of an algorithm, every digital machine, be it a CCTV
camera or a flying drone, actualizes a simplified version of
the real through calculation: an eye that is simply guided by
a mathematical idea. Since it is the idea that takes the decision, determining its own path and literally choosing where
to go, it is important to note that the digital idea has today
decided to follow the direction of contemporary capitalism.
18 https://google.github.io/creatism/.
19 On the notion of the idea as a 'differential of thought', see Deleuze (2001, pp 168–221).
In the postdigital visual continuum, we are thus perceiving a fading out of the idea as ’humanly shaped’ (or, to put
it in Whiteheadian terms, of the eternal object as qualitatively realized by an image), and a becoming indiscernible of
images themselves. But the sense of ephemerality associated
to this disappearance and indiscernibility is in fact generated
by the stubborn persistence of a human attitude towards the
idea (the red pill effect). Under a ’blue’ point of view, on
the other hand, it appears that what we are being confronted
with is nothing less than a paradigmatic shift, a change of
criteria that is not only affecting aesthetics and art, but various cultural environments of digitalization. As an example,
we can think of phenomena of online mass curation, such as
the ephemerality-based website 4chan.org, where one of the
statements most frequently given by its participants or ’curators’ does not offer any comment on the beauty of the posted
image, on its political importance, or on any other subjective
judgement of interest or taste, but simply the expression “I’ll
just leave this here."20 'This' might very well be 'that', were it not for the counting of the clicks already and potentially attracted by the image: a numerical judgement that associates the capacities of a human curator to those of a bot. This
association appears, in the online ecology of bots, like a sort
of extreme actualization of Kant’s extensive propensity; the
propensity that appears not only, as stated by Kant in his
Critique of Judgement, when we think that shape and rhythm
are more important than colour and tone, but also when we
imagine the ‘Königsberg Clock’ counting the steps of his
daily walking routine.
3.3 Learning potential
Like most social network environments, 4Chan is populated
not only by humans but also by bots, autonomous computer
programs that are becoming increasingly able not only to
post, but also to understand content and interact with people.21 If, on one hand, the automatization of tasks entrusted
to bots can still appear as the neurotization of a Kantian
anthropocentred phenomenology, and if the programming
of the algorithm is nothing more than a manifestation of
human thought obsessed with itself and incapable of overcoming its limits, the development of a capacity to distinguish
the human from the technological transforms the algorithm
itself into an example of what is generically defined as
‘machine learning’. At a basic level of machine learning,
the algorithm’s technique for distinguishing living from nonliving online presences consists in isolating a few properties
for each account (such as name length, account age, retweet
20 https://www.urbandictionary.com/define.php?term=I%27m%20just%20going%20to%20leave%20this%20here.
21 https://www.4chan.org.
frequency) and in crossing these properties among the different account instances to be analysed. In fact, the resulting model often has to be applied not to a few but to millions of accounts, which means that it will need to work with ever new incoming information. But it is here, among
the complex operations performed on massive quantities of
data, that algorithmic cognition reveals another aspect of its
Kantian lineage.
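The basic technique just described, isolating a few properties per account and crossing them over many instances, can be sketched as a toy decision rule. All property values and thresholds are hypothetical: a real bot-detection model would learn such boundaries from data rather than hand-setting them.

```python
# Toy sketch of feature-based bot detection. The isolated properties follow
# the text (name length, account age, retweet frequency); the thresholds are
# hand-set and hypothetical, where a real model would learn them from data.

def extract_features(account):
    """Isolate a few properties for each account."""
    return (
        len(account["name"]),
        account["age_days"],
        account["retweets_per_day"],
    )

def looks_like_bot(account):
    """Cross the properties into a crude living/non-living decision."""
    name_length, age_days, retweet_freq = extract_features(account)
    return name_length > 15 and age_days < 30 and retweet_freq > 100

# Applied instance by instance, as it would be to millions of accounts.
accounts = [
    {"name": "alice", "age_days": 2000, "retweets_per_day": 3},
    {"name": "xk83jd02m4nqpl99z", "age_days": 4, "retweets_per_day": 450},
]
flags = [looks_like_bot(a) for a in accounts]
```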
As argued by Deleuze in his Vincennes seminars, Kant’s
real modernity did not simply reside in the conception of a
metric-able abstract space filled with objects. Whereas in
classical Western philosophy the ’magical’ relation between
the spiritual and the earthly had always been a true paradox
or a strange, inexplicable equilibrium between transcendental and physical planes, what made Kant’s thought modern was its capacity to conceptualize not only the dualism
between the abstract metric extension of a measurable space
and the objects dwelling in it, but also the intensity of experienced phenomena.22
It is in an ’intensive’ sense that machine learning can
also be defined as Kantian. Cara Emotion Recognition is
a system for facial detection and semiotic analysis that can
be programmed to interpret a person’s mood.23 Like most
machine emotional intelligence systems, Cara identifies a
face as object-in-space, to then proceed to identify the emotions that intensively fill up that same face-as-space. In this
space, the algorithm learns to ’sense’ the emotional intensities that show up in the micro expressions of the face, by
metrically analysing the relations between different points
(mouth angles, nose tip, eyes), and by then linking these
data to the categories listed on a database. We are, in other
words, still presented with an extensive capture of intensive
phenomena.
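This 'extensive capture', metric relations between facial points linked to database categories, can be sketched as nearest-prototype matching over landmark measurements. All measurements and category prototypes below are invented for illustration; they are not Cara's actual parameters.

```python
import math

# Hypothetical sketch of extensive capture: metric relations between facial
# points are matched against emotion categories listed on a database.
# Prototype values and measurements are invented, not Cara's parameters.

EMOTION_DATABASE = {
    "joy":      {"mouth_width": 0.90, "eye_openness": 0.60},
    "surprise": {"mouth_width": 0.50, "eye_openness": 0.95},
    "neutral":  {"mouth_width": 0.50, "eye_openness": 0.50},
}

def classify_emotion(face):
    """Link metric measurements of a face-as-space to the closest category."""
    def distance(prototype):
        return math.hypot(
            face["mouth_width"] - prototype["mouth_width"],
            face["eye_openness"] - prototype["eye_openness"],
        )
    return min(EMOTION_DATABASE, key=lambda name: distance(EMOTION_DATABASE[name]))

measured_face = {"mouth_width": 0.85, "eye_openness": 0.55}
mood = classify_emotion(measured_face)
```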
But an algorithmic processing of intensity can extend
itself, from the extensive individuation and recognition
of emotions on a human face, to the actual registering and
22 We have seen how, for Deleuze, "The Kantian theory according to which intensive quantities fill up, to varying degrees, matter
that has no empty spaces, is profoundly schizoid.” (Deleuze 1984,
p 19) “One world of mere appearance, and the other world compact
of ultimate substantial facts.” This simple relation of material reality ‘filling’ abstract space characterized Kant’s topology as unilateral
and unidimensional, a straight topology where space becomes a pure
universal parameter subject but indifferent to the empirical variables
of the reality that fills it. While Leibniz’s topology considered spatial determination as conceptual (two forms are superposable because
it is not their position in time and space that counts, but a concept),
Kant’s synthetic judgment is like a rule for all possible content, a rule
of construction (example: a straight line is the shortest path between
two points). Differently from extensive measures such as lengths and
volumes, intensity is not given by a sum of successive parts but is
apprehended in one instant, because the rules of addition and subtraction are not valid for it.
23 https://www.kairos.com/introducing-cara.
measurement of intensive quantities filling up the material
environment. On January 8th, 2018, FLIR Systems, Inc. announced the availability of a high-resolution Thermal Vision Automotive Development Kit featuring a Boson thermal camera.24 In other words, an infrared camera helping self-driving cars to 'see', and move, during the night or in other difficult situations like sun glare or fog. The infrared light emitted by objects in view is scanned by infrared detectors that obtain temperature information (a thermogram),
and the information is transduced into electric impulses.
In the final phase, objects and bodies appear on the display with different colour tones according to the intensity of their infrared emission. According to Kant, "One cannot
give an exhaustive mathematical description of an object
by appeal to its size and shape alone”, because two bodies can have the same size and shape but differ in density
or, in other words, they can contain different quantities of
matter (Jankowiak 2013). In this sense, the metaphysical
concept persisting behind unmanned crafts that are able to
spot and identify objects in a visual field, is complemented
by the thermal camera’s capacity to ’sense’ the objects’
intensive quantities of heat and matter. In the experience
of the infrared camera, Kant’s ’degrees of reality’ become
thermograms.25
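The transduction chain just described, from infrared intensity to temperature value to colour tone on the display, can be sketched as a simple intensity-to-tone mapping. The temperature break-points are hypothetical, not FLIR's:

```python
# Sketch of the display phase: intensive quantities (temperatures) are
# rendered as colour tones. Break-points are hypothetical, not FLIR's.

def thermogram_tone(temp_celsius):
    """Map a measured intensity of infrared emission to a display tone."""
    if temp_celsius < 0:
        return "dark blue"
    elif temp_celsius < 15:
        return "blue"
    elif temp_celsius < 30:
        return "yellow"
    else:
        return "red"   # e.g. a pedestrian's body heat against a cold street

night_scene = {"road": 5, "parked car": 12, "pedestrian": 36}
display = {obj: thermogram_tone(t) for obj, t in night_scene.items()}
```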
As a sensing material capable of registering variations of
temperature, the thermal dispositif has in fact a much older
technical genealogy than that of digital algorithms. From
Galileo’s experiments with air and water to the testing of
semiconductor metals in the twentieth century, temperature
degrees have always already been measured in analog ways.
But what makes digital thermal cameras and sensors really
interesting is their use in autonomous vehicle computing
platforms as vision enhancers. Here, the learning process is
not undergone by the camera or the sensor in itself, but by an
Intel Movidius Myriad 2 Vision Processing Unit, an embedded computer processor that analyses the data provided by
the sensor and makes predictions/takes decisions about the
car’s movement in the street. The deep-learning algorithms
composing the cognitive neural network of an autonomous
car are thus able, in their own way, to develop an intuition
of continuity, or of continuous quantities, in a mathematical
rather than a physical way. So how can Kant’s philosophy of
intensive, continuous magnitudes retrospectively shed light
on this kind of deep-learning algorithms?
24 http://investors.flir.com/news-releases/news-release-details/flir-releases-high-resolution-thermal-camera-development-kit-0.
25 Since sensory matter (the matter perceived by the senses as colour, acoustic tonality, warmth, etc.) constitutes the most basic human representation of objects, the 'continuous' or intensive magnitudes (or quantities) of this matter are to be considered as phenomenologically basic.
3.4 Curving potential
For Kant, perception is not enough to describe the phenomenology of experience, but we need to consider the ways in
which the empirical reality of the senses comes to form conscious knowledge, through what he defines as a ’synthesis
of the productive imagination’. In the cognition of qualities,
between the pure 0 and the 1 of reality, an infinite sequence
of degrees is in fact possible, each degree constituting the
intensive magnitude of a different qualitative sensation.
Qualities, in other words, are ’known’ by the subject through
the calculus of an infinitely small differential, an ’intensive
unit’ that can only be abstracted by imagining the possibility of a continuous change from one degree to another.
Every colour has a degree which, however small, is never
the smallest; the same goes with warmth. Rates of change of
qualities are intensive, and we can algebraically translate this philosophical concept by saying that the derivative of a function represents its rate of change over an infinitesimal step of the considered parameters.
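Algebraically, the degree that is small but never the smallest is the step h of a finite-difference approximation, whose limit is the derivative; a minimal numerical sketch:

```python
# The derivative as the limit of change over an ever-smaller 'degree' h.

def derivative(f, x, h=1e-6):
    """Finite-difference approximation of f'(x): the step h is small
    but, however small, never the smallest."""
    return (f(x + h) - f(x)) / h

slope = derivative(lambda t: t ** 2, 3.0)   # exact derivative of t**2 at 3 is 6
```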
The calculus of differentials is applied in mathematics to
find the extrema of functions, whereas integral calculus is
used for probabilistic modeling. Coming back to our autonomous car system and its traffic scene problem, we understand
that such system requires the deployment of processes much
more complex than the pixel-level extensive identification of
objects in the street. The street becomes a data space, a set of
points potentially linked in many possible ways: people, animals, other cars, buildings and objects as potential movers or
obstacles, each with a trajectory that represents a problem to
be solved. The system has in fact to take into account a multiplicity of internal and external data that are visual, thermal, acoustic, and then to use them to predict eventualities,
evaluate possibilities, and obtain the best possible action
model (calculus of an integral). This calculus takes the form
of a ’gradient descent’ in which a deep-learning algorithm
learns from progressively incoming data: modeling the best
guide action coincides, from the algorithm’s point of view,
with finding the line (or the derivative curve) that combines
all the data-points (extrema of movement) coming from the
outside, with the smallest margin of error, according to a
non-crash parameter. In this process, the derivative of the
sum of all possible mistakes is used to update the system’s
parameters and make the error decrease at each new information input, so that after every update the system learns to predict with a lower error margin, until, after many iterations, it converges on an optimal solution.
The complexity of this kind of reasoning, the delineation
of error gradients from infinitesimal changes, the search for
an integral line or curve across myriads of data-points, constitute the intensity of deep machine learning. This intensity
can be philosophically connoted in Kantian terms, through the principle of intensive magnitudes as
a line curved by infinitesimal degrees. But a deeper look
into the genealogy of calculus as an algorithm’s learning
criterion, shows us that what Deleuze defined as the most
modern dimension of Kantian thought, that is its capacity
to delve into the infinitesimal shifts of the physical world,
emerges from the discussions about infinitesimal and differential calculus that were already quite frequent and popular
by the time the Critiques appeared. In his book The Fold,
Deleuze traces a deviating philosophical curve from the
Kantian lineage of modernity, and extensively comments
on the modernity of Leibniz as a Baroque philosopher and,
more importantly for us, as the mathematician who, already
in the seventeenth century, introduced calculus to the Europeans (1992). Calculus remained an underdeveloped concept in Kant's work, perhaps precisely because of its modern way of representing the most obscure, microscopic and infinitesimal parts of consciousness. The modernity of calculus is in fact starting to
re-appear now, while the human programming mania gives
space and autonomy to the calculating intensity of algorithms, and to a less fixed and immutable idea of software.
But we could extend the genealogical curved line of this
modernity further back in the past, making it deviate from
Western media philosophy and history, and land sometime
after 996 in Cairo, where Abu Ali al-Hasan al-Haytham (also
known as Alhazen), a Persian mathematician associated with
the University of al-Azhar, was already able to integrate (calculate) fourth-degree polynomials.
3.5 Yellow
Entering a neurologist's office, an autistic child sees hundreds of neural maps and brain models aligned on the doctor's desktop screen, like the inert flowers drawn and listed
on a botanist’s catalogue.
In their essay “Coming Alive in a World of Texture. For
Neurodiversity,” Massumi and Erin Manning suggest a distinction between two modes of perception: on one hand, an
immersive ’environmental awareness’ that feels the surroundings as a complexity of relational variations, or as a
qualitative diagram, a landscape of sensations (a modality
that the two thinkers show to be proper to so-called 'classical autists') (2014). In this perceptual modality, a whole
field has to be taken into account, in which a flower is not yet
seen as a detached object but is felt as a conduit of relation,
a sign of the field’s tendency to express itself. A field full
of ’budding’ objects emerging at the boundary, or as relays,
between experiencing and imagining. On the other hand, we
find a subtractive kind of perception which is, by contrast,
proper to neurotypicals, a perception that extracts discrete
objects and positional griddings from the complexity of the
outside (following the static model of a map). “Neurotypical
experience,” in this sense, “tends immediately to align to the
beyond of the associated milieu of relation, to an ulterior
phase in which the flower stands alone, a solitary object
separate from its shadow stories” (Manning and Massumi
2014). An experience perfused with ’for-ness’, as it follows
a tendency towards the use-value of expression, rather than
following expressibility on its own account. A separation of
the object from the background, a cutting out of the flower-image in the experiential flow. Nevertheless, as Deleuze and
Guattari might put it, neurotypicality and autism are not to
be interpreted as the representatives of ’normality’ and
’deviation’, but as the two actualizations of a unique imageproduction process which can take variable extremes: from
a botanical portrait to a garden composition. As Manning
and Massumi also underline, neurotypicality and neurodiversity should not be seen as constituting “a dichotomy, but
as a polarity governing a continuum of variable intermixing
between the modes.” Relationality and codification are, in
this sense, two coexisting tendencies in the perceptual process, differently highlighted or peripheralized by different
experiences: perception is, in fact, a matter of degrees and
mixtures of the two, rather than of healthy or pathological
identifications.
When looked at from this double point of view, contemporary postdigital culture seems to reveal, in its multiple
digitactualisations, a sort of data/rhythmic neurosis, or in
other words an increasing tendency to objectivize perceptual physicality, and to quantify reality into data models
to be algorithmically processed by binary computers. The
same kind of vision encountered by the autistic child on
the neurologist’s computer desktop, was perhaps encountered by artist Salvatore Iaconesi after being diagnosed with
brain cancer. After a period of tentative treatments, Iaconesi
decided to leave the hospital, and to place all medical data
about his tumor (from scans to reports) on a website called
La cura (“The Cure”), together with a request for possible
’cures’ to be proposed by the online community.26 Addressing peers, activists and designers all around the world, he
asked them to engage with the data and to produce a cure.
Thousands of different propositions arrived, in forms as
diverse as medical advice and poetry. In The Cure performance, in other words, Iaconesi did nothing less than ’open-source’ his cancer, transforming it into a trigger of collective action. The visualizations of Iaconesi’s cancer and brain
inspired many different productions, all focused around the
concept of resisting, or reinventing, the patient’s condition.
In this sense, Iaconesi and Delfanti argue, “the opening up of cancer’s ‘source code’” becomes “a biopolitical rite of healing, aimed at redefining concepts like ‘disease’ and ‘cure’” (2016).
After trying both the red pill (unmasking a suffocating
bodily reality concealed by a thick veil of digital numbers)
26 http://la-cura.it.
and the blue one (understanding that datafication requires
a shift in our aesth/ethical paradigms), Iaconesi’s project
thus calls on us to try a yellow pill. As in the chromatic
scale, yellow does not represent a dialectical synthesis between two opposed alternatives, but hints at an infinite multiplication of points of view. This third pill can actually be identified with the perspective of hacker culture: a
pill for techno-socio-cultural decentralization, and for the
expropriation of data controlled by institutions. Iaconesi’s
The Cure, in fact, initiates an online ritual that “follows a
protocol or script which derives from hacker practices and
rhetoric” (Delfanti and Iaconesi 2016). The first step in the
performance was a technical one: to convert the medical
records from professional to common standards, making the
data easily readable and shareable by everyone. The cure, in
other words, was possible thanks to a transduction between
technological standards. While, for medical institutions, a
brain scan represents an instrumental, objective abstraction
from the individual body, its transduction and its ’opening
out’ to the online open-source community extends its significance and its range, also repositioning the brain-object
into a wider perceptual and social environment. The brain
can thus be brought back to its collabor(rel)ational social
field, where it appears as an actualization of the field’s (in
this case, the social field’s) tendency to express itself. The
yellow pill, in other words, allows us to focus, as if under
the effect of a ’neuro-diverse’ perception, on the imaginative
“biodigital rituals of sharing” that are intricately woven into
our scientific and technological experiences. Open access
to data and tools, the cracking of institutional and technical
protocols, and the free circulation of digital artefacts thus compose a techno-active substance that is certainly worth exploring further.
References
Berardi F (2007) The pathologies of hyper-expression. Discomfort
and repression. https://transversal.at/transversal/1007/berardi-aka-bifo/en. Accessed 12 Mar 2020
Betancourt M (2013) Automated labour: the ‘new aesthetic’ and immaterial physicality. https://journals.uvic.ca/index.php/ctheory/article/view/14934/5827. Accessed 12 Mar 2020
Chaitin G (2005) Meta math! The quest for omega. Vintage Books,
New York
Deleuze G (1984) Kant’s critical philosophy. The doctrine of the faculties. The Athlone Press, London
Deleuze G (1992) The fold. Leibniz and the baroque. Minnesota U.P., Minneapolis
Deleuze G (2001) Difference and repetition. The Athlone Press,
London
Deleuze G (2005) Francis Bacon. The logic of sensation. Continuum, London
Deleuze G, Guattari F (2000) Anti-Oedipus. Capitalism and schizophrenia. The Athlone Press, London
Delfanti A, Iaconesi S (2016) Open source cancer. Brain scans and the
rituality of biodigital data sharing. In: Barney D, Coleman G, Ross
C, Sterne J, Tembeck T (eds) The participatory condition in the
digital age. Minnesota U.P., Minneapolis, pp 123–144
Dumézil G (1977) La religione romana arcaica. Rizzoli, Bologna
Estrin J (2012) In an age of likes, commonplace images prevail. The
New York Times, New York
Ferreira Da Silva D (2007) Toward a global idea of race. Minnesota
U.P., Minneapolis
Gibson W (1984) Neuromancer. ACE, New York
Grosz E (2008) Chaos, territory, art. Deleuze and the framing of the
earth. Columbia U.P., New York
Guattari F (1995) Chaosmosis. An ethico-aesthetic paradigm. Indiana
U.P., Bloomington
Jankowiak T (2013) Kant’s argument for the principle of intensive
magnitudes. Kantian Rev 18(3):387–412
Jones J (2010) Provocative expression: transitions in and from metaphysics in Whitehead’s later work. In: Faber R, Henning BG, Combs C (eds) Beyond metaphysics? Explorations in Alfred North Whitehead’s later thought. Brill Rodopi, Leiden, pp 259–280
Knorr-Cetina K (1997) Sociality with objects: social relations in
postsocial knowledge societies. Theory, Cult Soc 14(4):1–30
Knorr-Cetina K (2009) The synthetic situation: interactionism for a
global world. Symb Interact 32(1):61–87
Lakoff G, Johnson M (1980) Metaphors we live by. Chicago U.P.,
Chicago
Latour B (1996) On actor-network theory. A few clarifications plus more than a few complications. Soz Welt 47:369–381
Manning E, Massumi B (2014) Thought in the act. Passages in the
ecology of experience. Minnesota U.P., Minneapolis
Manovich L (2012) Trending: the promises and challenges of big social
data. In: Gold MK (ed) Debates in the digital humanities. Minnesota U.P., Minneapolis, pp 460–475
Massumi B (2015) The power at the end of the economy. Duke U.P., Durham
Mbembe A (2017) Critique of black reason. Duke U.P., Durham
Noys B (2015) Drone metaphysics. Cult Mach 16:3–4
Parisi L (2017) Reprogramming Decisionism. e-flux 85
Schafer MT, van Es K (eds) (2017) The datafied society: studying culture through data. Amsterdam U.P., Amsterdam
Stiegler B (2005–06) Individuation et grammatisation: quand la technique fait sens. Doc Sci de l’Inf 42:354–360
Whitehead AN (1967) Science and the modern world. The Free Press,
New York
Whitehead AN (1968) Modes of thought. Simon and Schuster, New
York
Whitehead AN (1985) Process and reality. An essay in cosmology. The
Free Press, New York