
Annamaria Carusi

Politically authorized reports on personalized and precision medicine stress an urgent need for finer-grained disease categories and faster taxonomic revision, through integration of genomic and phenotypic data. Developing a data-driven taxonomy is, however, not as simple as it sounds. It is often assumed that an integrated data infrastructure is relatively easy to implement in countries that already have highly centralized and digitalized health care systems. Our analysis of initiatives associated with the Danish National Genome Center, recently launched to bring Denmark to the forefront of personalized medicine, tells a different story. Through a "meta-taxonomy" of taxonomic revisions, we discuss what a genomics-based disease taxonomy entails, epistemically as well as organizationally. Whereas policy reports promote a vision of seamless data integration and standardization, we highlight how the envisioned strategy imposes significant changes on the organization of health care systems. Our analysis shows how persistent tensions in medicine between variation and standardization, and between change and continuity, remain obstacles for the production as well as the evaluation of genomics-based taxonomies of difference. We identify inherent conflicts between the ideal of dynamic revision and existing regulatory functions of disease categories in, for example, the organization and management of health care systems. Moreover, we raise concerns about shifts in the regulatory regime of evidence standards, where clinical care increasingly becomes a vehicle for biomedical research.
In clinical practice, decision-making is not performed by individual knowers but by an assemblage of people and instruments in which no one member has full access to every piece of evidence. This is due to decision-making teams consisting of members with different kinds of expertise, as well as to organisational and time constraints. This raises important questions for the epistemology of medicine, which is inherently social in this kind of setting, and implies epistemic dependence on others. Trust in these contexts is a highly complex social practice, involving different forms of relationships between trust and reasons for trust: based on reasons, and not based on reasons; based on reasons that are easily accessible to reflection, and on others that are not. In this paper, we focus on what it means to have reasons to trust colleagues in an established clinical team, collectively supporting or carrying out everyday clinical decision-making. We show two important points about these reasons: first, they are not sought or given in advance of a situation of epistemic dependence, but are established within these situations; second, they are implicit in the sense of being contained or nested within other actions that are not directly about trusting another person. The processes of establishing these reasons are directly about accomplishing a task, and indirectly about trusting someone else's expertise or competence. These processes establish a space of reasons within which what it means to have reasons for trust, or not, gains meaning and traction in these teamwork settings. Based on a qualitative study of decision-making in image-assisted diagnosis and treatment of a complex disease called pulmonary hypertension (PH), we show how an intersubjective framework, or 'space of reasons', is established through team members forging together a common way of identifying and dealing with evidence.
In dealing with images as a central diagnostic tool, this also involves a common way of looking at the images, a common mode or style of perception. These frameworks are developed through many iterations of adjusting and calibrating interpretations in relation to those of others, establishing what counts as evidence, and ranking different kinds of evidence. Implicit trust is at work throughout this process. Trusting the expertise of others in clinical decision-making teams occurs while the members of the team are busy on other tasks, most importantly, building up a framework in which common modes of seeing, and common ways of identifying and assessing evidence, emerge. It is only in this way that trusting or mistrusting becomes meaningful in these contexts, and that a framework for epistemic dependence is established.
This article has been accepted for publication in a special issue on Medical Knowledge in Synthese.
In recent years a growing number of scholars in science studies and related fields have been developing new ontologies to displace entrenched dualisms. These efforts often go together with a renewed interest in the roles played by symbolisms and tools in knowledge and being. This article brings Maurice Merleau-Ponty into these conversations, positioning him as a precursor of today's innovative recastings of technoscience. While Merleau-Ponty is often invoked in relation to his early work on the body and embodiment, this article focuses on his later work, where the investigation of perception is integrated with an ontological exploration. The resulting approach revolves around the highly original idea of the body as a standard of measurement. We further develop this idea by coining the term 'the measuring body', which, to a greater extent than Merleau-Ponty did, accentuates the relative autonomy of symbolisms and tools and their capacity to decentre the perceiving body.
In recent years there has been growing attention to the epistemology of clinical decision‐making, but most studies have taken the individual physician as the central object of analysis. In this paper we argue that knowing in current medical practice has an inherently social character and that imaging plays a mediating role in these practices. We have analyzed clinical decision‐making within a medical expert team involved in diagnosis and treatment of patients with pulmonary hypertension (PH), a rare disease requiring multidisciplinary team involvement in diagnosis and management. Within our field study, we conducted observations, interviews, video tasks, and a panel discussion. Decision‐making in the PH clinic involves combining evidence from heterogeneous sources into a cohesive framing of a patient, in which interpretations of the different sources can be made consistent with each other. Because pieces of evidence are generated by people with different expertise, and interpretation and adjustments take place in interaction between different experts, we argue that this process is socially distributed. Multidisciplinary team meetings are an important place where information is shared, discussed, interpreted, and adjusted, allowing for a collective way of seeing and a shared language to be developed. We demonstrate this with an example of image processing in the PH service, an instance in which knowledge is distributed over multiple people who play a crucial role in generating an evaluation of right heart function. Finally, we argue that images fulfill a mediating role in distributed knowing in 3 ways: first, as enablers or tools in acquiring information; second, as communication facilitators; and third, as pervasively framing the epistemic domain. With this study of clinical decision‐making in diagnosis and treatment of PH, we have shown that clinical decision‐making is highly social and mediated by technologies. The epistemology of clinical decision‐making needs to take social and technological mediation into account.
This chapter attempts to overcome the deep divisions between attitudes to realism in science and the humanities. It re-affirms analogies between art and science, not because they similarly fail actually to grasp reality, but because they similarly express and enact their world-directedness. Starting with a detailed description of the methodological and ontological intertwinement of models, simulations and experiments that characterises systems biomedicine, I then discuss the different ways of broaching the ‘realism’ of these models: through an analogy between art and science that trades on the deficit conception of fictions and metaphor, or through the critique of realism first applied to literary texts by Barthes (with a focus on textuality), and to science by Woolgar and Latour (with a focus on social practices). I highlight trends in science studies that have developed distinctively humanities forms of post-realism, with a common focus on the textual or artefactual mediation of science, such as Rheinberger’s notion of graphting, and the non-dualist, new materialist frameworks of entanglement and intertwinement. Lastly, I consider the implications of conceiving models, scientists and patients as grafted, entangled and intertwined things in systems biomedicine, not least for the responsibility that this demands from critical medical humanities scholars as active participants in science.
In silico medicine is still forging a road for itself in the current biomedical landscape. Discursively and rhetorically, it uses a three-way positioning: first, deploying discourses of personalised medicine; second, extending the 3Rs from animal to clinical research; and third, aligning its methods with experimental methods. The discursive and rhetorical positioning in promotions and statements of the programme gives us insight into the sociability of the scientific labour of advancing the programme. Its progress depends on complex social, institutional and technological conditions which are not external to its epistemology, but intricately interwoven with it. This article sets out to show that this is the case through an analysis of the process of computational modelling that is at the core of its epistemology. I show that the very notion of 'model' needs to be rethought for in silico medicine (as indeed for most forms of computational modelling), and propose a replacement in the form of the 'Model-Simulation-Experiment-System', or MSE-system, which is simultaneously an epistemological, social and technological system. I argue that the MSE-system is radically mediated by social relations, technologies and symbolic systems. We now need to understand how such mediations operate effectively in the construction of robust MSE-systems.
Systems biology is currently making a bid to show that it is able to make an important contribution to personalised or precision medicine. In order to do so, systems biologists need to find a way of tackling the pervasive variability of biological systems that is manifested in the medical domain as inter-subject variability. This need is simultaneously social and epistemic: social as systems biologists attempt to engage with the interests and concerns of clinicians and others in applied medical research; epistemic as they attempt to develop new strategies to cope with variability in the validation of the computational models typical of systems biology. This paper describes one attempt to develop such a strategy: a trial with a population of models approach in the context of cardiac electrophysiology. I discuss the development of this approach against the background of ongoing tensions between mathematically and experimentally inclined modellers on one hand, and attempts to forge new collaborations with medical scientists on the other. Apart from the scientific interest of the population of models approach for tackling variability, the trial also offers a good illustration of the epistemology of experiment-facing modelling. I claim that it shows the extent to which experiment-facing modelling and validation require the establishment of criteria for comparing models and experiments that enable them to be linked together. These 'grounds of comparability' are the broad framework in which validation experiments are interpreted and evaluated by all the disciplines in the collaboration, or being persuaded to participate in it. I claim that following the process of construction of the grounds of comparability allows us to see the establishment of epistemic norms for judging validation results, through a process of 'normative intra-action' (Rouse 2007) that shapes the social and epistemic evolution of systems approaches to biomedicine.
This chapter develops an account of neuroimaging that conceives brain imaging methods as at once formative and revealing of neurophenomena. Starting with a critical discussion of two metaphors that are often evoked in the context of neuroimaging, the ‘window’ and the ‘view from nowhere’, Carusi and Hoel propose an approach that goes beyond contrasts between transparency and opacity, or between complete and partial perspectives. Drawing on Merleau-Ponty’s discussion of painting in ‘Eye and Mind’, where he sets forth an integrated account of vision, images, objects, and space, the authors argue that the handling and understanding of space in neuroimaging involves the establishment of a ‘system of equivalences’ in the terms of Merleau-Ponty. Accentuating the generative dimension of images and visualizations, the notion of seeing according to a system of equivalences offers a conceptual and analytic tool that opens a new line of inquiry into scientific vision.
This chapter examines the dismantling of the qualitative-quantitative distinction in the practice and instrumentation of computational biology. Computational biologists work with an impressive array of visual artifacts, including microscopy images, MRI and fMRI, organ atlases, virtual organs, optical imaging of "real" organs, and simulations. Despite the clear disciplinary associations between instrumentation and methods in the field, researchers blend observational, mathematical, and computational practices in ways that demand a rethinking of the quantitative-qualitative distinction. Drawing on the later work of Maurice Merleau-Ponty, which conceives the ontology of vision and the ontology of nature as co-emergent, the authors develop the idea of observers and observed being in a "circuit" – originally derived from the biological writings of Jakob von Uexküll. The encounter between Merleau-Ponty’s notion of circuitry and recent ontological concerns in STS expands the toolbox for analyzing hybrid scientific practices.
This paper is published in the Tecnoscienza Special Issue: From Bench to Bedside and Back: Laboratories and Biomedical Research, edited by F. Neresini and A. Viteritti. It forms part of the three-way conversation between Regula Valérie Burri, Annamaria Carusi and Aikaterini A. Aspradaki: 'Visualising Bodies Within and Beyond Laboratories and Clinics'.

Abstract: As a response to the spread of biomedical imaging, this conversation explores crucial aspects related to the production, interpretation and use of body images within and beyond laboratories and clinics. Regula Valérie Burri's contribution raises questions about the implications of medical imaging technologies and practices for both medical treatments and patients' identities. Annamaria Carusi explores the intertwined epistemic and ontological roles of visualizations in the field of personalized medicine within two contexts of mediation: that of basic research and biomedical application; and that of biomedical research and health care systems. Finally, Aikaterini A. Aspradaki discusses the use of body images from a bioethics perspective, focusing on the autonomy of persons and the ethical, economic, legal and social issues raised by the visualizations of bodies.

Keywords: visualisation; bodies; biomedical imaging; personalised medicine; bioethics
The paper considers the question 'what is the model?' in a specific example of the use of computational modelling and simulation in systems biology: multi-scale models of cardiac electrophysiology. A detailed account of the construction of the computational models and simulations in these contexts shows that the modelling and simulating process is itself better understood as a hybrid and dynamic system of interacting models (in equation form), simulations and experiments, or what we have called the MSE system. That is, the MSE system is a system both as model source and with respect to the biological systems that it targets.
We argue that the process of constructing the MSE system as a model system is a process of constructing the grounds for comparability between the MSE system and the target domain. The 'systems' nature of the MSE system is foregrounded by validation experiments, which demand consideration of the whole system in order to be interpreted. We propose that validation is a process rather than a result, and that it consists in seeking maximal coherence and consistency within the MSE system, and across it and validation experimental outputs. In addition, these models invert the relationship between theory and model that holds on traditional views of models in science, according to which models are derived from theory: instead, they seek to derive theory from models.
In this chapter we aim to draw attention to the unrealized potential of the oeuvre of Merleau-Ponty to give a novel account of technological mediation. The later thinking of Merleau-Ponty is characterized by the way that the investigation of the perceiving body converges on an ontological exploration that acknowledges the ontological import and transformative capacities of a broad array of mediating apparatuses (the bodily apparatus, art works, language and other symbolic systems, tools, algorithms). In the following, we hope to demonstrate the relevance of Merleau-Ponty’s indirect ontology to some of the key concerns of present-day postphenomenology, and the extent to which an engagement with Merleau-Ponty’s expansive and dynamic notion of “flesh” may serve significantly to deepen our understanding of our interaction with technologies, including computation. Like today’s postphenomenologists, the later Merleau-Ponty is concerned to show both that the body is technologized and that technologies are embodied – hence, the continued relevance of phenomenological frameworks.
As data-intensive and computational science become increasingly established as the dominant mode of conducting scientific research, visualisations of data and of the outcomes of science become increasingly prominent in mediating knowledge in the scientific arena. This position piece advocates that more attention should be paid to the epistemological role of visualisations, not merely as cognitive aids to understanding, but as playing a crucial role in the formation of evidence for scientific claims. The new generation of computational and informational visualisations and imaging techniques challenges the philosophy of science to re-think its position on three key distinctions: the qualitative/quantitative distinction, the subjective/objective distinction, and the causal/non-causal distinction.
Several studies have focused on the social sharing of visual practices as constitutive of evidence within a domain, while relatively less attention has been paid to points where the social sharing of practices breaks down, or is resisted. This article argues that a study of both types of cases is necessary in order to gain a better perspective on the social sharing of practices, and on what other factors this sharing depends upon. The article presents the case of currently emerging inter-disciplinary visual practices in the domain of computational biology, where the sharing of visual practices would be beneficial to the collaborations necessary for the research. Computational biology includes sub-domains where visual practices are coming to be shared across disciplines, and those where this is not occurring, and where the practices of others are resisted. A significant difference between these sub-domains lies between visualizations that render the output of simulations and images taken during observations using the techniques of microscopy. A crossing over, compromise or sharing of practices relating to these different sub-domains is difficult and often resisted. This resistance needs to be contextualised in a far richer account of the relations between the visual artifacts, the scientists who use them within disciplinary domains, the theoretical and instrumentational outlook of the disciplines in question, and that towards which the science is directed, its domain of study. Social practices alone are not sufficient to account for the shaping of evidence. The philosophy of Merleau-Ponty is introduced as providing an alternative framework for thinking of the complex inter-relations between all of these factors.
This philosophy enables us to think of the inter-constitutive relations between these different factors, which ultimately define an epistemological and ontological space in which the object of study itself has an active constitutive role, and in which the scientist as person and perceiving body within a knowledge domain is also constituted.
Computational models in physiology often integrate functional and structural information from a large range of spatiotemporal scales, from the ionic to the whole-organ level. Their sophistication raises both expectations and skepticism concerning how computational methods can improve our understanding of living organisms and also how they can reduce, replace, and refine animal experiments. A fundamental requirement to fulfill these expectations and achieve the full potential of computational physiology is a clear understanding of what models represent and how they can be validated. The present study aims at informing strategies for validation by elucidating the complex interrelations among experiments, models, and simulations in cardiac electrophysiology. We describe the processes, data, and knowledge involved in the construction of whole ventricular multiscale models of cardiac electrophysiology. Our analysis reveals that models, simulations, and experiments are intertwined, in an assemblage that is a system itself, namely the model-simulation-experiment (MSE) system. We argue that validation is part of the whole MSE system and is contingent upon 1) understanding and coping with sources of biovariability; 2) testing and developing robust techniques and tools as a prerequisite to conducting physiological investigations; 3) defining and adopting standards to facilitate the interoperability of experiments, models, and simulations; and 4) understanding physiological validation as an iterative process that contributes to defining the specific aspects of cardiac electrophysiology the MSE system targets, rather than being only an external test, and that is driven by advances in experimental and computational methods and the combination of both.
Visual tools and instruments have been a focal point of historical, social and cognitive studies of science for quite some time, and even more so with the onset of the digital era. Profound questions about the nature of scientific knowledge are posed by the plethora of digital images and computational visualizations to be found in scientific domains. Currently, we are seeing the emergence of a new generation of computational and digital tools which are fast becoming entrenched in all research domains across science, social science and the humanities, and which are even constitutive of new cross-cutting domains. It remains unclear which distinctions become important now that the predominant form of picturing is computational or in what specific ways this makes a difference.
This special issue consists of a collection of papers that address different aspects of the methodological and theoretical questions raised by computational forms of picturing.
This is the introduction to the Special Issue of Information, Communication and Society, on Law and Ethics in e-Social Science.

Table of Contents:
ETHICAL IMPLICATIONS OF LIFESTYLE MONITORING DATA IN AGEING RESEARCH
Alison Bowes, Alison Dawson & David Bell, 5-22
RETHINKING RESEARCH ETHICS FOR MEDIATED SETTINGS
Anne Beaulieu & Adolfo Estalella, 23-42
AGILE ETHICS FOR MASSIFIED RESEARCH AND VISUALIZATION
Fabian Neuhaus & Timothy Webmoor, 43-65
DEFINING ‘PERSONAL DATA’ IN E-SOCIAL SCIENCE
Christopher Millard & W. Kuan Hon, 66-84
DATA PROTECTION, FREEDOM OF INFORMATION AND ETHICAL REVIEW COMMITTEES
Policies, practicalities and dilemmas
Andrew Charlesworth, 85-103
CONSTRUCTING THE LABYRINTH
The impact of data protection on the development of ‘ethical’ regulation in social science
David Erdos, 104-123
THE ETHICAL WORK THAT REGULATIONS WILL NOT DO
Annamaria Carusi & Giovanni De Grandis, 124-141
Ethical concerns in e-social science are often raised with respect to privacy, confidentiality, anonymity and the ethical and legal requirements that govern research. In this article, the authors focus on ethical aspects of e-research that are not directly related to ethical regulatory frameworks or requirements. These frameworks are often couched in terms of benefits or harms that can be incurred by participants in the research. The authors shift the focus to the sources of value in terms of which benefits or harms are understood in real social situations. A central claim of this paper is that the technologies that are used for research are not value-neutral, but serve to reinforce some values at the expense of others. The authors discuss databases, modelling and simulation, and network analysis as examples of technologies which affect the articulation of values. A view of e-social science as a techno-scientific constellation of researchers, technologies and society, in which values are always already embedded, is put forward as a basis for a view of ethics as reflexive and active engagement, conducted with awareness. Methodological pluralism and proactive openness are also proposed as responses to this view of the ethical dimensions of e-social science.

Can also be found on SSRN: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1994451
The borders between the physical and the virtual are ever-more porous in the daily lives of those of us who live in Internet enabled societies. An increasing number of our daily interactions and transactions take place on the Internet. Social, economic, educational, medical, scientific and other activities are all permeated by the digital in one or other kind of virtual environment. Hand in hand with the ever-increasing reach of the Internet, the digital and the virtual, go concerns about trust. In the increasing numbers of cross-disciplinary attempts to understand the way that the Internet is changing our societies, ‘trust’ is a truly cross-boundary word, used just as frequently by computer scientists as it is by economists, sociologists and philosophers. Concerns in the name of trust are articulated about the objects and artifacts found, accessed or bought on the Internet, about the people with whom we interact on the Internet, and about the technological systems and infrastructures that enable us to carry out activities of different types. This paper reflects on the implications for trust of the way we shape our technologies and they in turn shape us, for example, in the way we trust and the extent to which we can trust ourselves as trusters. The account I am working towards is an ecological and co-evolutionary view of trust and technologies, which attempts to hold in view the complex inter-relationships between the agents and other entities within and across environments. First, I consider the ways in which problems of justifying trust are analogous to problems of justifying knowledge, and claim that trust, like knowledge, cannot be justified from an external position. Second, I outline an account of internal relations drawn from phenomenology. This is followed by a discussion of three aspects of trust which are internally related to it: value, reason and morality.

Also available on SSRN: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1929434
The philosopher of art Roger Scruton has claimed that photographic images are not representations, on the basis of the role of causal rather than intentional processes in arriving at the content of a photographic image (Scruton 1981). His claim was controversial at the time, and still is, but had the merit of being a springboard for asking important questions about what kinds of representation result from the technologies used in depicting and visualising. In the context of computational picturing of different kinds, in imaging and other forms of visualisation, the question arises again, but this time in an even more interesting form, since these techniques are often hybrids of different principles and techniques. A digital image results from a complex interrelationship of physical, mathematical and technological principles, embedded within human and social situations. This paper consists of three sections, each presenting a view of the question whether digital imaging and digital visual artefacts generally are representations, from a different perspective. These perspectives are not representative, but aim only to accomplish what Scruton’s paper did succeed in accomplishing, that is, being a provocation and a springboard for a broader discussion.

Also available on SSRN: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1929438
Volunteer computing projects (VCPs) have been set up by groups of scientists to recruit members of the public who are asked to donate spare capacity on their personal computers to the processing of scientific data or computationally intensive models. VCPs serve two purposes: to acquire significant computing capacity and to educate the public about science. A particular challenge for these scientists is the retention of volunteers as there is a very high drop-out rate. This paper develops recommendations for scientists and software engineers setting up or running VCPs regarding which strategies to pursue in order to improve volunteer retention rates. These recommendations are based on a qualitative study of volunteers in a VCP (climateprediction.net). A typology of volunteers has been developed, and three particularly important classes of volunteers are presented in this paper: for each type of volunteer, the particular benefits they offer to a project are described, and their motivations for continued participation in a VCP are identified and linked to particular strategies. In this way, those setting up a VCP can identify which types of volunteers they should be particularly keen to retain, and can then find recommendations to increase the retention rates of their target volunteers.
This study investigated international developments in Virtual Research Communities (VRCs) and evaluated them in relation to the activities in the JISC’s VRE programme. The study examined programmes in a number of key countries, along with significant projects and communities, as well as some countries where developments on this front are just beginning. There has been a great deal of activity over the past few years in terms of prototype and demonstration systems moving into the mainstream of research practice. Notable trends are emerging as researchers increasingly apply collaborative systems to everyday research tasks.
A transcript of an event held at the Oxford e-Research Centre on 25th February, 2009. The group included computational, mathematical and experimental biologists, philosophers and historians of science discussing issues relating to modelling, simulation, experiment, quantitative and qualitative approaches, institutional aspects of research, collaboration, and education, among other topics.
Knowledge does not float free of the technologies available for its production and presentation. The intimate connection between ideas and praxis - embodied, technological, social - exemplified in any knowledge practice is, in the terms of Ihde & Selinger (2004), an 'epistemology engine'. This refers to the material-semiotic connections that obtain for any specific rendering of an idea. Often this material-semiotic connection is easier to recognise in the case of art than in that of knowledge, where it appears more-or-less obvious that the rendering of an idea in poetic rather than prose form, in musical rather than linguistic form, in plastic rather than digital form, makes a difference to the idea. However, it is also recognisable (if not always actually recognised) in science, where there is a keen awareness of visual media alongside or instead of discursive media.

Ideas on the Internet shift and change as they pass through different networks of meaning production and communication, in different media and modalities. Different disciplines and modes of knowledge have either embraced these possibilities of transmogrification or remained aloof. Philosophy is one of the latter, and seems still steadily rooted to the discursive world. However, as a discipline, it overlaps in interesting ways both with science and with art. What are the epistemology-engines that apply to philosophy, and are there specific philosophy-engines? This is the background against which the applications to e-learning in philosophy will be considered.

In previous work, I claimed that the nature of philosophical argument cannot simply be assumed to remain constant even in the use of relatively simple discursive technologies such as discussion boards (Carusi 2006). In the present paper I consider a range of other technologies that form the technological culture of philosophy, or which mediate it. The paper aims to open a line of enquiry into these underlying technologies and the kinds of philosophy-engine that emerge from them, individually or by way of a convergence of a set of technologies. In particular, I focus on text-mining techniques, visualisations and modelling, showing what potential they have for disturbing, derailing, re-shaping or transforming the mode, form or substance of philosophy.

Also available on SSRN: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1505970
In this issue, Pila (2009) has criticised the recommendations of Jirotka et al. (2005), requirements engineers involved in the design of a grid technology to support distributed readings of mammograms. The disagreement between them turns on the notion of “biographical familiarity” and whether it can be a sound basis for trust in the performances of professionals such as radiologists. In the first two sections, this paper gives an interpretation of the position of each side in this disagreement and their recommendation for the design of technology for distributed reading, and in the third the underlying reasons for this disagreement are discussed. It is argued that Pila, in attempting to make room for mistrust as well as trust, brings to the fore the question of having and reflecting upon reasons for trust or mistrust. Pila holds that biographical familiarity is not a sound reason for trust/mistrust, as it seems to obliterate the possibility of mistrust. In response to her proposal, an analysis is proposed of the forms of trust involved in biographical familiarity. In particular, implicit trust is focused upon—as a form of trust in advance of reasons, and as a form of trust contained (in the logical sense) within other reasons. It is proposed that implicit trust has an important role in establishing an intersubjectively shared world in which what counts as a reason for the acceptability of performances such as readings of X‐rays is established. Implicit trust, therefore, is necessary for professionals to enter into a “space of reasons”. To insist upon judgements made in the absence of the form of implicit trust at play in biographical familiarity is to demand that radiologists (and other relevantly similar professionals) make judgements regarding whether to trust or mistrust on the basis of reasons capable of being reflected upon, but at the same time to leave them without reasons upon which to reflect.

Also available on SSRN: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1506043
Researchers in the social sciences are increasingly encouraged or obliged to deposit data in digital archives for greater transparency of research or for secondary use by other researchers. However, digital archives raise many ethical challenges at the institutional, disciplinary and personal level, and researchers can find themselves caught between conflicting requirements. This article considers the ethical challenges raised by qualitative data in particular, showing what specific challenges qualitative researchers face. There is generally a lack of policy or guidelines as to how to deal with digital data, or else there are conflicting requirements set by funding and academic institutions and by the law. In the face of this, researchers themselves need to be aware of the ethical and legal dimensions of their data, so that they are in the best position to enter into negotiations concerning whether and how it is archived. The options for archiving are outlined, and an interdisciplinary approach is recommended.


Also available on SSRN: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1929485
The collaborative ‘Big Science’ approach prevalent in physics during the mid- and late-20th century is becoming more common in the life sciences. Often computationally mediated, these collaborations challenge researchers’ trust practices. Focusing on the visualisations that are often at the heart of this form of scientific practice, the paper proposes that the aesthetic aspects of these visualisations are themselves a way of securing trust. Kant’s account of aesthetic judgements in the Third Critique is drawn upon in order to show that the image-building capability of imagination, and the sensus communis, both of which are integral parts of aesthetic experience, play an important role in building and sustaining community in these forms of science. Kant’s theory shows that the aesthetic-stylistic appeal of scientific visualisations is not isolated from two other dimensions of the visualisations, the cognitive-epistemic and the interpersonal, and that in virtue of these inter-relationships, visualisations contribute to building up the intersubjectively shared framework of agreement which is basic for trust.
The article is an exploration of online reading from the perspective of theories of reading and interpretation based on literary theory and the phenomenology of reading literary text. One of its aims is to show that such theories can make a contribution to our understanding of reading and to our design of online reading spaces. The precursor of this stance is the form of hypertext theory originally proposed by George Landow, which predicted radical changes in reading practices with an impact not only on literature but on education in general. The prediction has been slow to be verified and has been criticized by empirical and psychological studies. In this article, hypertext theory is compared to the phenomenology of reading linear literary text, with particular attention paid to the role played by the notion of a text, work or ‘whole’ which is constructed or produced during the course of reading. I show that the active and engaged reading predicted by hypertext theory is available in reading linear literary text, and to a higher degree than in reading hypertext, and consider ways in which the kinds of reading process which occur in reading literature can be generalized to reading for other higher education purposes. Finally, I speculate as to the range of online technologies that could be used to encourage these reading processes, and propose an alternative online reading space.
In 2000 the UK Government launched a major new initiative, the UK eUniversity (UKeU), to capitalize on the potential of e-learning. With over £60 million of investment, the UKeU was created to act as a broker for existing universities, marketing online degrees from British universities. The UKeU represented the most important foray into e-learning yet undertaken in the UK and was certainly also one of the most significant internationally. As Conole et al. observed: At its launch the then secretary of state proudly announced that: ‘… it is clear that virtual learning is an industry which is striding forward all around us …’ (Blunkett, 2000). When it collapsed only five years later, Sheerman suggested the investment had been ‘… a disgraceful waste of public money …’ (Sheerman, 2005). Its early demise sounds a warning note to all of us involved in e-learning. It is important that we learn from this experience so as not to replicate its mistakes, but also not to allow its failure on some levels to drown out the enormous potential and good practice which it instituted on other levels.
Computational modelling and simulation have for some time received a great deal of attention in medical research. For the last decade major funding resources in many countries have been channelled into developing a modelling and simulation research programme, and into bringing researchers with backgrounds in mathematics, computer science and engineering into the medical sciences. Computational modelling and simulation offered the great promise of achieving results that could not be achieved experimentally, and the medical research community has eagerly awaited these results. One of the most compelling promises made by this programme of research has been that of delivering new methods for personalised medicine (Hunter et al 2010). Yet the real challenges of achieving these results are now beginning to become apparent, and there is increasing scepticism that they can actually be met. As a senior figure in clinical research awaiting the fulfilment of the promise of computational modelling remarked, ‘The honeymoon is over’. What underlies this increasing scepticism – at least in the area of physiology with its concomitant medical applications – is the scarcity of actual examples of validations of models against experimental data (Carusi, Burrage and Rodriguez 2012: H145).

This paper discusses two possible reasons for this scarcity, and their effects on the promise of personalised medicine. The first is that in physiological modelling, the effort has gone into constructing mathematical models capable of multi-scale integration (that is, integrating the different levels of a physiological process from the sub-cellular level up to the whole organ level). The role of simulations is often defined from the perspective of mathematics, as that of solving the equations of the models. That is, simulations are geared towards the models rather than towards experiments, and the validation of the models is mathematically rather than experimentally defined. This is a matter of disciplinary practice: if the construction of the models is carried out by researchers who identify themselves primarily as mathematicians (rather than engineers, for example), they will tend to be interested in the solution of the equations that make up the model, but they are not used to thinking in terms of hypothesis and test, which is what is required for entering an experimental paradigm. The shift from a mathematical to an experimental epistemic practice is the first issue that will be discussed in the presentation. A second, related issue is that even when comparisons with experiments are carried out, the pervasive variability of biological processes makes it very difficult to interpret and compare experimental and computational results. Ironically, this is particularly true of the results of multi-scale integrated models, which are precisely the strength of computational modelling and simulation over experimental methods. For clinical researchers the variability of biological processes is the real issue. In their eyes, it does not help to develop a multi-scale integrated model which then becomes difficult to validate, and even more difficult to apply because of variability. The different attitudes towards variability will be the second point discussed.

A different view of the relationship between modelling and simulation is required if the promise of computational modelling for medical science is to be even part-way fulfilled. Drawing upon studies such as those of Humphreys (2004), Varenne (2007) and Winsberg (2010), this presentation puts forward a suggestion for reformulating this relationship which has implications for the interdisciplinary epistemology of computational modelling and simulation in the medical sciences.

Carusi, Burrage and Rodriguez (2012) Bridging experiments, models and simulations: an integrative approach to validation in computational cardiac electrophysiology. American Journal of Physiology – Heart.  vol. 303 no. 2 H144-H155.
Carusi, Burrage and Rodriguez (forthcoming 2013) Model Systems in Computational Systems Biology. Juan Duran and Eckhart Arnold (Eds.): Computer Simulations and the Changing Face of Scientific Experimentation, Cambridge Scholars Publishing.
Galison, P. (1996). Computer Simulations and the Trading Zone. The Disunity of Science: Boundaries, Contexts, and Power. P. Galison, Stump, D.J. Stanford, California, Stanford University Press: 118-157.
Humphreys, P. (2004). Extending Ourselves: Computational Science, Empiricism, and Scientific Method. Oxford, Oxford University Press.
Hunter, P. et al (2010) Vision and strategy for the Virtual Physiological Human. Phil. Trans. R. Soc. A 13 vol. 368 no. 1920 2595-2614.
Varenne, F. (2007). Du Modèle à la Simulation Informatique. Paris, Vrin.
Winsberg, E. (2010). Science in the Age of Computer Simulations. University of Chicago Press.
The presentation discusses the relationship between how models are conceptualised and constructed on one hand and validation on the other, arguing that a systems attitude should be adopted to think about modelling as well as about the biological domain. It argues that scientists may do well to focus on modelling as a process that aims to build more coherent integrations of models, simulations and experiments, rather than to focus on finished models as representations of a target domain. The further implications of this strategy for broaching central medical focal areas, such as categorisations of disease as well as conceptions of the patient, are also touched upon.
In order to be successful as a research programme in the life sciences, modelling and simulation need to be meaningfully connected to experiments. Indeed, the realism of the model depends upon successfully making this connection. This paper discusses the crucial role of measurement in bringing about this interconnection.

Models and simulations are empty of empirical content without being parameterised by data acquired from experiments; and they further require connection with the experiments in order to be validated. There is still no consensus in the field regarding how validation should be considered, and this is exacerbated when the models shift outside of disciplinary domains, for example, when they shift from settings dominated by mathematics and engineering to settings dominated by clinical aims and concerns. In Carusi, Burrage and Rodriguez (2012 and 2013), we showed that experiment, model and simulation should be considered to be a hybrid system of interconnected processes in order to interpret the results of any validation test. Since it cannot be assumed in advance that the results of laboratory experiments and computational simulations can be meaningfully compared to each other in a validation test, we proposed that the iterative relations between these stages of the process go towards establishing grounds of comparability. This paper discusses two examples of the way in which comparability is dealt with in the cardiac modelling research programme: one taken from the inception of this experimental system, when the Hodgkin-Huxley model of electrical excitation in nerve cells was first adapted for cardiac cells, and one taken from a current development in this research programme, which is developing a population of models approach to exploring the interconnections between models and experiments. These examples will show how quantitative results of modelling and experimenting are interpreted, established as significant, and result in further articulations of both modelling source and target domain: these are all aspects in which the emerging grounds of comparability are manifested in the modelling and experimenting process.

The second part of the paper considers what philosophical accounts might be given of the way in which grounds of comparability come to be established, and draws upon accounts of art and literature for this. Grounds of comparability have to do with a relationship at the heart of ‘realist’ art and literature, that is the ways in which symbolic systems relate in a generative way to objects in the world. Two accounts are particularly promising: The first is Joseph Rouse’s description of experimental systems as ‘materialized fictional “worlds”’ which are domain constituting in that they ‘help constitute the fields of possible judgment and the conceptual norms that allow [conceptualizable] features to show themselves intelligibly’ (51). The second is Merleau-Ponty’s idea of the coherent deformations brought about by style, and the way in which these institute (rather than constitute) systems of equivalence which articulate a field of interactions in which things can be experienced or counted as equivalent. These two accounts – the constitutive and the institutive – approach the process of establishing measurements that count from different directions: Rouse’s constitutive approach from a conceptual standpoint, and Merleau-Ponty’s from a non-conceptual standpoint.

The presentation will focus on the constitutive account, and point to the gaps in it which call for something closer to the institutive account.

[NOTE: Due to illness, I was unable to attend the conference and the paper was not delivered.]

References

Carusi, A., Burrage, K., Rodriguez, B. (2012) Bridging experiments, models and simulations: an integrative approach to validation in computational cardiac electrophysiology. American Journal of Physiology – Heart. vol. 303 no. 2 H144-H155.

Carusi, A., Burrage, K., Rodriguez, B. (2013) Model Systems in Computational Systems Biology. In Juan Duran and Eckhart Arnold (Eds.): Computer Simulations and the Changing Face of Scientific Experimentation, Cambridge Scholars Publishing.

Merleau-Ponty, M. (1993) Indirect Language and the Voices of Silence. In The Merleau-Ponty Aesthetics Reader: Philosophy and Painting. Northwestern University Press.

Rouse, J. (2009) Models as fictions. In Mauricio Suarez (Ed.): Fictions in Science: Philosophical Essays on Modeling and Idealization. Routledge.
'Seeing is believing' and 'appearances are deceptive': both of these attitudes are held towards visual presentations of data or evidence in scientific contexts. Which attitude prevails is largely a matter of trust. When the visualisations in question are also relatively new, and relatively opaque to many of their users in multidisciplinary or multi-sector contexts, trust is even more likely to give way to mistrust. In this presentation, I argue that the alternations between trust and mistrust are often inter-related with questions about when a visualisation is taken to 'really' represent, or to be realistic, or what about a visualisation might stray away from these goalposts. The presentation explores two spheres of intersection: between trust and representation/realism in visualisations, and between science and art, showing how ways of thinking about representation and realism in art might be deployed in science.
The terms ‘realistic’ and ‘fictional’ are both used of scientific models, the first to describe something which modelling aims at, the second to try to get to grips with what is conceived as their rather problematic relation to truth and belief.  Questions about realism have come under scrutiny in numerous philosophical and sociological studies of modelling, whereas trying to work out the way in which models are fictions has been carried out mostly in philosophy of science.  Sociological studies of realism (sometimes conflated with representationalism) in science tend to show the social constructedness of realism, and accounts of models as fictions in philosophy of science tend to find the term useful because of what models and fictions purportedly have in common, a kind of deficit of truth or belief.  There is a much richer repertoire of analyses and reflections on fiction in literary and critical theory which could instead bring different perspectives onto models in science. In this paper, I describe the aims and self-reflections of 19th century realism and the analysis and critique to which these have been subjected in literary and critical theory, and bring these into dialogue with STS constructivism on one hand, and philosophical fictionalism about models on the other.  Finally, I outline a different account of fictional realism, which takes the formative capacity of fictions seriously, and gets us out of the confines of a narrow realism/anti-realism distinction.
Co-author and co-presenter: Sophia Efstathiou. Modelling and simulation for the purposes of scientific research often occurs in contexts which are highly interdisciplinary and often distributed across industrial and academic sectors and different geographical locations. Modelling techniques are also often predicated upon the use of sophisticated and highly specialised technologies with which researchers may be variously familiar. This increases the opacity of the research domain for any one researcher relative to the practices of her colleagues, and undermines her full understanding of her research output. These social and technological factors need to be taken into consideration in broaching epistemological questions relating to the warrant and epistemic value of models and simulations in practical contexts of science. The aim of this presentation is to argue for a socio-technological epistemology of modelling and simulation work where trust (Gr. pistis) plays a pivotal role. Acknowledging the importance of trust brings an ineliminable moral component into the epistemology of modelling and simulation as practiced in large-scale science (Hardwig 1991).

Two types of trust are discussed: 1. trust based on reasons or evidence, what we may call epistemic or inferential trust (following Hume 1777), and 2. a mode of trust in advance of reasons, what we call basic, implicit or nonepistemic trust (following Reid 1764). This distinction tracks two alternative conceptions of the Greek pistis as 1. belief, and 2. faith. Part of our aim is to explore these dual aspects of pistis, and the intrinsic part it plays in epistemic practices that purport to stand (istamai) on top (epi) of the world. To that effect we use two studies of two modelling groups working in the life sciences, in the U.K. and in Norway respectively. Through a focus on the epistemic work involved in the dry-lab modelling of wet-lab processes, and on how this knowledge is managed by communities of researchers, we highlight the multiple sites within large-scale research where trust is a vital infusion.

Our first case study is an example of modelling biological processes: computational physiology requires a high degree of interdisciplinarity, relying on collaborations between experimentalists and computational modellers: biologists, mathematicians and engineers with distinctive epistemic cultures. Through a detailed case study of the collaboration among these groups, we show how the acceptance of models in practical settings is mediated by moral and aesthetic aspects of the models, which create a common mode of perception through which what counts as evidence for and against the model is partially constituted (Carusi 2008).

Our second example concerns modelling knowledge of biological processes: systems biology is increasingly focused on managing large sets of data published across various life science fields, using multiple terminologies, different parts of different organism systems and diverse methodologies to study what are hoped to be basic, shared processes and entities. Studying the development of an interface for data entry and retrieval shows how prima facie collective epistemic trust, based on the choice of accepted ontologies and reporting styles, is underwritten by (the expectation of) basic trust between research groups and experts. Conventional or seemingly arbitrary choices are supported by moral expectations and aesthetic ‘buying in’.

Both when scientists are modelling processes and when they are modelling their knowledge of processes, basic trust, with its concomitant moral and aesthetic values, plays an important role."
‘All models are wrong but some are useful’ is something of a mantra for modellers everywhere. It is often invoked to defend models in the face of criticism of their approximate, incomplete, or sometimes downright misleading nature. The ‘wrongness’ of models is couched in the language of error (for which tolerance is needed), lies (giving rise to the controversial nature of, for example, climate models) or fictions (harmless or useful pretences or make-believe). In scientific modelling, (mis)representation goes hand in hand with representation.

In this talk, I explore these various modes of (mis)representation, all of which posit a form of deficit against which models are judged. I propose a different mode of understanding, in terms of the notion of a gap or interstice (which Merleau-Ponty labels ‘la déhiscence’), or a relation of negativity between model source and target, as underlying the possibility of both representation and misrepresentation. The relation of ‘not’ between model source and target (ironically rendered by Magritte’s ‘This is not a pipe’) is also, for scientific modelling, the condition for the usefulness of models in the scientific domain, and that which makes them productive, as it is the source of their manipulability, their opening to action and practice. The way in which this operates in scientific practice will be exemplified through examples from the process of constructing and validating models in biology.
Digitalization and computerization are now pervasive in science. This has deep consequences for our understanding of scientific knowledge and of the scientific process, and challenges longstanding assumptions and traditional frameworks of thinking of scientific knowledge. Digital media and computational processes challenge our conception of the way in which perception and cognition work in science, of the objectivity of science, and the nature of scientific objects. They bring about new relationships between science, art and other visual media, and new ways of practicing science and organizing scientific work. Not least, new visual media are being adopted by science studies scholars in their own practice. This volume gathers together thirteen contributions from science studies scholars from anthropology, visual studies and the sociology, history and philosophy of science, reflecting on the way that scientists use images in this age of computerization, and on the way digital technologies are affecting the study of science.

Contributors were involved with the Oxford University conference in 2011, 'Visualisation in the Age of Computerisation', and include:

Chiara Ambrosio,
Anne Beaulieu,
Andreas Birkbak,
Annamaria Carusi,
Lisa Cartwright,
Matt Edgeworth,
Peter Galison,
Aud Sissel Hoel,
Torben Elgaard Jensen,
Michael Lynch,
Anders Koed Madsen,
Anders Kristian Munk,
David Ribes,
Kathryn de Ridder-Vignone,
Tom Schilling,
Alma Steingart,
Timothy Webmoor,
Steve Woolgar,
Albena Yaneva


REVIEWS:
"The STS perspective provides a thoughtful counterpoint in the midst of rapid technological change across science and engineering . . . The editors provide a strong introductory essay that sets a clear context for the various essays that follow, and the quality of the writing across the book is very high. The articles are specialized and intended for readers with strong backgrounds in STS." - R. A. Kolvoord, James Madison University
Crowdsourcing is reshaping key gatekeeping mechanisms in healthcare, such as regulatory tests and clinical trials. This project will compare science and patient communities in order to identify the main opportunities and challenges that this redistribution of knowledge poses for healthcare. The project focuses on the interrelationship between epistemic, social, pragmatic and ethical drivers in four main areas: 1. distribution and assessment of information and knowledge; 2. integrity and robustness of research; 3. ethical questions raised by the allocation of responsibility and the management of risk; 4. social acceptance and adoption of new modes of knowledge gatekeeping by scientists and patients. The main activities of the project are 1) to conduct comparative pilot studies of an emerging science community crowdsourcing information and knowledge for regulatory tests for drug assessment, and of patient communities that have produced patient-led clinical trials and similar initiatives that inform patient choices and apply pressure on healthcare providers; and 2) to build an interdisciplinary and international community of researchers who will be in a position to collaborate on key questions and challenges in crowdsourcing for health, and to inform the ongoing development of these resources.
Neuroimaging enjoys an increasing prominence, not only among medical doctors, neuroscientists and philosophers, but in society at large. Brain images are deeply compelling, and are claimed to provide windows into the living brain. Yet what these images really show remains a debated issue. The research project Picturing the Brain: Perspectives on Neuroimaging seeks to deepen our understanding of the epistemological roles neuroimaging technologies play in the conduct and communication of medicine and science. The primary objective is, more precisely, to develop a fine-grained understanding of socio-cultural and ethical issues that arise in relation to current applications of these technologies, as they are put to use as cognitive tools, as perceptual prostheses, and as visual rhetoric. To pursue this goal, we will carry out interactionist in-depth studies of the design and use of two key applications of neuroimaging, brain mapping and neuronavigation, proceeding from these to questions concerning computational brain modelling and simulation in science. The project will also investigate prospects and issues relating to the persuasive force of neuroimaging against the background of the current overwhelming demand for brain images. This includes exploring issues relating to neuroenhancement and to the ways that neuroimaging reframes the brain-mind relationship, fostering deep changes in how humans perceive themselves. The project is interdisciplinary and allows researchers with backgrounds in media studies, philosophy, digital media engineering, medical imaging, neuroscience, and creative arts to work together on specific tasks in varying configurations. The research is divided into three work packages focusing, respectively, on cognitive, prosthetic, and rhetorical functions of neuroimaging. A fourth package takes the form of a project laboratory for experimenting with different modes of integrating science, technology and society through artistic interventions.