Nicola Angius
  • Via Concezione 6, Messina, Sicily, Italy
This paper introduces the Global Philosophy symposium on Giuseppe Primiero's book On the Foundations of Computing (2020). The collection gathers commentaries and responses by the author with the aim of engaging with some open questions in the philosophy of computer science. Firstly, this paper introduces the central themes addressed in Primiero's book; secondly, it highlights some of the main critiques from commentators in order to, finally, pinpoint some conceptual challenges indicating future directions for the philosophy of computer science.
This paper sheds light on the shift that is taking place from the practice of 'coding', namely developing programs as conventional in the software community, to the practice of 'curing', an activity that has emerged in the last few years in Deep Learning (DL) and that amounts to curing the data regime to which a DL model is exposed during training. Initially, the curing paradigm is illustrated by means of a case study on autonomous vehicles. Subsequently, the shift from coding to curing is analysed taking into consideration the epistemological notions, central in the philosophy of computer science, of function, implementation, and correctness. First, it is illustrated how, in the curing paradigm, the functions performed by the trained model depend much more on dataset curation than on the model algorithms which, in contrast with the coding paradigm, do not comply with requested specifications. Second, it is highlighted how DL models cannot be considered implementations according to any of the available definitions of implementation that follow an intentional theory of functions. Finally, it is argued that DL models cannot be evaluated in terms of their correctness but rather in terms of their experimental computational validity.
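The coding-to-curing shift can be given a toy illustration (a minimal sketch under hypothetical data, not the paper's case study): the same unchanged learning code computes different functions depending solely on the curated data regime it is trained on.

```python
# Toy sketch: in the 'curing' paradigm, the function a trained model
# computes is fixed by the curated dataset, not by the (unchanged)
# learning code. A 1-nearest-neighbour learner trained on two different
# data regimes yields two different functions. The data are hypothetical.
def train_1nn(dataset):
    """Return the classification function induced by the dataset."""
    def classify(x):
        nearest = min(dataset, key=lambda pair: abs(pair[0] - x))
        return nearest[1]
    return classify

regime_a = [(0.0, 'stop'), (1.0, 'go')]
regime_b = [(0.0, 'go'), (1.0, 'stop')]   # same code, inverted curation

f = train_1nn(regime_a)
g = train_1nn(regime_b)
print(f(0.1), g(0.1))  # stop go
```

The point of the sketch is that no line of `train_1nn` changed between the two models; only the data regime did.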
This paper shows how safety and liveness properties are not necessarily preserved by different kinds of copies of computational artefacts and proposes procedures to preserve them that are consistent with ethical analyses on software property rights infringement. Safety and liveness are second-order properties that are crucial in the definition of the formal ontology of computational artefacts. Software copies are analysed at the level of their formal models as exact, inexact, and approximate copies, according to the taxonomy in (Angius and Primiero, 2018). First, it is explained how exact copies are the only kind of copies that preserve safety and liveness properties, and how inexact and approximate copies do not necessarily preserve them. Secondly, two model checking algorithms are proposed to verify whether inexact and approximate copies actually preserve safety and liveness properties. Essential properties of termination, correctness, and complexity are proved for these algorithms. Finally, contraction and expansion algorithmic operations are defined, allowing for the automatic design of safety- and liveness-preserving approximate copies. As a conclusion, the relevance of the present logical analysis for the ongoing debates in miscomputation and computer ethics is highlighted.
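The idea that a copy may fail to preserve a safety property can be sketched with a toy reachability check (an illustrative sketch with invented transition systems, not the model checking algorithms proposed in the paper):

```python
# Minimal sketch: a safety property -- "no reachable state is an 'error'
# state" -- holding of a system but violated by an approximate copy.
# Systems are modelled as transition systems given by an initial state
# and a successor function; the example systems are hypothetical.
from collections import deque

def reachable(init, succ):
    """Breadth-first exploration of all states reachable from init."""
    seen, frontier = {init}, deque([init])
    while frontier:
        s = frontier.popleft()
        for t in succ(s):
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return seen

def satisfies_safety(init, succ, bad):
    """True iff no reachable state is a bad ('error') state."""
    return not any(bad(s) for s in reachable(init, succ))

# Original system: states 0..3; the bad state 3 is unreachable.
orig = {0: [1, 2], 1: [2], 2: [], 3: []}
# Approximate copy: an extra transition 2 -> 3 makes state 3 reachable.
copy = {0: [1, 2], 1: [2], 2: [3], 3: []}

bad = lambda s: s == 3
print(satisfies_safety(0, orig.get, bad))  # True: original is safe
print(satisfies_safety(0, copy.get, bad))  # False: copy violates safety
```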
The philosophy of computer science is concerned with the ontological and methodological issues arising from within the academic discipline of computer science, and from the practice of software development and its commercial and industrial deployment. More specifically, the philosophy of computer science considers the ontology and epistemology of computational systems, focusing on problems associated with their specification, programming, implementation, verification and testing. The complex nature of computer programs ensures that many of the conceptual questions raised by the philosophy of computer science have counterparts in the philosophy of mathematics, the philosophy of empirical sciences, and the philosophy of technology. We shall provide an analysis of such topics that reflects the layered nature of the ontology of computational systems in Sections 1-5; we then discuss topics involved in their methodology in Sections 6-8.
Essential traits of model checking, a prominent formal method utilized in computer science to predict future behaviours of software systems, are examined here in the framework of the model-based paradigm of scientific reasoning. Models that model checking techniques enable one to develop are shown to satisfy logical requirements expressed by the set-theoretic view of scientific models. It is highlighted how model checking algorithms are able to isolate law-like generalizations holding in the model under given ceteris paribus conditions and concerning software executions. Furthermore, abstraction methodologies utilized in model checking to decrease the state space of complex models are taken to be instantiations of the general process known as Aristotelian abstraction characterizing empirical modelling. Finally, the methodological interest of the model-checking techniques is emphasized in connection with the debate concerning the epistemological status of computer science.
This paper provides a review of Raymond Turner's book Computational Artifacts. Towards a Philosophy of Computer Science. The focus is on the definition of program correctness as the twofold problem of evaluating whether both the symbolic program and the physical implementation satisfy a set of specifications. The review stresses how these are not two separate problems. First, it is highlighted how formal proofs of correctness need to rely on the analysis of physical computational processes. Secondly, it is underlined how software testing requires considering the formal relations holding between the specifications and the symbolic program. Such a mutual dependency between formal and empirical program verification methods is finally shown to influence the debate on the epistemological status of computer science.
This paper contributes to the computer ethics debate on software ownership protection by examining the ontological, methodological, and ethical problems related to property rights infringement that should come prior to any legal discussion. The ontological problem consists in determining precisely what it is for a computer program to be a copy of another one, a largely neglected problem in computer ethics. The methodological problem is defined as the difficulty of deciding whether a given software system is a copy of another system. The ethical problem corresponds to establishing when a copy constitutes, or does not constitute, a property rights infringement. The ontological problem is solved through the logical analysis of abstract machines, and the latter are argued to be the appropriate level of abstraction for software at which the methodological and the ethical problems can be successfully addressed.
The Epistemology Of Computer Simulation (EOCS) has developed as an epistemological and methodological analysis of simulative sciences using quantitative computational models to represent and predict empirical phenomena of interest. In this paper, Executable Cell Biology (ECB) and Agent-Based Modelling (ABM) are examined to show how one may take advantage of qualitative computational models to evaluate reachability properties of reactive systems. In contrast to the thesis, advanced by EOCS, that computational models are not adequate representations of the simulated empirical systems, it is shown how the representational adequacy of qualitative models is essential to evaluate reachability properties. Justification theory, if not playing an essential role in EOCS, is exhibited to be involved in the process of advancing and corroborating model-based hypotheses about empirical systems in ECB and ABM. Finally, the practice of evaluating model-based hypotheses by testing the simulated systems is shown to constitute an argument in favour of the thesis that computer simulations in ECB and ABM can be put on a par with scientific experiments.
This paper constitutes a first attempt at constructing semantic theories over institutions and examining the logical relations holding between different such theories. Our results show that this approach can be very useful for theoretical computer science (and may also contribute to the current philosophical debate regarding the semantic and the syntactic presentation of scientific theories). First we provide a definition of semantic theories in the institution theory framework - in terms of a set of models satisfying a given set of sentences - using the language-independent satisfaction relation characterizing institutions. Then we give a proof of the logical equivalence holding between the syntactic and the semantic presentation of a theory, based on the Galois connection holding between sentences and models.We also show how to integrate and combine semantic theories using colimits. Finally we establish when the output of a model-based software verification method applied to a semantic theory over an institution also holds for a semantic theory defined over a different institution.
Defining identity for entities is a longstanding logical problem in philosophy, and it has resurfaced in current investigations within the philosophy of technology. The problem has not yet been explored for the philosophy of information, and of Computer Science in particular. This paper provides a logical analysis of identity and copy for computational artefacts. Identity is here understood as the relation holding between an instance of a computational artefact and itself. By contrast, the copy relation holds between two distinct computational artefacts. We distinguish among exact, inexact and approximate copies. We use process algebra to provide suitable formal definitions of these relations, using in particular the notion of bisimulation to define identity and exact copies, and simulation for inexact and approximate copies. Equivalence is unproblematic for identical computational artefacts at each individual time and for inexact copies; we will examine to what extent the formal constraints on identity criteria discussed in the literature are satisfied by our approach. Inexact and approximate copies, in turn, are intended as a weakening of the identity relation in that equivalence and other constraints on identity are violated. The proposed approach also suggests a computable treatment of identity and copy checking.
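The bisimulation test underlying exact copies admits a naive greatest-fixpoint sketch (the labelled transition system below is a hypothetical toy, not the paper's process-algebraic formalization): start from the all-pairs relation and discard any pair in which one state has a transition the other cannot match.

```python
# Sketch: deciding bisimilarity of two states of a finite labelled
# transition system by greatest-fixpoint refinement. trans maps each
# state to its set of (label, successor) pairs.
def bisimilar(states, trans, p0, q0):
    R = {(p, q) for p in states for q in states}
    changed = True
    while changed:
        changed = False
        for (p, q) in list(R):
            # every move of p must be matched by q, and vice versa
            forth = all(any(lab2 == lab1 and (s1, s2) in R
                            for (lab2, s2) in trans[q])
                        for (lab1, s1) in trans[p])
            back = all(any(lab1 == lab2 and (s1, s2) in R
                           for (lab1, s1) in trans[p])
                       for (lab2, s2) in trans[q])
            if not (forth and back):
                R.discard((p, q))
                changed = True
    return (p0, q0) in R

# Classic example: a.(b + c) versus a.b + a.c -- mutually similar
# (hence a candidate inexact copy) but not bisimilar (not an exact copy).
states = {'p0', 'p1', 'pb', 'pc', 'q0', 'q1', 'q2', 'qb', 'qc'}
trans = {
    'p0': {('a', 'p1')},
    'p1': {('b', 'pb'), ('c', 'pc')},
    'pb': set(), 'pc': set(),
    'q0': {('a', 'q1'), ('a', 'q2')},
    'q1': {('b', 'qb')},
    'q2': {('c', 'qc')},
    'qb': set(), 'qc': set(),
}
print(bisimilar(states, trans, 'p0', 'q0'))  # False
print(bisimilar(states, trans, 'p0', 'p0'))  # True
```

The two root states offer the same traces, yet after the `a`-step one side has already committed to `b` or `c`; bisimulation detects exactly this difference, which trace equivalence misses.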
"For nearly every field of study, there is a branch of philosophy, called the philosophy of that field. …Since the main purpose of a given field of study is to contribute to knowledge, the philosophy of X is, at least in part, a branch of epistemology. Its purpose is to provide an account of the goals, methodology, and subject matter of X. (Shapiro 1983: 525)

The philosophy of computer science is concerned with those philosophical issues that arise from within the academic discipline of computer science. It is intended to be the philosophical endeavor that stands to computer science as philosophy of mathematics does to mathematics and philosophy of technology does to technology. Indeed, the abstract nature of computer science, coupled with its technological ambitions, ensures that many of the conceptual questions that arise in the philosophies of mathematics and technology have computational analogues. In addition, the subject will draw in variants of some of the central questions in the philosophies of mind, language and science. We shall concentrate on a tightly related group of topics which form the spine of the subject. These include specification, implementation, semantics, programs, programming, correctness, abstraction and computation."
The philosophy of computer science is concerned with those ontological, methodological, and ethical issues that arise from within the academic discipline of computer science as well as from the practice of software development. Thus, the philosophy of computer science shares the same philosophical goals as the philosophy of mathematics and the many subfields of the philosophy of science, such as the philosophy of biology or the philosophy of the social sciences. The philosophy of computer science also considers the analysis of computational artifacts, that is, human-made computing systems, and it focuses on methods involved in the design, specification, programming, verification, implementation, and testing of those systems. The abstract nature of computer programs and the resulting complexity of implemented artifacts, coupled with the technological ambitions of computer science, ensures that many of the conceptual questions of the philosophy of computer science have analogues in the philosophy of mathematics, the philosophy of empirical sciences, and the philosophy of technology. Other issues are specific to the philosophy of computer science. We shall concentrate on three tightly related groups of topics that form the spine of the subject. First we discuss topics related to the ontological analysis of computational artifacts, in Sections 1-5 below. Second, we discuss topics involved in the methodology and epistemology of software development, in Sections 6-9 below. Third, we discuss ethical issues arising from computer science practice, in Section 10 below. Applications of computer science are briefly considered in Section 11.
This paper addresses the methodological problem of analysing what it is to explain observed behaviours of engineered computing systems (BECS), focusing on the crucial role that abstraction and idealization play in explanations of both correct and incorrect BECS. First, it is argued that an understanding of explanatory requests about observed miscomputations crucially involves reference to the rich background afforded by hierarchies of functional specifications. Second, many explanations concerning incorrect BECS are found to abstract away (and profitably so on account of both relevance and intelligibility of the explanans) from descriptions of physical components and processes of computing systems that one finds below the logic circuit and gate layer of functional specification hierarchies. Third, model-based explanations of both correct and incorrect BECS that are provided in the framework of formal verification methods often involve idealizations. Moreover, a distinction between restrictive and permissive idealizations is introduced and their roles in BECS explanations are analysed.
This paper provides a methodological analysis of Executable Cell Biology (ECB), a current simulative approach to computational biology, showing how ECB revives the general idea of constructing theoretical models that are also executable, pursued over fifty years ago by Herbert Simon and Allen Newell within the Information Processing Psychology approach. It is highlighted, however, that ECB focuses on a more abstract model of the biological system. On the one hand, the processes of abstraction involved in the construction of ECB theoretical models allow one to omit those implementation details of the simulative program that have no theoretical value. On the other hand, the executability of the abstract model makes it possible, in general, to expand the class of predictions that can be extracted from the observation of the simulative programs' executions. Finally, focusing on ECB executable theoretical models, which are distinct from the simulative programs, poses new problems for the methodological analysis of the sciences of the artificial, in particular with reference to the role that both abstraction and idealization processes have in the construction of theoretical models and in the exploration of their relationship with the biological reality of the modelled systems.
The application of formal methods to the examination of reactive programs simulating cell systems’ behaviours in current computational biology is taken to shed new light on the simulative approaches in Artificial Intelligence and Artificial Life. First, it is underlined how reactive programs simulating many cell systems’ behaviours are more profitably examined by means of executable models of the simulating program’s executions. Those models turn out to be representations of both the simulating reactive program and of the simulated cell system. Secondly, it is highlighted how discovery processes of significant regular behaviours of the simulated system are carried out performing algorithmic verifications on the formal model representing the biological phenomena of interest. Finally, a distinctive methodological trait of current computational biology is recognized in that the advanced model-based hypotheses are not corroborated or falsified by testing the simulative program, which is not even encoded, but rather by performing wet experiments aiming at the observation of behaviours corresponding to paths in the model either satisfying or violating the hypotheses under evaluation.
Model checking, a prominent formal method used to predict and explain the behaviour of software and hardware systems, is examined on the basis of reflective work in the philosophy of science concerning the ontology of scientific theories and model-based reasoning. The empirical theories of computational systems that model checking techniques enable one to build are identified, in the light of the semantic conception of scientific theories, with families of models that are interconnected by simulation relations. The mappings between these scientific theories and the computational systems in their scope are then analysed in terms of suitable specializations of the notions of model of experiment and model of data. Furthermore, the extensively mechanized character of model-based reasoning in model checking is highlighted by a comparison with proof procedures adopted by other formal methods in computer science. Finally, potential epistemic benefits flowing from the application of model checking in other areas of scientific inquiry are emphasized in the context of computer simulation studies of biological information processing.
This paper is concerned with the construction of theories of software systems yielding adequate predictions of their target systems' computations. It is first argued that mathematical theories of programs are not able to provide predictions that are consistent with observed executions. Empirical theories of software systems are here introduced semantically, in terms of a hierarchy of computational models that are supplied by formal methods and testing techniques in computer science. Both deductive top-down and inductive bottom-up approaches to the discovery of semantic software theories are rejected in favour of the abductive process of hypothesising and refining models at each level in the hierarchy, until they become satisfactorily predictive. Empirical theories of computational systems are required to be modular, since most software verification and testing activities are modular. We argue that logical relations must thereby be defined among models representing different modules in a semantic theory of a modular software system. We also argue that scientific structuralism is unable to define the module relations needed in software modular theories. The algebraic Theory of Institutions is finally introduced to specify the logical structure of modular semantic theories of computational systems.
This commentary on John Symons’ and Jack Horner’s paper, besides sharing its main argument, challenges the authors’ statement that there is no effective method to evaluate software intensive systems as a distinguishing feature of software intensive science. It is underlined here how analogous methodological limitations characterise the evaluations of empirical systems in non-software intensive sciences. The authors’ claim that formal methods establish the correctness of computational models rather than of the represented program is here compared with the empirical adequacy problem typifying the model-based reasoning approach in physics. And the remark that testing all the paths of a software intensive system is unfeasible is related to the enumerative induction problem in the justification of empirical law-like hypotheses in non-software intensive sciences.
This paper takes part in the methodological debate concerning the nature and the justification of hypotheses about computational systems in software engineering by providing an epistemological analysis of Software Testing, the practice of observing programs' executions to examine whether they fulfil software requirements. Property specifications articulating such requirements are shown to involve falsifiable hypotheses about software systems that are evaluated by means of tests which are likely to falsify those hypotheses. Software Reliability metrics, used to measure the growth of the probability that given failures will occur at specified times as new executions are observed, are shown to involve a Bayesian confirmation of falsifiable hypotheses on programs. Coverage criteria, used to select those input values with which the system under test is to be launched, are understood as theory-laden principles guiding software tests, here compared to scientific experiments. Redundant computations, fault seeding models, and formal methods used in software engineering to evaluate test results are taken to be instantiations of some epistemological strategies used in scientific experiments to distinguish between valid and non-valid experimental outcomes. The final part of the paper explores the problem, advanced in the context of the philosophy of technology, of defining the epistemological status of software engineering by conceiving it as a scientifically attested technology.
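The Bayesian-confirmation reading of reliability growth can be illustrated with a minimal conjugate-update sketch (an assumption-laden toy, not a specific Software Reliability metric from the paper): each test run is treated as a Bernoulli trial with unknown failure probability, and observed failure-free runs shift the posterior toward reliability.

```python
# Sketch: Beta-Bernoulli updating of the per-run failure probability
# theta. The Beta(1, 1) prior and the run counts are hypothetical.
def posterior(alpha, beta, failures, successes):
    """Conjugate update of a Beta(alpha, beta) prior on theta."""
    return alpha + failures, beta + successes

def mean(alpha, beta):
    """Posterior mean of theta under Beta(alpha, beta)."""
    return alpha / (alpha + beta)

a, b = 1.0, 1.0                 # uniform prior over theta
a, b = posterior(a, b, 0, 100)  # 100 runs observed, no failure
print(mean(a, b))               # posterior mean failure probability 1/102
```

Each failure-free execution confirms, in the Bayesian sense, the hypothesis that the program meets its requirement, without ever proving it conclusively.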
Automated Software Testing (AST) using Model Checking is in this article epistemologically analysed in order to argue in favour of a model-based reasoning paradigm in computer science. Preliminarily, it is shown how both deductive and inductive reasoning are insufficient to determine whether a given piece of software is correct with respect to specified behavioural properties. Models algorithmically checked in Model Checking to select executions to be observed in Software Testing are acknowledged as analogical models which establish isomorphic relations with the target system's data set. Analogical models developed in AST are presented as abductive models providing hypothetical explanations of observed executions. The sequence of model assumption, algorithmic check, and software testing is understood as the abduction-deduction-induction process defining selective abduction, and is used to isolate a set of model-based hypotheses concerning the target system's behaviours. A manipulative abduction process is finally recognized in the practice of adapting, abstracting and refining models that do not provide successful predictions.
Questions concerning the epistemological status of computer science are, in this paper, answered from the point of view of the formal verification framework. State space reduction techniques adopted to simplify computational models in model checking are analysed in terms of Aristotelian abstractions and Galilean idealizations characterizing the inquiry of empirical systems. Methodological considerations drawn here are employed to argue in favour of the scientific understanding of computer science as a discipline. Specifically, reduced models gained by Data Abstraction are acknowledged as Aristotelian abstractions that include only data which are sufficient to examine the executions of interest. The present study highlights how the need to maximize incompatible properties is at the basis of both Abstraction Refinement, the process of generating a cascade of computational models to achieve a balance between simplicity and informativeness, and the Multiple Model Idealization approach in biology. Finally, fairness constraints, imposed on computational models to allow fair behaviours only, are defined as ceteris paribus conditions under which temporal formulas, formalizing software requirements, acquire the status of law-like statements about the software systems' executions.
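Data Abstraction in this Aristotelian sense can be sketched as an existential abstraction that collapses concrete states under an abstraction function (a toy illustration; the counter system and the parity abstraction are hypothetical, not a verification tool's machinery):

```python
# Sketch: existential abstraction of a transition system. An abstract
# transition is kept whenever some pair of concrete states it covers
# has a transition, so the abstract model over-approximates behaviour.
def abstract_model(states, trans, h):
    """h maps concrete states to abstract states."""
    abs_states = {h(s) for s in states}
    abs_trans = {(h(s), h(t)) for (s, t) in trans}
    return abs_states, abs_trans

# Concrete system: a counter incremented modulo 1000.
states = range(1000)
trans = {(i, (i + 1) % 1000) for i in states}
# Abstraction: retain only the counter's parity, the data of interest.
h = lambda s: s % 2
A, T = abstract_model(states, trans, h)
print(len(states), len(A))  # 1000 2
print(sorted(T))            # [(0, 1), (1, 0)]
```

A thousand concrete states collapse into two abstract ones, yet parity properties of the executions of interest are still checkable in the reduced model.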
This paper offers a review of Giuseppe Primiero's (2020) book "On the Foundations of Computing". Mathematical, engineering, and experimental foundations of the science of computing are examined in light of the notions of formal, physical, and experimental computational validity provided by the author. The thesis that experimental computational validity can be defined only for the algorithmic method, and not for the software development process, is challenged. The notions of computational hypothesis and computational experiment provided by Primiero (2020) are extended to the case of software development. Finally, it is highlighted how the hypothetical-deductive method is involved in the practice of using models to corroborate computational hypotheses in software testing. As a concluding remark, it is underlined how defining experimental computational validity in the context of software development offers a sound experimental foundation to the science of computing.