While in principle probabilistic logics might be applied to solve a range of problems, in practice they are rarely applied at present. This is perhaps because they seem disparate, complicated, and computationally intractable. However, we shall argue in this programmatic volume that several approaches to probabilistic logic fit into a simple unifying framework: logically complex evidence can be used to associate probability intervals or probabilities with sentences.
Specifically, we show in Part I that there is a natural way to present a question posed in probabilistic logic, and that various inferential procedures provide semantics for that question: the standard probabilistic semantics (which takes probability functions as models), probabilistic argumentation (which considers the probability of a hypothesis being a logical consequence of the available evidence), evidential probability (which handles reference classes and frequency data), classical statistical inference (in particular the fiducial argument), Bayesian statistical inference (which ascribes probabilities to statistical hypotheses), and objective Bayesian epistemology (which determines appropriate degrees of belief on the basis of available evidence).
Further, we argue, there is the potential to develop computationally feasible methods to mesh with this framework. In particular, we show in Part I how credal and Bayesian networks can naturally be applied as a calculus for probabilistic logic. The probabilistic network itself depends upon the chosen semantics, but once the network is constructed, common machinery can be applied to generate answers to the fundamental question introduced in Part I.
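The fundamental question can be illustrated in miniature under the standard probabilistic semantics. A minimal sketch (the sentences and numbers are illustrative, not drawn from the book): given premises a with probability 0.7 and a → b with probability 0.9, find the interval of probabilities that attaches to the conclusion b, by optimising over all probability functions on the truth assignments to {a, b} with a linear program.

```python
# Standard probabilistic semantics as a linear program: the unknowns are
# the probabilities of the four truth assignments (a, b) = TT, TF, FT, FF.
import numpy as np
from scipy.optimize import linprog

# Constraints on the premises:
#   P(a)      = p_TT + p_TF        = 0.7
#   P(a -> b) = p_TT + p_FT + p_FF = 0.9   (material conditional)
#   p_TT + p_TF + p_FT + p_FF      = 1     (normalisation)
A_eq = [[1, 1, 0, 0],
        [1, 0, 1, 1],
        [1, 1, 1, 1]]
b_eq = [0.7, 0.9, 1.0]
obj = np.array([1, 0, 1, 0])  # P(b) = p_TT + p_FT

lo = linprog(obj, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1)).fun
hi = -linprog(-obj, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1)).fun
print(f"P(b) lies in [{lo:.2f}, {hi:.2f}]")  # the interval [0.60, 0.90]
```

Every probability function satisfying the premises assigns b a value in this interval, and every value in it is attained by some such function; that is why, on this semantics, the answer to the fundamental question is an interval rather than a point.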
How strongly should you believe the various propositions that you can express?
That is the key question facing Bayesian epistemology. Subjective Bayesians hold that it is largely (though not entirely) up to the agent as to which degrees of belief to adopt. Objective Bayesians, on the other hand, maintain that appropriate degrees of belief are largely (though not entirely) determined by the agent's evidence. This book states and defends a version of objective Bayesian epistemology. According to this version, objective Bayesianism is characterized by three norms:
· Probability - degrees of belief should be probabilities
· Calibration - they should be calibrated with evidence
· Equivocation - they should otherwise equivocate between basic outcomes
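The interplay of the three norms is standardly cashed out via entropy maximisation: among all probability functions (Probability) satisfying the evidential constraints (Calibration), adopt the one of maximal entropy (Equivocation). A hedged numerical sketch, with an assumed toy constraint rather than anything from the book: two atoms a, b, where the evidence fixes P(a) = 0.8.

```python
# Probability restricts us to probability functions over the four states
# of atoms a, b; Calibration imposes P(a) = 0.8; Equivocation selects the
# entropy maximiser among the survivors.  Numbers are illustrative only.
import numpy as np
from scipy.optimize import minimize

def neg_entropy(p):
    p = np.clip(p, 1e-12, 1.0)  # guard against log(0)
    return np.sum(p * np.log(p))

constraints = [
    {"type": "eq", "fun": lambda p: p.sum() - 1.0},      # Probability
    {"type": "eq", "fun": lambda p: p[0] + p[1] - 0.8},  # Calibration: P(a) = 0.8
]
res = minimize(neg_entropy, x0=np.full(4, 0.25), bounds=[(0, 1)] * 4,
               constraints=constraints, method="SLSQP")
p = res.x  # states in order: (a&b, a&~b, ~a&b, ~a&~b)
print(np.round(p, 3))
```

The maximiser is p = (0.4, 0.4, 0.1, 0.1): the evidence about a is respected exactly, while b, about which the evidence is silent, remains maximally equivocated and independent of a.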
Objective Bayesianism has been challenged on a number of different fronts. For example, some claim it is poorly motivated, or fails to handle qualitative evidence, or yields counter-intuitive degrees of belief after updating, or suffers from a failure to learn from experience. It has also been accused of being computationally intractable, susceptible to paradox, language dependent, and of not being objective enough.
Especially suitable for graduates or researchers in philosophy of science, foundations of statistics and artificial intelligence, the book argues that these criticisms can be met and that objective Bayesianism is a promising theory with an exciting agenda for further research.
This volume contends that Evidential Pluralism (an account of the epistemology of causation, which maintains that in order to establish a causal claim one needs to establish the existence of a correlation and the existence of a mechanism) can be fruitfully applied to the social sciences. Through case studies in sociology, economics, political science and law, it advances new philosophical foundations for causal enquiry in the social sciences. The book provides an account of how to establish and evaluate causal claims and it offers a new way of thinking about evidence-based policy, basic social science research and mixed methods research. As such, it will appeal to scholars with interests in social science research and methodology, the philosophy of science and evidence-based policy.
This paper addresses a data integration problem: given several mutually consistent datasets each of which measures a subset of the variables of interest, how can one construct a probabilistic model that fits the data and gives reasonable answers to questions which are under-determined by the data? Here we show how to obtain a Bayesian network model which represents the unique probability function that agrees with the probability distributions measured by the datasets and otherwise has maximum entropy. We provide a general algorithm, OBN-cDS, which offers substantial efficiency savings over the standard brute-force approach to determining the maximum entropy probability function. Furthermore, we develop modifications to the general algorithm which enable further efficiency savings but which are only applicable in particular situations. We show that there are circumstances in which one can obtain the model (i) directly from the data; (ii) by solving algebraic problems; and (iii) by so...
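The setting can be illustrated in miniature (with assumed toy numbers; this is not the OBN-cDS algorithm itself, just the simplest special case): one dataset measures variables A and B, another measures B and C, and the two agree on the shared marginal over B. The maximum-entropy joint consistent with both renders A and C conditionally independent given B, so it can be written down directly from the data as P(a,b,c) = P(a,b)·P(c|b).

```python
# Toy max-entropy data integration: two consistent datasets measuring
# overlapping variable subsets {A, B} and {B, C}.  Numbers are assumed.
import numpy as np

p_ab = np.array([[0.30, 0.20],   # rows: A in {0,1}
                 [0.10, 0.40]])  # cols: B in {0,1}
p_bc = np.array([[0.25, 0.15],   # rows: B in {0,1}
                 [0.45, 0.15]])  # cols: C in {0,1}

p_b = p_ab.sum(axis=0)                     # shared marginal over B
assert np.allclose(p_b, p_bc.sum(axis=1))  # the datasets are consistent

p_c_given_b = p_bc / p_b[:, None]          # P(C|B)
# Maximum-entropy joint: A and C independent given B.
joint = p_ab[:, :, None] * p_c_given_b[None, :, :]

# The joint recovers both measured distributions exactly.
assert np.allclose(joint.sum(axis=2), p_ab)  # marginalise out C
assert np.allclose(joint.sum(axis=0), p_bc)  # marginalise out A
```

In Bayesian network terms this is the chain A → B → C: the conditional independence structure is exactly what lets the max-entropy function be read off without any brute-force optimisation.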
According to objective Bayesianism, an agent's degrees of belief should be determined by a probability function, out of all those that satisfy constraints imposed by background knowledge, that maximises entropy. A Bayesian net offers a way of efficiently ...
Bayesian philosophy and Bayesian statistics have diverged in recent years, because Bayesian philosophers have become more interested in philosophical problems other than the foundations of statistics and Bayesian statisticians have become less concerned with philosophical foundations. One way in which this divergence manifests itself is through the use of direct inference principles: Bayesian philosophers routinely advocate principles that require calibration of degrees of belief to available non-epistemic probabilities, while Bayesian statisticians rarely invoke such principles. As I explain, however, the standard Bayesian framework cannot coherently employ direct inference principles. Direct inference requires a shift towards a non-standard Bayesian framework, which further increases the gap between Bayesian philosophy and Bayesian statistics. This divergence does not preclude the application of Bayesian philosophical methods to real-world problems. Data consolidation is a key cha...
Schurz (2019, ch. 4) argues that probabilistic accounts of induction fail. In particular, he criticises probabilistic accounts of induction that appeal to direct inference principles, including subjective Bayesian approaches (e.g., Howson 2000) and objective Bayesian approaches (see, e.g., Williamson 2017). In this paper, I argue that Schurz’ preferred direct inference principle, namely Reichenbach’s Principle of the Narrowest Reference Class, faces formidable problems in a standard probabilistic setting. Furthermore, the main alternative direct inference principle, Lewis’ Principal Principle, is also hard to reconcile with standard probabilism. So, I argue, standard probabilistic approaches cannot appeal to direct inference to explicate the logic of induction. However, I go on to defend a non-standard objective Bayesian account of induction: I argue that this approach can both accommodate direct inference and provide a viable account of the logic of induction. I then defend this ac...
Cognitive theorists routinely disagree about the evidence supporting claims in cognitive science. Here, we first argue that some disagreements about evidence in cognitive science are about the evidence available to be drawn upon by cognitive theorists. Then, we show that one’s explanation of why this first kind of disagreement obtains will cohere with one’s theory of evidence. We argue that the best explanation for why cognitive theorists disagree in this way is because their evidence is what they rationally grant. Finally, we explain why our view does not lead to a pernicious kind of relativism in cognitive science.
Books by Jon Williamson
1 Phyllis McKay Illari, Federica Russo and Jon Williamson Why look at causality in the sciences? A manifesto
Part II Health sciences
2 R. Paul Thompson Causality, theories and medicine
3 Alex Broadbent Inferring causation in epidemiology: Mechanisms, black boxes, and contrasts
4 Harold Kincaid Causal modelling, mechanism, and probability in epidemiology
5 Bert Leuridan and Erik Weber The IARC and mechanistic evidence
6 Donald Gillies The Russo–Williamson thesis and the question of whether smoking causes heart disease
Part III Psychology
7 David Lagnado Causal thinking
8 Benjamin Rottman, Woo-kyoung Ahn and Christian Luhmann When and how do people reason about unobserved causes?
9 Clare R. Walsh and Steven A. Sloman Counterfactual and generative accounts of causal attribution
10 Ken Aizawa and Carl Gillett The autonomy of psychology in the age of neuroscience
11 Otto Lappi and Anna-Mari Rusanen Turing machines and causal mechanisms in cognitive science
12 Keith A. Markus Real causes and ideal manipulations: Pearl’s theory of causal inference from the point of view of psychological research methods
Part IV Social sciences
13 Daniel Little Causal mechanisms in the social realm
14 Ruth Groff Getting past Hume in the philosophy of social science
15 Michel Mouchart and Federica Russo Causal explanation: Recursive decompositions and mechanisms
16 Kevin D. Hoover Counterfactuals and causal structure
17 Damien Fennell The error term and its interpretation in structural models in econometrics
18 Hossein Hassani, Anatoly Zhigljavsky, Kerry Patterson, and Abdol S. Soofi A comprehensive causality test based on the singular spectrum analysis
Part V Natural sciences
19 Tudor M. Baetu Mechanism schemas and the relationship between biological theories
20 Roberta L. Millstein Chances and causes in evolutionary biology: How many chances become one chance
21 Sahotra Sarkar Drift and the causes of evolution
22 Garrett Pendergraft In defense of a causal requirement on explanation
23 Paolo Vineis, Aneire Khan and Flavio D’Abramo Epistemological issues raised by research on climate change
24 Giovanni Boniolo, Rossella Faraldo and Antonio Saggion Explicating the notion of ‘causation’: The role of extensive quantities
25 Miklós Rédei and Balázs Gyenis Causal completeness of probability theories – Results and open problems
Part VI Computer science, probability, and statistics
26 I. Guyon, C. Aliferis, G. Cooper, A. Elisseeff, J.-P. Pellet, P. Spirtes and A. Statnikov Causality Workbench
27 Jan Lemeire, Kris Steenhaut and Abdellah Touhafi When are graphical causal models not good models?
28 Dawn E. Holmes Why making Bayesian networks objectively Bayesian makes sense
29 Branden Fitelson and Christopher Hitchcock Probabilistic measures of causal strength
30 Kevin B. Korb, Erik P. Nyberg and Lucas Hope A new causal power theory
31 Samantha Kleinberg and Bud Mishra Multiple testing of causal hypotheses
32 Ricardo Silva Measuring latent causal structure
33 Judea Pearl The structural theory of causation
34 S. Geneletti and A.P. Dawid Defining and identifying the effect of treatment on the treated
35 Nancy Cartwright Predicting ‘It will work for us’: (Way) beyond statistics
Part VII Causality and mechanisms
36 Stathis Psillos The idea of mechanism
37 Stuart Glennan Singular and general causal relations: A mechanist perspective
38 Phyllis McKay Illari and Jon Williamson Mechanisms are real and local
39 Jim Bogen and Peter Machamer Mechanistic information and causal continuity
40 Phil Dowe The causal-process-model theory of mechanisms
41 M. Kuhlmann Mechanisms in dynamically complex systems
42 Julian Reiss Third time’s a charm: Causation, science and Wittgensteinian pluralism
Papers by Jon Williamson