
Mathematical Principles of Dynamic Systems and The Foundations of Quantum Physics



Eric Tesse

arXiv:1106.2751v4 [physics.gen-ph] 25 Feb 2012

Abstract
Everybody agrees that quantum physics is strange, and that the world view it implies is elusive. However, it is rarely considered that the theory might be opaque because the mathematical language it employs is inarticulate. Perhaps, if a mathematical language were constructed specifically to handle the theory's subject matter, the theory itself would be clarified. This article explores that possibility. It presents a simple but rigorous language for the description of dynamics, experiments, and experimental probabilities. This language is then used to answer a compelling question: What is the set of allowed experiments? If an experiment is allowed, then the sum of the probabilities of its outcomes must equal 1. If probabilities are non-additive, there are necessarily sets of outcomes whose total probability is not equal to 1. Such experiments are therefore not allowed. That being the case, in quantum physics, which experiments are allowed, and why are the rest disallowed? What prevents scientists from performing the disallowed experiments? By phrasing these questions within our mathematical language, we will uncover answers that are complete, conceptually simple, and clearly correct. This entails no magic or sleight of hand. To write a rigorous mathematical language, all unnecessary assumptions must be shed. In this way, the thicket of ad hoc assumptions that surrounds quantum physics will be cleared. Further, in developing the theory, the logical consequences of the necessary assumptions will be laid bare. Therefore, when a question can be phrased in such a language, one can reasonably expect a clear, simple answer. In this way we will dispel much of the mystery surrounding quantum measurements, and begin to understand why quantum probabilities have their peculiar representation as products of Hilbert space projection operators.

Contents

I. Introduction
   On the Question of Interpretations
   Two Models for Non-Determinism
   Two Models for Non-Additivity
II. Dynamics
   A. Parameters
   B. Parametrized Functions and Dynamic Sets
   C. Dynamic Spaces
   D. Special Sets
   E. Limits and Closed Sets
III. Experiments
   A. Shells
   B. E-Automata
      1. Environmental Shells
      2. The Automata Condition
      3. Unbiased Conditions
      4. E-Automata
   C. Ideal E-Automata
      1. Boolean E-Automata
      2. All-Reet E-Automata
      3. Ideal E-Automata
   D. Ideal Partitions
   E. Companionable Sets & Compatible Sets
IV. Probabilities
   A. Dynamic Probability Spaces
   B. T-Algebras and GPSs
      1. t and [t]
      2. (X_N, T_N, P_N)
      3. Convergence on a GPS
      4. T^S, T^S_N, and T_N
   C. Nearly Compatible Sets
   D. Deterministic & Herodotistic Spaces
      1. DPSs on Deterministic & Herodotistic Spaces
      2. DPSs in Deterministic Universes
      3. DPSs in Herodotistic Universes
V. Application to Quantum Measurement
   A. Preliminary Matters
      1. A Note On Paths
      2. Discretely Determined Partitions
      3. Interconnected Dynamic Sets
   B. Partitions of Unity for Quantum Systems
      1. The Conditional Case
      2. The Non-Conditional Case
   C. Maximal Quantum Systems
   D. Terminus & Exordium
Appendix A. σ-Additivity
Appendix B. Conditional Probabilities & Probability Dynamics
Appendix C. Invariance On Dynamic Sets and DPSs
   1. Invariance On Dynamic Sets
   2. Invariance on DPSs
Appendix D. Parameter Theory
References

I. INTRODUCTION

Our fascination with quantum physics has as much to do with its strangeness as its success. This strangeness can conjure contradictory responses: on the one hand, the sense that science has dug so deep as to touch upon profound metaphysical questions, and on the other, the sense that something is amiss, as science should strive to uncover simple explanations for seemingly strange phenomena. Fans of the first response will find little of interest in this paper, for it explores the second.

Let's start by noting that the mathematical language employed by quantum mechanics was not developed to investigate the types of problems that are of interest in that field. Hilbert spaces, for example, were developed to investigate analogies between certain function spaces & Euclidean spaces; they were only later adopted by physicists to describe quantum systems. This is in sharp contrast with classical mechanics; the development of differential calculus was, to a large extent, driven by the desire to describe the observed motion of bodies - the very question with which classical mechanics is concerned. In consequence, it is no exaggeration to say that if a question is well defined within classical mechanics, it can be described using calculus. In quantum mechanics, the mathematical language is far less articulate. For example, it leaves unclear what empirical properties a system must possess in order for the quantum description to apply. It's also unclear how, and even whether, the language of quantum mechanics can be used to describe the experiments employed to test the theory. It is similarly unclear whether and/or how quantum mechanics can be used to describe the world of our direct experience.

Such considerations lead to a simple question: To what extent are the difficulties of quantum theory due to limitations in our ability to phrase relevant questions in the theory's mathematical language? If such limitations do play a role, it would not represent a unique state of affairs. As most everyone knows, Zeno's paradoxes seemed to challenge some of our most basic notions of time and motion, until calculus resolved the paradoxes by creating a clear understanding of the continuum. Somewhat more remotely, the drawing up of annual calendars (and other activities founded on cyclic heavenly activity) was once imbued with a mystery well beyond our current awe of quantum physics. With the slow advance of the theory of numbers the mystery waned, until now such activity requires nothing more mysterious than straightforward arithmetic.

In this article, the question of whether a similar situation exists for quantum mechanics will be investigated. Three simple mathematical theories will be created, each addressing a basic aspect of dynamic systems. The resulting mathematical language will then be used to analyze quantum systems. If the analysis yields core characteristics of quantum theory, some portion of the theory's underpinnings will necessarily be revealed, and a measure of insight into the previously mentioned questions ought to be gained.

The mathematics will be constructed to speak to two of the most basic differences between experimental results in quantum mechanics and those in classical mechanics: in quantum mechanics experimental outcomes are non-deterministic & the experimental probabilities are not additive. Non-additivity in turn puts limitations on the kinds of experiments that can be performed. To see this, note that if an experiment has outcomes {X, X̄}, and another has outcomes {X1, X2, X̄}, then P(X) + P(X̄) = 1 = P(X1) + P(X2) + P(X̄), and so P(X) = P(X1) + P(X2); if P(X) ≠ P(X1) + P(X2), then one of these experiments cannot be performed (the only other possibility is that the probability of a given outcome depends on the make-up of the experiment as a whole, but this is not the case in quantum physics). Turning this around, if there is no set Y s.t. {X} ∪ Y and {X1, X2} ∪ Y are both sets of experimental outcomes, one may wonder whether we can expect P(X) to equal P(X1) + P(X2).

This line of reasoning provides some of the central questions to be answered in this article: What sorts of experiments can be performed? What limits scientists to only being able to perform these experiments? What does the set of allowed experiments imply about the nature of the experimental probabilities? How do these experimental probabilities correspond to quantum probabilities? We will seek simple, readily understandable answers to these questions.

Of the three mathematical theories to be constructed, the first will be a simple theory of dynamic systems that encompasses both deterministic and non-deterministic dynamics. Determinism refers to the particular case in which a complete knowledge of the present grants complete knowledge of the future; all other cases represent types of non-determinism. The theory of dynamic systems will then be utilized to construct a theory of experiments; dynamic systems will be used to describe both experiments as a whole, and the (sub)systems whose natures the experiments probe. The analysis will be somewhat analogous to that found in automata theory: when a (sub)system path is read into an experimental set-up, the set-up determines which outcome the path belongs to. By understanding how such processes can be constructed, we can obtain an understanding of what types of experiments are performable, reproducible, and have well defined outcomes.[1] Finally, building on this understanding, a probability theory will be given for collections of experiments by assuming that the usual rules of probability and statistics hold on the set of outcomes for any individual experiment & that if any two experiments in the collection share an outcome, then they agree on that outcome's probability.

These constructs will then be applied to quantum measurements, which are commonly described in terms of projection operators in a Hilbert space. It will be found that the nature and structure of these measurements can indeed be reduced to a clear, simple, rational understanding. At no point will there be any need to invoke anything the least bit strange, spooky, or beyond the realm of human understanding, nor any need to rely on any procedures that are utilized because they work even though we don't understand why. Because this paper only considers a fraction of all quantum phenomena, no claim can be made that all (or even most) quantum phenomena can yield to some simple, rational understanding. What we seek to establish is more modest - that at least some of our bafflement in the face of quantum physics is due to the manner with which we address the phenomena, rather than the nature of the phenomena themselves.
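The forced-additivity argument above can be checked with a few lines of arithmetic. This is a minimal sketch, and the probability values are hypothetical; the point is only that once both experiments total 1 and an outcome's probability is experiment-independent, P(X) is forced to equal P(X1) + P(X2).

```python
# If experiments {X, Xbar} and {X1, X2, Xbar} are both allowed, and an
# outcome's probability does not depend on which experiment it appears
# in, then P(X) is forced to equal P(X1) + P(X2).

def forced_P_X(P_X1, P_X2, P_Xbar):
    # Experiment 2 must have total probability 1: P(X1)+P(X2)+P(Xbar) = 1.
    assert abs(P_X1 + P_X2 + P_Xbar - 1.0) < 1e-12
    # Experiment 1 must also total 1: P(X) + P(Xbar) = 1, so...
    return 1.0 - P_Xbar

P_X1, P_X2, P_Xbar = 0.3, 0.2, 0.5   # hypothetical values
P_X = forced_P_X(P_X1, P_X2, P_Xbar)
assert abs(P_X - (P_X1 + P_X2)) < 1e-12   # additivity is forced
```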

On the Question of Interpretations

Though quantum interpretation is a large topic, in this article it will play a small role. However, because a theory cannot be fully understood without some notion of its possible interpretations, a word or two is in order before proceeding. The concept of an "interpretation" will be taken here as being more or less equivalent to the mathematical concept of a "model".[2] The theories in this article will have many models, and there will be no attempt to single out any one as being preferred. In this section, informal sample models will be given for the two properties of central interest: non-deterministic dynamics and non-additive probabilities. These should help provide a background understanding for the theories.

Two Models for Non-Determinism

As noted previously, non-determinism simply means that a complete knowledge of a system's current state does not imply a complete knowledge of all the system's future states. In the simplest model of non-determinism, at any given time the system is in a single state, but that state doesn't contain enough information to be able to deduce what all the future states will be. This will be referred to as type-i non-determinism (the i standing for individual, because the system is always in an individual state). Turning towards the past rather than the future, this is akin to the situation found in archeology and paleontology, in which greater knowledge of the present state does yield greater knowledge of the past, but you would not expect any amount of knowledge of the present to yield a complete knowledge of all history.

In type-m non-determinism, the system takes multiple paths simultaneously - every path it can take, it will take (the m in type-m stands for multiple). Quantum mechanics is often interpreted as displaying type-m non-determinism; for example, in a double slit experiment, the particle is viewed as traveling through both slits.

There are also mixed models. In non-deterministic automata, the input shows type-i non-determinism (in that a single character in the input string will not determine what the rest of the input must be), while the automaton has type-m non-determinism (as individual characters are read in, the automaton non-deterministically samples all possible transitions). Some quantum mechanical interpretations invert that view - the system being experimented on is seen as having type-m non-determinism, while the experimental set-up is seen as showing type-i non-determinism (it's often further assumed that if the system were to be deterministic, then the experimental set-up would also be deterministic, and more specifically, classical).

In a similar manner, decoherence often entails an unspoken assumption that a system displays type-m non-determinism if the non-diagonal elements of the density matrix do not vanish, and displays type-i non-determinism otherwise; in this sense, the non-determinism is considered to be type-i in so far as the probabilities are additive, and type-m in so far as the probabilities are non-additive. It's important to stress that type-i, type-m, and mixed models are not the only possible models for non-determinism; many-worlds interpretations, for example, provide yet another type of model.

Two Models for Non-Additivity

Two models will now be given for systems that display both non-determinism & non-additive probabilities.

Systems with type-i non-determinism can have non-additive probabilities if interactions with the measuring devices cannot be made arbitrarily small (e.g., if the fields mediating the interactions are quantized). As an example, imagine experimental outcome X and outcomes {X1, X2} s.t. X = X1 ∪ X2 and X1 ∩ X2 = ∅. The minimal interactions required to determine X will in general be different from the minimal interactions required to determine X1 or X2. These differing interactions will cause different deflections to the system paths, which can result in differences between the statistical likelihood of the outcome being X and the statistical likelihood of the outcome being X1 or X2. Such a state of affairs can be referred to as the intuitive model, because it allows our physical intuition to be applied.

For type-m non-determinism, one manner in which non-additive probabilities can appear is if the various paths that the system takes interfere with one another. In this case, for outcome X, all paths corresponding to X may interfere, whereas for {X1, X2} paths can only interfere if they correspond to the same Xi; the differences in interference then lead to different probabilities for the outcomes. This can be referred to as the orthodox model, because it is shared by many of the most widely accepted interpretations of quantum mechanics. Once again, these are just two sample models for the non-additivity; there are many others.
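The orthodox model's non-additivity can be sketched numerically. Assuming hypothetical two-slit amplitudes a1 and a2 (the specific values are invented for illustration), the paths making up X interfere with each other, while for {X1, X2} they do not, and the interference cross term makes the probabilities non-additive:

```python
# Hypothetical amplitudes for the two slits; only the non-additivity
# they produce matters here.
import cmath

a1 = cmath.exp(1j * 0.0) / 2    # amplitude via slit 1 (invented)
a2 = cmath.exp(1j * 2.0) / 2    # amplitude via slit 2 (invented)

# Outcome X: both paths may interfere, so amplitudes add first.
P_X = abs(a1 + a2) ** 2
# Outcomes {X1, X2}: paths interfere only within the same outcome.
P_X1 = abs(a1) ** 2
P_X2 = abs(a2) ** 2

# The cross term 2*Re(a1*conj(a2)) makes P(X) != P(X1) + P(X2).
assert abs(P_X - (P_X1 + P_X2)) > 0.1
```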

II. DYNAMICS

To begin, we require a theory of dynamic systems. Existing theories tend to make assumptions that are violated by quantum systems, so here a simple, lightweight theory will be presented; a theory in which all excess assumptions will be stripped away.

A. Parameters

Dynamic systems are systems that can change over time; they are parametrized by time. In a sense, the one feature that all dynamic systems have in common is that they are parametrized. We therefore begin by quickly reviewing the concept of a parameter.

Parameters come in a variety of forms. For example, some systems have discrete parameters, while others have continuous parameters. Nonetheless, all parameters share several basic features. Specifically, a parameter must be totally ordered, and must support addition. Thus, parameters are structures with signature (T, <, +, 0), where < is a total ordering, + is the usual addition function, and 0 is the additive identity. It is shown in Appendix D that this, together with the requirement that there be an element greater than 0, yields the general concept of a parameter. The set of Real numbers are parameters in this sense, as are the Integers, the Rationals, and all infinite ordinals.

This is, however, a little too general. First, in this article we will only be interested in parameters whose values are finite. Second, for reasons of analysis, we will only be interested in parameters that are Cauchy convergent. These two requirements are equivalent to adding a completeness axiom; this axiom states that all subsets of T that are bounded from above have a least upper bound. This limits the models to only four, classified by whether the parameter is discrete or continuous, and whether or not it is bounded from below by 0.

These four types of parameters are closely related to the canonical number systems, and can be readily constructed from them by introducing the parameter value 1, and multiplication by a number. If the parameter is discrete, assign 1 to be the successor to 0; if it's continuous, choose 1 to be any parameter value greater than 0. The choice of 1 sets the scale (e.g., 1 second); it is because parameters have a scale that multiplication is not defined on parameters. If T is a parameter and τ ∈ T, τ added to itself n times will be denoted nτ (for example, 3τ ≡ τ + τ + τ), and 0τ ≡ 0. As shown in Appendix D, constructing the four types of parameters from numbers is now straightforward.

For any discrete, bounded-from-below parameter, (T, <, +, 0), the following will hold: T = {n1 : n ∈ N}, n1 + m1 = (n + m)1, and n1 > m1 iff n > m. Discrete, unbounded parameters are similar, but with the Integers replacing the natural numbers. For continuous, unbounded parameters, take 1 to be any positive value, and replace the natural numbers with the Reals (multiplication by a real number is defined in Appendix D). Similarly, for continuous, bounded parameters, replace the natural numbers with the non-negative Reals. This close correspondence between parameters and numbers is why the two concepts are often treated interchangeably.

We conclude this section by reviewing some standard notation. First, the notation for

parameter intervals:

Definition 1. If T is a parameter and τ1, τ2 ∈ T:
[τ1, τ2] ≡ {τ ∈ T : τ1 ≤ τ ≤ τ2}
(τ1, τ2) ≡ {τ ∈ T : τ1 < τ < τ2}
and similarly for [τ1, τ2) and (τ1, τ2].
[τ1, ∞] ≡ {τ ∈ T : τ1 ≤ τ}
[−∞, τ1] ≡ {τ ∈ T : τ ≤ τ1}
and similarly for (τ1, ∞), etc. (Note that [τ1, ∞) = [τ1, ∞].)

Next, functions for least upper bound and greatest lower bound:

Definition 2. If T is a parameter and Γ ⊆ T then: if Γ is bounded from above, lub(Γ) is Γ's least upper bound, and if Γ is bounded from below, glb(Γ) is Γ's greatest lower bound. Min(Γ) is equal to glb(Γ) if Γ is bounded from below, and −∞ otherwise. Similarly, Max(Γ) is equal to lub(Γ) if Γ is bounded from above, and ∞ otherwise.

And lastly, the definition of subtraction:

Definition 3. If T is a parameter and τ, σ ∈ T:
τ − σ ≡ 0 if Min(T) = 0 and τ < σ; otherwise, τ − σ ≡ the element ρ of T for which ρ + σ = τ.

It follows from the numeric constructions outlined above that τ − σ always yields a unique element of T.
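A discrete, bounded-from-below parameter can be sketched in a few lines of Python, following the construction above: values are n·1 for n ∈ N, with addition, ordering, and the truncated subtraction of Definition 3. The class name and representation are illustrative, not from the paper.

```python
# A minimal model of a discrete parameter bounded from below by 0.
class DiscreteParam:
    def __init__(self, n):
        assert n >= 0          # bounded from below by 0
        self.n = n             # the value n*1, with "1" fixing the scale

    def __add__(self, other):  # n1 + m1 = (n + m)1
        return DiscreteParam(self.n + other.n)

    def __lt__(self, other):   # n1 < m1 iff n < m
        return self.n < other.n

    def __sub__(self, other):  # Definition 3: subtraction truncated at 0
        if self.n < other.n:   # Min(T) = 0 and tau < sigma
            return DiscreteParam(0)
        return DiscreteParam(self.n - other.n)

    def __eq__(self, other):
        return self.n == other.n

tau, sigma = DiscreteParam(2), DiscreteParam(5)
assert tau + sigma == DiscreteParam(7)
assert tau - sigma == DiscreteParam(0)   # truncated at the lower bound
assert (sigma - tau) + tau == sigma      # rho + sigma = tau case of Def 3
```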

B. Parametrized Functions and Dynamic Sets

Parametrized functions and dynamic sets are the rudimentary concepts on which all else will be built. This section introduces them, along with their notational language. We start with parametrized functions.

Definition 4. A parametrized function is a function whose domain is a parameter. If f is a parametrized function, and [x1, x2] is an interval of Dom(f), then f[x1, x2] is f restricted to domain [x1, x2] (values of x1 = −∞ and/or x2 = ∞ are allowed).

We define one operation on parametrized functions: concatenation.

Definition 5. If f and g are parametrized functions, Dom(f) = Dom(g), and f(σ) = g(σ), then f[x1, σ] ∗ g[σ, x2] is the function on domain [x1, x2] s.t.
f[x1, σ] ∗ g[σ, x2](τ) = f(τ) if τ ∈ [x1, σ]
f[x1, σ] ∗ g[σ, x2](τ) = g(τ) if τ ∈ [σ, x2]

That is all that's needed for parametrized functions. They will now be used to define dynamic sets.

Definition 6. A dynamic set, S, is any non-empty set of parametrized functions s.t. all elements share the same parameter.
If f ∈ S then f can be referred to as a path, and will generally be written p̄.
With p̄ ∈ S, T_S ≡ Dom(p̄) (that is, T_S is the parameter that all elements of S share).
P_S ≡ ∪_{p̄ ∈ S} Ran(p̄); the elements of P_S are states.
S[x1, x2] ≡ {p̄[x1, x2] : p̄ ∈ S}
For τ ∈ T_S, S(τ) ≡ {p ∈ P_S : for some p̄ ∈ S, p̄(τ) = p} (that is, S(τ) is the set of possible states at time τ).
Uni(S) ≡ {(τ, p) ∈ T_S × P_S : p ∈ S(τ)} (Uni(S) is the universe of S - the set of all possible time-state pairs).

In the above definition, time was occasionally mentioned. In what follows, we will refer only to the parameter, and not time. This is because time has grown into an overloaded concept. For example, in relativity the time measured on a clock is a function of the path the clock takes. Time measured by a clock is generally referred to as the proper time. On the other hand, in relativity theory a particle state is generally taken to be a four dimensional vector, x^μ, and coordinates are often chosen so that the 0th element parametrizes the dynamics. This coordinate is generally referred to as the time coordinate, and this notion of time is referred to as coordinate time. For such single particle systems, the elements of Uni(S) will be of the form (τ, x^μ), and for coordinates that have a time coordinate, the 0th element of x^μ will always equal τ. Since coordinates may be used to describe all paths, and proper time is path dependent, it follows that in such a case τ may not equal the proper time. In this paper, all such complications will be shrugged off. First, we will make no attempt to map states onto coordinates. Moreover, there will be a preference to reserve the term time for the quantity that is measured by clocks. Since this may or may not equal the

quantity used to parametrize the system dynamics (depending on the path the clock takes), we retire the term time and speak only of parameters.

Back to dynamic sets. The concatenation operation can be extended to sets of path-segments.

Definition 7. If S is a dynamic set and A and B are sets of partial paths of S, then
A ∗ B ≡ {p̄1[x1, σ] ∗ p̄2[σ, x2] : p̄1[x1, σ] ∈ A, p̄2[σ, x2] ∈ B, and p̄1(σ) = p̄2(σ)}

The following notation for various sets of path-segments will prove quite useful.

Definition 8. If S is a dynamic set, (τ, p), (τ1, p1), (τ2, p2) ∈ Uni(S), and τ1 ≤ τ2:
S_→(τ,p) ≡ {p̄[−∞, τ] : p̄ ∈ S and p̄(τ) = p}
S_(τ,p)→ ≡ {p̄[τ, ∞] : p̄ ∈ S and p̄(τ) = p}
S_(τ1,p1)(τ2,p2) ≡ {p̄[τ1, τ2] : p̄ ∈ S, p̄(τ1) = p1, and p̄(τ2) = p2}
If S is a dynamic set and p, p1, p2 ∈ P_S:
S_→p ≡ {p̄[−∞, τ] : p̄ ∈ S, τ ∈ T_S, and p̄(τ) = p}
S_p→ ≡ {p̄[τ, ∞] : p̄ ∈ S, τ ∈ T_S, and p̄(τ) = p}
S_p1→p2 ≡ {p̄[τ1, τ2] : p̄ ∈ S, τ1, τ2 ∈ T_S, τ1 ≤ τ2, p̄(τ1) = p1, and p̄(τ2) = p2}
If S is a dynamic set and Y, Z ⊆ P_S:
S_Y→Z ≡ ∪_{p ∈ Y, p′ ∈ Z} S_p→p′
...and similarly for S_→Z, S_Y→, S_(τ,Y)→(τ′,Z), etc.

This notation may be extended as needed. For example, S_p1→(τ2,p2)→p3 is the set of all paths in S that pass through p1, then through (τ2, p2), then through p3.

One of the most basic properties of a dynamic system is whether or not its dynamics can change over time. A dynamic set is homogeneous if its dynamics are the same regardless of when a state occurs. More formally:

Definition 9. A dynamic set, S, is homogeneous if for all (τ1, p), (τ2, p) ∈ Uni(S) with τ1 ≤ τ2, p̄ ∈ S_(τ1,p)→ iff there exists a p̄′ ∈ S_(τ2,p)→ s.t. for all τ ∈ [τ1, ∞], p̄(τ) = p̄′(τ + (τ2 − τ1)).

Note that if T_S is bounded from below, S can still be homogeneous; it would simply mean that the paths running through (τ1, p) and those running through (τ2, p) only differ by a shift & by the fact that the initial parts of the paths running through (τ1, p) are cut off.

A related property is whether or not a given state, or set of states, can occur at any time.

Definition 10. If S is a dynamic set, p ∈ P_S is homogeneously realized if for every τ ∈ T_S, p ∈ S(τ). A ⊆ P_S is homogeneously realized if every p ∈ A is.
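To make Definitions 5-8 concrete, here is a small Python sketch for a discrete parameter, where a path is simply a tuple of states indexed by 0..n. The state names and the example set S are invented for illustration: concatenation glues two segments that agree at the junction, and the analogue of S_p→ collects the tail segments of paths passing through state p.

```python
# Paths over a discrete parameter, represented as tuples of states.

def concat(f, g, sigma):
    # Definition 5: defined only when the two paths agree at sigma.
    assert f[sigma] == g[sigma]
    return f[:sigma] + g[sigma:]

def tails_through(S, p):
    # Analogue of S_{p->}: tail segments starting where a path hits p.
    return {pbar[t:] for pbar in S for t in range(len(pbar)) if pbar[t] == p}

S = {('a', 'b', 'c'), ('a', 'b', 'd'), ('e', 'b', 'c')}

f = ('a', 'b', 'c')
g = ('e', 'b', 'd')
assert concat(f, g, 1) == ('a', 'b', 'd')   # glued at the shared state 'b'

assert tails_through(S, 'b') == {('b', 'c'), ('b', 'd')}
```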

C. Dynamic Spaces

Dynamic sets embrace quite a broad concept of dynamics. This can make them difficult to work with, and so it is often helpful to make further assumptions about the system dynamics. A common assumption for closed systems is that the system's possible future paths are determined entirely by its current state. This notion is captured by the following type of dynamic set:

Definition 11. A dynamic space, D, is a dynamic set s.t. if p̄, p̄′ ∈ D, τ ∈ T_D, and p̄(τ) = p̄′(τ), then p̄[−∞, τ] ∗ p̄′[τ, ∞] ∈ D.

Thus, dynamic spaces are closed under concatenation. This will prove to be an enormous simplification. Closed systems are generally assumed to have this property, and so in what follows, closed systems will always be assumed to be dynamic spaces. (Note that a system being experimented on interacts with experimental equipment, and so is not closed. As a result, such a system might not be a dynamic space.)

Definition 12. If D is a dynamic space, D_p1→p2 may be written p1 → p2 (for example, D_p1→(τ2,p2)→p3 may be written p1 → (τ2, p2) → p3). Similarly, D_→p may be written → p, D_p→ may be written p →, etc.

If D is a dynamic space then, for example, D_p1→(τ2,p2)→p3 = D_p1→(τ2,p2) ∗ D_(τ2,p2)→p3; if, on the other hand, S is simply a dynamic set, we could only assert that S_p1→(τ2,p2)→p3 ⊆ S_p1→(τ2,p2) ∗ S_(τ2,p2)→p3. This is why, for dynamic spaces, we may relax the notation and simply refer to p1 → (τ2, p2) → p3, while for dynamic sets we need to be clear about whether we mean S_p1→(τ2,p2) ∗ S_(τ2,p2)→p3 or S_p1→(τ2,p2)→p3.

For dynamic spaces the use of outer arrows, such as → (τ1, p1) → p2, may be extended to arbitrary sets of partial paths:

Definition 13. If D is a dynamic space, and A is a set of partial paths s.t. if p̄[x1, x2] ∈ A then p̄[x1, x2] ∈ D[x1, x2]:
p̄ ∈ → A → if p̄ ∈ D and for some p̄′[x1, x2] ∈ A, p̄[x1, x2] = p̄′[x1, x2].
p̄[x, ∞] ∈ A → if p̄[x, ∞] ∈ D[x, ∞] and for some p̄′[x, x1] ∈ A, p̄[x, x1] = p̄′[x, x1].
p̄[−∞, x] ∈ → A if p̄[−∞, x] ∈ D[−∞, x] and for some p̄′[x0, x] ∈ A, p̄[x0, x] = p̄′[x0, x].
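The dynamic-space condition of Definition 11 can be checked mechanically on finite, discrete dynamic sets: whenever two paths agree at some τ, the path that follows one up to τ and the other afterwards must also belong to the set. The example sets below are invented.

```python
# Check closure under concatenation (Definition 11) for finite sets of
# equal-length discrete paths, represented as tuples of states.

def is_dynamic_space(S):
    n = len(next(iter(S)))
    for p in S:
        for q in S:
            for tau in range(n):
                if p[tau] == q[tau] and p[:tau] + q[tau:] not in S:
                    return False
    return True

# Closed: all four start/end combinations through 'b' are present.
D = {('a', 'b', 'c'), ('a', 'b', 'd'), ('e', 'b', 'c'), ('e', 'b', 'd')}
assert is_dynamic_space(D)

# Not closed: ('a','b','d') is a valid splice at 'b' but is missing.
S = {('a', 'b', 'c'), ('e', 'b', 'd')}
assert not is_dynamic_space(S)
```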

D. Special Sets

Although the notation introduced in Secs. II B and II C will be sufficient for nearly all circumstances, two straightforward additions will prove useful in the discussion of experiments. First, it will be useful to isolate the subset of S_X→Y consisting of the path-segments that don't re-enter X after their start; it will similarly be useful to isolate the path-segments that don't enter Y until their end.

Definition 14. If S is a dynamic set and X, Y, Z ⊆ P_S:
S_X→Y∖Z ≡ {p̄[τ1, τ2] ∈ S_X→Y : for τ ∈ [τ1, τ2), p̄(τ) ∉ Z}
S_Z∖X→Y ≡ {p̄[τ1, τ2] ∈ S_X→Y : for τ ∈ (τ1, τ2], p̄(τ) ∉ Z}
S_X→Y∖Y is then the set of path segments from S that start at X and end at Y, but do not enter Y before the end of the segment. Similarly for S_X∖X→Y.

S_X/Y, to be introduced momentarily, bears a resemblance to S_X→Y∖Y, but overcomes a difficulty that occurs in the continuum:

Definition 15. S_X/Y ≡ {p̄[τ1, τ2] : p̄[τ1, ∞] ∈ S_X→, Ran(p̄[τ1, ∞]) ∩ Y ≠ ∅, and τ2 = glb(p̄[τ1, ∞]⁻¹[Y])}

In the above definition, τ2 is either the first time Y occurs in p̄[τ1, ∞] or, if the first time can't be obtained in the continuum, the moment before Y first appears.

Definition 16. If S is a dynamic set and X, Y ⊆ P_S, (X/Y) ≡ {p ∈ P_S : for some p̄[τ1, τ2] ∈ S_X/Y, p = p̄(τ2)}

Theorem 17. If D is a homogeneous dynamic space then D_X/Y = D_X→(X/Y)∖Y.

Proof. If p̄[τ1, τ2] ∈ D_X/Y then p̄(τ1) ∈ X, p̄(τ2) ∈ (X/Y), and for all τ ∈ [τ1, τ2), p̄(τ) ∉ Y, so D_X/Y ⊆ D_X→(X/Y)∖Y. (Note that this doesn't require D to be homogeneous, or a dynamic space.)

It remains to show that D_X→(X/Y)∖Y ⊆ D_X/Y. Take p̄[τ1, τ2] ∈ D_X→(X/Y)∖Y, p ≡ p̄(τ2) ∈ (X/Y). By the definition of (X/Y), there's a p̄′[τ3, ∞] with p̄′(τ3) = p s.t. τ3 = glb(p̄′[τ3, ∞]⁻¹[Y]). By homogeneity, there's a p̄′′ ∈ D s.t. for all τ ∈ T_D, p̄′′(τ) = p̄′(τ + (τ3 − τ2)) (assuming τ3 ≥ τ2 or T_D is unbounded from below; otherwise p̄′′(τ + (τ2 − τ3)) = p̄′(τ) may be asserted for all τ ∈ T_D). By considering p̄[τ1, τ2] ∗ p̄′′[τ2, ∞], it follows that p̄[τ1, τ2] ∈ D_X/Y.
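In discrete time, the glb in Definition 15 is simply the first index at which the path meets Y, so S_X/Y and (X/Y) can be computed directly. The example set below is invented for illustration.

```python
# A discrete-time sketch of S_{X/Y} and (X/Y) (Definitions 15 and 16):
# cut each path starting in X at the first index where it meets Y.

def S_X_over_Y(S, X, Y):
    segs = set()
    for pbar in S:
        if pbar[0] in X:
            hits = [t for t, p in enumerate(pbar) if p in Y]
            if hits:                         # Ran(pbar) meets Y
                segs.add(pbar[:hits[0] + 1])  # cut at the first hit
    return segs

def X_over_Y(S, X, Y):
    # (X/Y): the set of endpoint states of the segments in S_{X/Y}.
    return {seg[-1] for seg in S_X_over_Y(S, X, Y)}

S = {('a', 'b', 'y1', 'y2'), ('a', 'c', 'c', 'y2'), ('b', 'y1', 'a', 'a')}
X, Y = {'a'}, {'y1', 'y2'}
assert S_X_over_Y(S, X, Y) == {('a', 'b', 'y1'), ('a', 'c', 'c', 'y2')}
assert X_over_Y(S, X, Y) == {'y1', 'y2'}
```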

E. Limits and Closed Sets

In this final section, system dynamics will be used to define limits. Four ways in which a point p ∈ P_S can be a limit point of a set X ⊆ P_S at τ will be defined. They correspond to: if the system is in state p at τ then X must be about to occur, if the system is in state p at τ then X might be about to occur, if the system is in state p at τ then X might have just happened, and if the system is in state p at τ then X must have just happened. More formally:

Definition 18. If S is a dynamic set, p ∈ P_S, τ ∈ T_S, and X is a non-empty subset of P_S then:
p ∈ lim→_τ X if for every p̄[τ, ∞] ∈ S_(τ,p)→ and every σ > τ, there's a σ′ ∈ (τ, σ) s.t. p̄(σ′) ∈ X
p ∈ lim>_τ X if for every σ > τ, there's a p̄[τ, ∞] ∈ S_(τ,p)→ and a σ′ ∈ (τ, σ) s.t. p̄(σ′) ∈ X
p ∈ lim<_τ X if τ > Min(T_S) and for every σ < τ, there's a p̄[−∞, τ] ∈ S_→(τ,p) and a σ′ ∈ (σ, τ) s.t. p̄(σ′) ∈ X
p ∈ lim←_τ X if τ > Min(T_S) and for every p̄[−∞, τ] ∈ S_→(τ,p) and every σ < τ, there's a σ′ ∈ (σ, τ) s.t. p̄(σ′) ∈ X

If S is homogeneous, then the subscript τ may be dropped, and we may simply write p ∈ lim→ X, etc.

Definition 19. If S is a dynamic set, X ⊆ P_S, and τ ∈ T_S then
X̄→_τ ≡ X ∪ lim→_τ X, X̄>_τ ≡ X ∪ lim>_τ X, X̄<_τ ≡ X ∪ lim<_τ X, and X̄←_τ ≡ X ∪ lim←_τ X.

Once again, if S is homogeneous, then the subscript τ may be dropped, and one may simply write X̄→, etc. A set is closed if it contains all its limit points. The following theorem lays out conditions under which X̄→, etc., are closed.

Theorem 20. 1) If S is a homogeneous dynamic set, and X ⊆ P_S, then X̄→→ = X̄→ and X̄←← = X̄←. 2) If D is a homogeneous dynamic space, and X ⊆ P_D, then X̄>> = X̄> and X̄<< = X̄<.

Proof. If T_D is discrete, this holds because then, for all X ⊆ P_D and each of the four limits, lim X = ∅. Take T_D to be continuous.

1) Assume p ∈ lim→ X̄→ and take p̄[τ, ∞] ∈ S_(τ,p)→ & σ > τ. There's a σ′ ∈ (τ, τ + ½(σ − τ)) s.t. p̄(σ′) ∈ X̄→. Because p̄(σ′) ∈ X̄→, either p̄(σ′) ∈ X or p̄(σ′) ∈ lim→ X, so there's a σ′′ ∈ [σ′, σ′ + ½(σ − τ)) s.t. p̄(σ′′) ∈ X. Therefore there's a σ′′ ∈ (τ, σ) s.t. p̄(σ′′) ∈ X, and so p ∈ lim→ X, and so p ∈ X̄→. ← is similar.

2) Assume p ∈ lim> X̄>. For any σ > τ there's a p̄[τ, ∞] ∈ S_(τ,p)→ and a σ′ ∈ (τ, τ + ½(σ − τ)) s.t. p̄(σ′) ∈ X̄>. Because p̄(σ′) ∈ X̄>, either p̄(σ′) ∈ X or p̄(σ′) ∈ lim> X, so there's a p̄′[σ′, ∞] ∈ S_(σ′,p̄(σ′))→ and a σ′′ ∈ [σ′, σ′ + ½(σ − τ)) s.t. p̄′(σ′′) ∈ X. Take p̄′′[τ, ∞] = p̄[τ, σ′] ∗ p̄′[σ′, ∞]. Then p̄′′ ∈ S_(τ,p)→, σ′′ ∈ (τ, σ), and p̄′′(σ′′) ∈ X, so p ∈ lim> X, and so p ∈ X̄>. < is similar.

Under the conditions of Theorem 20, the collection of all closed sets of type > is closed under intersections and finite unions. They therefore form a topology on P_D; the lower-limit topology. The closed sets of type < also form a topology; the upper-limit topology. The closed sets of type →, and those of type ←, are closed under intersection, but not union. They therefore do not form topologies. That is because these limits are fairly strict, and so cannot always determine whether or not some point p is local to some set X.

Theorem 21. If S is a homogeneous dynamic set, and X, Y ⊆ P_S, then (X/Y) ⊆ Ȳ>.

Proof. By the definition of (X/Y), p ∈ (X/Y) iff for some p̄ ∈ S and τ1, τ2 ∈ T_S (τ1 ≤ τ2): p̄(τ2) = p, p̄(τ1) ∈ X, [τ1, τ2) ∩ p̄⁻¹[Y] = ∅, and for all σ > τ2, [τ2, σ) ∩ p̄⁻¹[Y] ≠ ∅. In this case, p̄[τ2, ∞] ∈ S_(τ2,p)→ and for all σ > τ2 there's a σ′ ∈ [τ2, σ) s.t. p̄(σ′) ∈ Y, so either p ∈ Y or p ∈ lim> Y.

III. EXPERIMENTS

A. Shells

In this part, experimentation on dynamic systems will be formalized. As mentioned in the Introduction, this will be analogous to creating a formal theory of computation via automata. In computation theory, sequences of characters are read into an automata, and the automata determines whether they are sentences in a given language. One of the goals of the theory is to determine what languages automata are capable of deciding. Experiments are similar; system paths are read into an experimental set-up, which then determines which outcomes these paths belong to. Our goal will be to determine what sets of outcomes experiments are capable of deciding. In this rst section, we will not divide the experiment into the system being experimented on, and the environment containing the experimental apparatus; for now, we consider the closed system that encompasses both. The dynamic space of this closed system will be used to formalize some of the external properties of experiments; namely, that experiments are re-runnable, they have a clearly dened start, once started they must complete, and once complete they remember that the experiment took place. Denition 22. A shell is a triple, (D, I, F ), where D is a dynamic space and I, F PD are the sets of initial and nal states. These must satisfy: 1) D is homogeneous 2) I is homogeneously realized 3) I = (I I F ) (That is, I = DI I F ) 4) F = (I I F ) In the above denition, I represents the set of initial states, or start states; when the system enters into one of these states, the experiment starts. F represents the set of nal states; when the system enters into one of these states, the experiment has ended. Axioms (1) and (2) ensure that the experiment is reproducible. Axiom (3) ensures that once the experiment begins, it must end. Axiom (4) ensures that the system can only enter a nal state via the experiment. A note of explanation may be in order for this nal axiom. 
D is assumed to be a closed system encompassing everything that has bearing on the experiment, including the person performing the experiment. As a result, all recording equipment is considered part of D, including the experimenter's memory. So if the final axiom were violated, when the final state is reached, you wouldn't be able to remember whether or not the experiment had taken place.

In (3) and (4), I I F is used rather than I F in order to ensure that there's a clearly defined space in which the experiment takes place. For example, F = I F would allow for a path, p[−∞, 0] → F, s.t. for all n ∈ N+, p(−1/(2n+1)) ∈ I and p(−1/2n) ∈ F; in this case, the F state at τ = 0 cannot be paired with any particular experimental run. Similarly, I = I F would allow a path to cross I several times before crossing F; in this case it would not be clear which crossing of I represented the start of the experiment.

An experiment is considered to be in progress while the space is in a path segment that runs from I to F. The set of states in those path segments constitutes the shell interior, or more formally:

Definition 23. If Z = (D, I, F) is a shell, Int(Z) = {p ∈ PD : I p F ≠ ∅} is the shell interior.

Theorem 24. If Z = (D, I, F) is a shell then Int(Z) = Ran(DI/F) = Ran(I (I/F) F). (The abbreviation Ran(A) stands for the set of states ⋃ Ran(p[x1, x2]) taken over all p[x1, x2] ∈ A.)

Proof. By Thm 17 and shell axiom 1, DI/F = I (I/F) F. If p ∈ Ran(I (I/F) F) then for some p[τ1, τ2] ∈ I (I/F) F, some τ3 ∈ [τ1, τ2], p(τ3) = p, so p[τ1, τ3] ∈ I p F, and so p ∈ Int(Z). If p ∈ Int(Z) then there's a p[τ1, τ2] ∈ I p F. Since I = (I I F) there must be a p′[τ1, τ3] ∈ DI/F s.t. p′[τ1, τ2] = p[τ1, τ2], so p ∈ Ran(DI/F).

The following items will prove quite useful:

Definition 25. If Z = (D, I, F) is a shell, Dom(Z) ≡ F ∪ Int(Z) is the shell domain.
If A ⊆ Dom(Z), A ≡ {p[0, τ] ∈ I I A} = I (0, I) A
If A ⊆ F, A ≡ {p[0, τ] ∈ DI/F : for some p′[0, τ] ∈ A, p[0, τ] = p′[0, τ]}

Given that our current state is in A, A tells us what has happened in the experiment; because shell dynamics are homogeneous, and I is homogeneously realized, it is sufficient to consider only paths that start at τ0 = 0. Given that the final state is in A, A captures what occurred during the experiment. The shell axioms validate these interpretations of A and A.
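The shell axioms lend themselves to a toy, discrete-time illustration. In the sketch below, states are strings, a "path" is a finite tuple of states, and the two checks mirror the informal content of axioms (3) and (4): every run that starts must finish, and a final state can only be reached through an initial one. All names here (`Shell`, `starts_complete`, etc.) are ours, not the text's.

```python
# Toy, discrete-time illustration of a shell (Definition 22).  States are
# strings and a "path" is a finite tuple of states; the class and method
# names are ours, not the text's.

class Shell:
    def __init__(self, paths, initial, final):
        self.paths = paths      # finite stand-in for the dynamic space D
        self.I = initial        # initial ("start") states
        self.F = final          # final states

    def starts_complete(self):
        # Informal reading of axiom 3: once a path enters I, it later reaches F.
        return all(
            any(q in self.F for q in p[t + 1:])
            for p in self.paths
            for t, s in enumerate(p) if s in self.I)

    def final_only_via_initial(self):
        # Informal reading of axiom 4: F can only be reached after passing I.
        return all(
            any(q in self.I for q in p[:t])
            for p in self.paths
            for t, s in enumerate(p) if s in self.F)

z = Shell({("idle", "start", "run", "done"), ("idle", "start", "done")},
          initial={"start"}, final={"done"})
print(z.starts_complete(), z.final_only_via_initial())  # True True
```

A path set containing a run that starts but never reaches a final state would fail the first check, which is the discrete analogue of a shell whose experiment need not complete.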

B. E-Automata

Further structure will now be added to shells, starting with dividing the shell domain into a system and its environment.

1. Environmental Shells

In the following definition, × will refer to the Cartesian product, and for an n-ary relation R on A1 × ... × An, Pi(R) refers to the set of a ∈ Ai s.t. for some r ∈ R, a = ri.

Definition 26. An environmental shell, Z = (D, I, F), is a shell together with three sets, SZ, EInt(Z), and EF, satisfying
1) (SZ × EInt(Z)) ∩ PD = Int(Z)
2) (SZ × EF) ∩ PD = F
3) SZ = P1(Dom(Z)), EInt(Z) = P2(Int(Z)), and EF = P2(F)
SZ is the system and EZ ≡ EInt(Z) ∪ EF is the environment.

Informally speaking, it's assumed that the system (SZ) is being observed, measured, recorded, etc., and that any observers, measuring devices, recording equipment, etc., are in the environment. Therefore, it is components in the environment which decide whether the experiment is complete; this is why a subset of the environmental states, EF, determines whether the shell is in F.

If p[τ1, τ2] is a path segment from I I F, p[τ1, τ2] S is the system component of the path segment, and p[τ1, τ2] E is the environmental portion. More formally:

Definition 27. For environmental shell Z: If s[τ1, τ2] : [τ1, τ2] → SZ, e[τ1, τ2] : [τ1, τ2] → EZ, and p[τ1, τ2] = (s[τ1, τ2], e[τ1, τ2]) then p[τ1, τ2] S ≡ s[τ1, τ2] and p[τ1, τ2] E ≡ e[τ1, τ2]. If A is any set of shell domain path segments, A S ≡ {p[τ1, τ2] S : p[τ1, τ2] ∈ A} and A E ≡ {p[τ1, τ2] E : p[τ1, τ2] ∈ A}.

This can be applied to A and A to extract the system information:

Definition 28. If Z is an environmental shell:
If X ⊆ EF, OX ⊆ SZ
If A ⊆ Int(Z):
A ≡ {s[0, τ1] : for some p[0, τ2] ∈ F, p(τ1) ∈ A & p[0, τ1] S = s[0, τ1]}
A ≡ {s[τ1, τ2] : for some p[0, τ2] ∈ F, p(τ1) ∈ A & p[τ1, τ2] S = s[τ1, τ2]}

If an experiment completes with the environment in X, OX tells us what happened in the system during the experimental run (the O stands for outcome). OEF may be abbreviated OF. A and A give insight into what's happening to the system in the shell interior: A gives the system paths from I to A and A gives the system paths from A to (I/F). For convenience, the union over X ⊆ EInt(Z) may be written without the subscript.

Theorem 29. OF = (I/F)

Proof. It's clear that OF ⊆ (I/F). Assume s[0, τ] ∈ (I/F). For some p[0, τ2] ∈ F, p(τ1) ∈ (I/F) and p[0, τ1] S = s[0, τ1]. Since p ∈ (I/F) and D is homogeneous there must be a p′[τ1, ∞] ∈ p s.t. for all τ2 > τ1 there's a τ ∈ [τ1, τ2) s.t. p′(τ) ∈ F. Therefore p[0, τ1] p′[τ1, ∞] ∈ I and p[0, τ1] ∈ DI/F; this means that p[0, τ1] ∈ F, and so s[0, τ1] ∈ OF.

2. The Automata Condition

The automata condition demands that an experiment terminates based solely on what has transpired in the system. In particular, it specifies that if p1[0, τ1], p2[0, τ2] ∈ F, p1[0, τ] ∈ F, and p1[0, τ] S = p2[0, τ] S then p2[0, τ] ∈ F. Given shell axiom (3), this may be rephrased as follows:

Definition 30. An environmental shell satisfies the automata condition if for every s1[0, τ1], s2[0, τ2] ∈ OF s.t. τ2 > τ1, s1[0, τ1] ≠ s2[0, τ1]

The constraints this places on the closed system's dynamics are summarized by the following theorem.

Theorem 31. The following assertions on environmental shell Z are equivalent:
1) Z satisfies the automata condition

2) For p Int(Z ), if p

OF = then p F (I/F ) = OF =

3) (I/F ) F and Int(Z )(I/F )

Proof. 1 2: In order for the automata condition to hold, If p Int(Z ) and p then either p F p F . 2 3: Since OF = (I/F ) , (I/F ) p / F , and so p S then p 1 (1 ) Int(Z )(I/F ) OF = . OF = , and so (I/F ) F .

(I/F ), or all paths leaving p must immediately enter F ; either way,

If p Int(Z ) (I/F ) then the shell is not in F and is not about to transition into F , so 3 1: Take p 1 [0, 1], p 2 [0, 2 ] F and 2 1 . If p 1 [0, 1] S = p 2 [0, 1 ]

p 2 (1 ) (I/F ) (since p 1 (1 ) (I/F ) and 2 (1 ) = and so p

(I/F ) = ). Since p 2 (1 ) (I/F ) , p 2 (1 ) F , and so p 2 [0, 1 ] F ,

and for all 3 > 1 p 2 [0, 3 ] / F . Therefore 2 = 1 . So if p 1 [0, 1 ], p 2 [0, 2 ] F and 2 > 1 then p 1 [0, 1 ] S = p 2 [0, 1 ] S
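In discrete time, the automata condition of Definition 30 says exactly that the set of completed system runs is prefix-free: no completed run is a proper extension of another. A minimal sketch, with finite tuples standing in for the paths of OF (the function and variable names are ours):

```python
# Definition 30 in discrete time: the set of outcome paths must be
# prefix-free -- no completed run may properly extend another.

def satisfies_automata_condition(outcome_paths):
    """True iff no path in outcome_paths is a proper prefix of another."""
    for s1 in outcome_paths:
        for s2 in outcome_paths:
            if len(s2) > len(s1) and s2[:len(s1)] == s1:
                return False
    return True

ok  = {("a", "b"), ("a", "c", "d")}      # neither run extends the other
bad = {("a", "b"), ("a", "b", "c")}      # the second run extends the first
print(satisfies_automata_condition(ok), satisfies_automata_condition(bad))  # True False
```

The `bad` set fails because an experimenter watching the system trace ("a", "b") could not tell whether the experiment had already terminated, which is precisely the ambiguity the condition rules out.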

3. Unbiased Conditions

An environmental shell is unbiased if the environment does not influence the outcome. In the strongest sense this demands that, while the environment may record the system's past, it has no effect on the system's future. Naively, this may be written: for every (s, e1), (s, e2) ∈ Int(Z), (s,e1) = (s,e2). However, because in general (s,e1) ≠ (s,e2), the elements of (s,e1) and (s,e2) may terminate at different times; this leads to a definition whose wording is more convoluted, but whose meaning is essentially the same.

Definition 32. An environmental shell, (D, I, F), is strongly unbiased if for every τ ∈ Dom(F), every (s, e1), (s, e2) ∈ F(τ), s1[τ, τ1] ∈ (s,e1) iff there exists an s2[τ, τ2] ∈ (s,e2) s.t., with τ′ ≡ Min(τ1, τ2), s1[τ, τ′] = s2[τ, τ′].

This condition means that any effect the environment may have on the system can be incorporated into the system dynamics, allowing the system to be comprehensible without having to reference its environment. The following theorem shows that if an environmental shell is strongly unbiased, OF behaves like a dynamic space.


Theorem 33. If an environmental shell is strongly unbiased then for every s1[0, τ1], s2[0, τ2] ∈ OF s.t. for some τ ∈ [0, Min(τ1, τ2)], s1(τ) = s2(τ), there exists an s3[0, τ3] ∈ OF s.t., with τ′ ≡ Min(τ2, τ3), s3[0, τ′] = s1[0, τ] s2[τ, τ′]

Proof. First note that, regardless of whether the environmental shell is unbiased, for any p ∈ Int(Z), if s[0, τ] ∈ p and s′[τ, τ′] ∈ p then s[0, τ] s′[τ, τ′] ∈ OF. Taking s = s1(τ), for some e1, e2 ∈ EInt(Z), s1[0, τ] ∈ (s,e1) and s2[τ, τ2] ∈ (s,e2). Since the shell is unbiased, there is an s3[τ, τ3] ∈ (s,e1) s.t. s3[τ, τ′] = s2[τ, τ′] (τ′ ≡ Min(τ2, τ3)). By the considerations of the prior paragraph, s1[0, τ] s3[τ, τ3] ∈ OF.

There are experiments for which the strong unbiased condition can fail, but the experiment still be accepted as valid. Taking an example from quantum mechanics, consider the case in which a particle's position is measured at τ1; if the particle is in region A, the particle's spin will be measured along the y-axis at τ2, and if the particle is not in region A, the spin will be measured along the z-axis at τ2. Now consider two paths, s1 and s2, s.t. s1 is in region A at τ1, s2 is not in region A at τ1, and for some τ ∈ (τ1, τ2), s1(τ) = s2(τ). s1[−∞, τ] s2[τ, ∞] is not a possible path, because a particle can't be in region A at τ1 and have its spin polarized along the z-axis at τ2. Therefore the set of particle paths is not a dynamic space (though it is still a dynamic set). Since the strong unbiased condition ensures that the system can be described by a dynamic space, the unbiased condition must have failed. To see this, assume that the e-automata is in state (s, e1) at τ in the case where the system takes path s1, and state (s, e2) in the case where the system takes path s2. e1 and e2 determine different futures for the particle paths: e1 ensures that the spin will be measured along the y-axis at τ2, while e2 ensures that the spin will be measured along the z-axis at τ2. This violates the strong unbiased condition.

However, it doesn't necessarily create a problem for the experiment, because e1 and e2 know enough about the system history to know that s1[τ, ∞] and s2[τ, ∞] must reside in separate outcomes (one belongs to the set of A outcomes, the other to the set of not-A outcomes); once paths have been differentiated into separate outcomes, they may be treated differently by the environment. This motivates a weak version of the unbiased condition, one that holds only when (s, e1) and (s, e2) have not yet been differentiated into separate outcomes.

For an environmental shell, Z = (D, I, F), p, p′ ∈ Int(Z) have not been differentiated if for some e ∈ EF and τ, p ∩ Oe[0, τ] ≠ ∅ and p′ ∩ Oe[0, τ] ≠ ∅. If (s, e) and (s, e′) have not been differentiated then the unbiased condition should hold for them. Similarly, if (s, e1) and (s, e2) have not been differentiated, and neither have (s, e2) and (s, e3), then the unbiased condition should hold for (s, e1) and (s, e3). This motivates the following definition.

Definition 34. For e-automata Z, e ∈ EF, τ ≥ 0:
|Oe[0, τ]|0 ≡ Oe[0, τ]
|Oe[0, τ]|n+1 ≡ {s[0, τ] : for some e′ ∈ EF, Oe′[0, τ] ∩ |Oe[0, τ]|n ≠ ∅, & s[0, τ] ∈ Oe′[0, τ]}
|Oe[0, τ]| ≡ ⋃n∈N |Oe[0, τ]|n

Note that if s[0, τ0] ∈ Oe then s[0, τ] ∈ Oe[0, τ] iff τ ≤ τ0.

Definition 35. For environmental shell, Z = (D, I, F), p, p′ ∈ Int(Z) are undifferentiated if for some e ∈ EF and τ, p ∩ |Oe[0, τ]| ≠ ∅ and p′ ∩ |Oe[0, τ]| ≠ ∅

Now define the weak unbiased condition in the same way as the strong condition, except that it only applies to undifferentiated states.

Definition 36. An environmental shell, (D, I, F), is weakly unbiased if for every τ ∈ Dom(F), every (s, e1), (s, e2) ∈ F(τ) s.t. (s, e1) and (s, e2) are undifferentiated, s1[τ, τ1] ∈ (s,e1) iff there exists an s2[τ, τ2] ∈ (s,e2) s.t., with τ′ ≡ Min(τ1, τ2), s1[τ, τ′] = s2[τ, τ′].

Theorem 37. If an environmental shell is weakly unbiased then for every s1[0, τ1], s2[0, τ2] ∈ OF s.t. for some τ ∈ [0, Min(τ1, τ2)], s1(τ) = s2(τ) = s and for some e ∈ EF, s1[0, τ], s2[0, τ] ∈ |Oe[0, τ]|, there exists an s3[0, τ3] ∈ OF s.t., with τ′ ≡ Min(τ2, τ3), s3[0, τ′] = s1[0, τ] s2[τ, τ′]

Proof. Choose e1, e2 ∈ EInt(Z) s.t. s1[0, τ] ∈ (s,e1), s2[τ, τ2] ∈ (s,e2), (s,e1) ∩ |Oe[0, τ]| ≠ ∅, and (s,e2) ∩ |Oe[0, τ]| ≠ ∅. Such e1 and e2 must exist because s1[0, τ], s2[0, τ] ∈ |Oe[0, τ]|. (s, e1) and (s, e2) are undifferentiated, and so the proof now proceeds similarly to that for Thm 33.

4. E-Automata

Definition 38. An environmental shell is an e-automata if it is weakly unbiased and satisfies the automata condition.

Theorem 39. If Z is an e-automata, p1, p2 ∈ Int(Z), and p1 ∩ p2 ≠ ∅ then p1 = p2

Proof. Since p1 ∩ p2 ≠ ∅, p1 and p2 are undifferentiated, so the result follows from Z being weakly unbiased & satisfying the automata condition.

As mentioned earlier, an important goal will be to determine what sets of outcomes can be decided by various types of e-automata. Towards that end, the meaning of an e-automata deciding a set of outcomes will now be defined. For the moment we'll concentrate on measurements on dynamic spaces; the more general case will be taken up in a later section.

Definition 40. If X is a set, C is a covering of X if it is a set of subsets of X and ⋃C = X. (Note: it is more common to demand only that X ⊆ ⋃C; the restriction to ⋃C = X will allow for some mild simplification.) C is a partition of X if it is a pairwise disjoint covering.

Definition 41. If Z = (D, I, F) is an e-automata, σ a time, A ⊆ F, and X ⊆ EF:
Aσ ≡ {p[σ, τ] ∈ DI/F : for some p′[σ, τ] ∈ I I A, p[σ, τ] = p′[σ, τ]}
OσX ⊆ SZ is defined analogously.

Because D is homogeneous and I is homogeneously realized, Aσ and OσX are simply A and OX shifted by σ, and theorems for A and OX can be readily translated to theorems for Aσ and OσX.

When an e-automata decides a set of outcomes on a dynamic space, it is fairly straightforward to define the relationship between the dynamic space and the e-automata's outcomes.

Definition 42. A covering, K, of dynamic space D is decided by e-automata Z = (DZ, I, F) if D = DZ and, for some time σ, K = {Oσe : e ∈ EF}.

Note that Z can not decide K if it starts after some σ. This may seem to violate experimental reproducibility. In a later section it will be seen that any such measurement can be consistent with experimental reproducibility. It's not difficult to see how. If we were interested, for example, in the likelihood that a system is in state y at time 1 sec given that it was in state x at time 0, we do not mean that we are only interested in those probabilities at a particular moment in the history of the universe; it is assumed that the system can be assembled at any time, and the time at which it is assembled is simply assigned the value of 0.

If D is a dynamic space, and A ⊆ D, an e-automata can determine whether or not A occurs if it decides a covering on D, K, s.t. for all κ ∈ K, either κ ⊆ A or κ ∩ A = ∅. While this concept of measurement is adequate for many purposes, it's a little too broad for quantum physics; in the next section a subclass of e-automata will be introduced that will eliminate the problematic measurements.
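The covering and partition conditions, and the criterion for determining whether A occurs, can be stated concretely on finite toy models. In the sketch below the sets and helper-function names are ours; covering elements and A are plain Python sets of path labels:

```python
# Finite-model versions of: covering, partition, and "K determines A"
# (each covering element lies inside A or misses A entirely).

def is_covering(K, X):
    return all(k <= X for k in K) and set().union(*K) == X

def is_partition(K, X):
    return is_covering(K, X) and all(
        a.isdisjoint(b) for i, a in enumerate(K) for b in K[i + 1:])

def determines(K, A):
    return all(k <= A or k.isdisjoint(A) for k in K)

X = {"s1", "s2", "s3", "s4"}
K = [{"s1", "s2"}, {"s3"}, {"s4"}]
print(is_partition(K, X),
      determines(K, {"s1", "s2", "s3"}),
      determines(K, {"s1", "s3"}))  # True True False
```

The last call fails because {"s1", "s2"} straddles A: it neither lies inside {"s1", "s3"} nor misses it, so this covering cannot settle whether that A occurred.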

C. Ideal E-Automata

As they stand, e-automata draw no clear distinction between two different types of uncertainty: extrinsic uncertainty (uncertainty about the state of the environment, particularly with regard to knowing precisely which e ∈ EF occurred) and intrinsic uncertainty (uncertainty about the system given complete knowledge of the environment). It is often necessary to treat these two types of uncertainty separately. Such is the case in quantum physics: if s1 and s2 are system paths, there's a difference between "either s1 or s2 was measured, but we don't know which" (extrinsic uncertainty) and "{s1, s2} was measured" (intrinsic uncertainty), because in general P(A ∪ B) ≠ P(A) + P(B). In this section two properties will be introduced that together will remove the ambiguity between these types of uncertainty. These are the only refinements to e-automata that will be required.

1. Boolean E-Automata

Definition 43. An e-automata is boolean if for every e, e′ ∈ EF either Oe = Oe′ or Oe ∩ Oe′ = ∅.

Theorem 44. If K is decided by a boolean e-automata then it is a partition

Proof. Follows from the boolean & automata conditions.

If an e-automata is not boolean, it's possible to have e, e′ ∈ EF s.t. Oe ≠ Oe′ and Oe ∩ Oe′ ≠ ∅. If it's known that e occurred, and so e′ did not occur, it's not clear whether the elements of Oe′ could have occurred. Boolean e-automata eliminate this sort of ambiguity; if e occurred and Oe ≠ Oe′ then none of the paths in Oe′ could have been taken. This means that for boolean e-automata, one can always separate uncertainty about which path(s) corresponding to Oe occurred from uncertainty about which e ∈ EF occurred.

Definition 45. For a boolean e-automata, e ∈ EF, [e] ≡ {e′ ∈ EF : Oe′ = Oe}
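Definition 43 is easy to check on finite data: any two outcome sets must be identical or disjoint. A small sketch, with outcome sets modeled as plain Python sets of path labels (the modeling and names are ours):

```python
# Definition 43 in miniature: an e-automata is boolean when any two outcome
# sets O_e, O_e' are either identical or disjoint.

def is_boolean(outcomes):
    """outcomes: dict mapping final environment state e -> set of paths O_e."""
    sets = list(outcomes.values())
    return all(a == b or a.isdisjoint(b)
               for i, a in enumerate(sets) for b in sets[i + 1:])

boolean_ok = {"e1": {"s1", "s2"}, "e2": {"s1", "s2"}, "e3": {"s3"}}
overlap    = {"e1": {"s1", "s2"}, "e2": {"s2", "s3"}}
print(is_boolean(boolean_ok), is_boolean(overlap))  # True False
```

In the second dictionary, learning that e1 occurred leaves it genuinely unclear whether the overlapping path "s2" should count toward e2's outcome, which is exactly the ambiguity the boolean condition removes.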

2. All-Reet E-Automata

It is possible for the environment to record a piece of information about the system at one point, and then later forget it, so that it is not ultimately reflected in the experimental outcome. This leads to ambiguity of the type discussed above; should the outcome be understood as A ∪ B, or as "A or B" (or as something distinct from either)?

An all-reet e-automata is all-retaining: once it records a bit of information about the system, it never forgets it. In order to define this type of e-automata, it will be helpful to first define the subsets of Dom(Z), Int(Z), F, and OF containing just those path-segments that are consistent with a given environmental path-segment.

Definition 46. For e[0, τ] : [0, τ] → EZ:
e[0, τ] ≡ {p[0, τ] ∈ Dom(Z) : p[0, τ] E = e[0, τ]}
e[0, τ] ≡ {s[0, τ] : (s[0, τ], e[0, τ]) ∈ Int(Z)}
If e(τ) ∈ EF:
e[0, τ] ≡ {p[0, τ] ∈ F : for some p′[0, τ], p[0, τ] = p′[0, τ]}
Oe[0,τ] ≡ e[0,τ] S

Theorem 47. If Z is an e-automata and e ∈ EF then Oe = Proof. Since F = (SZ × EF)
e [0,]e E

Oe [0,]

PD , for any ( s1 [0, ], e [0, ]), ( s2 [0, ], e [0, ]) F , any


e [0,]e E

, s 1 [0, ] Oe 2 [0, ] Oe () i s () ; therefore which case s [0, ] = Op [0, ]E . Therefore Oe

Oe [0,] Oe .

If s [0, ] Oe then there must exist a p [0, ] e s.t. s [0, ] = p [0, ] S , in


e [0,]e E

Oe [0,] .

Let's assume that as an experiment unfolds, the environment sequentially writes everything it discovers about the system to incorruptible memory. The state of the environment would include the state of this memory, so for every e1[0, τ1], e2[0, τ2] ∈ F E, if Oe1[0,τ1] = Oe2[0,τ2] then e1(τ1) = e2(τ2). Since Oe = ⋃e[0,τ]∈e E Oe[0,τ], we are led to the following definition.
Definition 48. For e-automata Z = (D, I, F), e ∈ EF is reet if for all e[0, τ] ∈ e E, Oe[0,τ] = Oe. Z is all-reet if every e ∈ EF is reet.

Theorem 49. If Z is an all-reet e-automata, e ∈ EF, e[0, τ] ∈ e E, and e[0, τ](τ′) = e′ ∈ EF then Oe′ = Oe

Proof. Take τ1 = glb(e[0, τ][EF]); since e[0, τ](τ′) ∈ EF, τ′ ≥ τ1. e[0,τ′] = e[0, τ′] = e[0,τ1], and so Oe′ = Oe[0,τ′] = Oe[0,τ1] = Oe.

Theorem 50. If Z is an all-reet e-automata then for every e ∈ EF, every s1[0, τ1], s2[0, τ2] ∈ Oe, τ1 = τ2
Proof. Take any e [0, ] e E ; Oe = Oe [0,] . Assume that 1 < 2 ; since (I/F ) F

there's a τ ∈ [τ1, τ2) s.t. e(τ) ∈ EF. However, in that case s2[0, τ2] ∈ Oe and s2[0, τ2] ∉ Oe[0, τ] = Oe(τ), contrary to Thm 49.

Definition 51. If Z is an all-reet e-automata, e ∈ EF and s[0, τ] ∈ Oe then τ(Oe) ≡ τ.

Thm 50 implies a similar statement for all of Int(Z).

Theorem 52. If Z = (D, I, F) is an all-reet e-automata then for every p ∈ Int(Z), every s1[0, τ1], s2[0, τ2] ∈ p, τ1 = τ2

Proof. Assume τ1 ≤ τ2. Take any p1[0, τ1], p2[0, τ2] ∈ p and any p[τ1, τ3] ∈ I p F. Because D is homogeneous there must be a p′[τ2, τ3 + τ2 − τ1] ∈ I p F s.t. for all τ ∈ [τ1, τ3], p′(τ + τ2 − τ1) = p(τ). Define p3[0, τ3] ≡ p1[0, τ1] p[τ1, τ3] and p4[0, τ3 + τ2 − τ1] ≡ p2[0, τ2] p′[τ2, τ3 + τ2 − τ1]. If p3[0, τx] ∈ F then p4[0, τx + τ2 − τ1] ∈ F, so by Thm 50, τ1 = τ2.

The next two theorems will prove to be quite useful.

Theorem 53. If Z is an all-reet e-automata then for any e ∈ EF, τ ∈ [0, τ(Oe)], Oe = Oe[0, τ] Oe[τ, τ(Oe)].

Proof. Clearly Oe ⊆ Oe[0, τ] Oe[τ, τ(Oe)]. Take any s[0, τ(Oe)], s′[0, τ(Oe)] ∈ Oe s.t. s(τ) = s′(τ). For e[0, τ] ∈ e E, Oe = Oe s[0, τ], e[0, τ]) (s1[τ, τ(Oe)], e[τ, τ(Oe)]) e[0, τ] = e[0,τ(Oe)] S. ( [0,τ(Oe)], so since Oe = e[0, τ] s′[τ, τ(Oe)] ∈ Oe. [0,τ(Oe)] S, s

Theorem 54. If Z is an all-reet e-automata then for every e EF , every [0, (Oe )], every e (e E )(), Oe [0, ] =
sOe ()

(s,e )

Proof. First note that there exists a e [0, ] e E s.t. e () = e . A:


sOe ()

(s,e ) Oe [0, ]

- Take any s Oe (). Since Z is all-reet, s Oe [0, ] [0, ] (), so there must be a p e () = (s , e ). Since p [0, ] e [, ] I (s , e ) e. Now take any [0, ] s.t. p [0, ] , p s [0, ] (s ,e ) ; there must exist a ( s[0, ], e [0, ]) (s ,e ) . ( s[0, ], e [0, ]) p [, (Oe )] e so s [0, ] Oe [0, ]. B: Oe [0, ] The following are some of the consequences of Thm 54. Theorem 55. If Z is an all-reet e-automata, p, p1 , p2 Int(Z ), e EF , and [0, (Oe )] 1) If (s1 , e1 ), (s2 , e2 ) e () then (s1 , e2 ) e () 2) If (s, e1 ), (s, e2) e () then (s,e1 ) = (s,e2 ) 3) If Z is boolean and all-reet and (s, e1 ), (s, e2 ) [e] () then (s,e1 ) = (s,e2 ) Proof. 1 and 2 are immediate from Thm 54. 3 follows from Thm 54 and the denition of [e].
sOe ()

(s,e )

- If s [0, ] Oe [0, ] then s [0, ] Oe () Oe (), s [0, ] (s,e ) . [0, ] [0, ], so with s = s

3. Ideal E-Automata

Definition 56. An e-automata is ideal if it is boolean and all-reet.

Ideal e-automata lack the ambiguities mentioned at the beginning of the section. The remainder of this section will seek to establish a central property of ideal e-automata, to be given in Thm 59.

Theorem 57. For ideal e-automata Z, for any p1, p2 ∈ Int(Z):
1) If p1 S = p2 S, and for some e ∈ EF, τ ∈ [0, τ(Oe)], p1, p2 ∈ [e](τ), then p1 = p2; in all other cases p1 ∩ p2 = ∅.
2) If p1 = p2 then for all e ∈ EF, τ ∈ [0, τ(Oe)], p1 ∈ [e](τ) iff p2 ∈ [e](τ)

Proof. 1) If the conditions hold then p1 = p2 by Thm 55.3. If p1 S = p2 S then clearly p1 p1 p2 = . p2 = If theres doesnt exist a e EF s.t. p1 , p2 Ran([e] ) then, since Z is boolean, p1 p2 p2 = . By Thm 39, p1 p2 = by Thm 52. If there exits an e EF s.t. p1 , p2 Ran([e] ), but no [0, (Oe )] s.t. p1 , p2 [e] () then p1 2) By Thm 39 p1 p1 = p2 p2 ; since Z is boolean p1 Ran([e] ) i p2 Ran([e] ). From Thm 52 it then follows that p1 [e] () i p2 [e] (). For ideal e-automata, the sets |Oe [0, ]| (see Section III B 3) are particularly useful; they tell you what has been measured as of . Theorem 58. If Z is an ideal e-automata then for every e EF , every [0, (Oe )], every e ([e] E )(), |Oe [0, ]| =
s|Oe [0,]|()

(s,e )

Proof. A: If Z is an ideal e-automata then for every e EF , every [0, (Oe )], every e ([e] E )(), Oe [0, ] =
sOe ()

(s,e ) (s,e ) . By (A)

- Immediate from Thm 54 and Thm 55.3 It is sucient to show that for every n N, |Oe [0, ]|n = For any Oe s.t. Oe [0, ]
sOe () s|Oe [0,]|n ()

this holds for n = 0. Assume it holds for n = i and consider n = i + 1. |Oe [0, ]|i = for every e2 ([e ] E )(), Oe [0, ] =
s|Oe [0,]|i ()

(s,e2 ) . By assumption |Oe [0, ]|i =

(s,e ) , so by Thm 57.1 theres

an s |Oe [0, ]|i () Oe [0, ] =


sOe ()

Oe () s.t. (s,e2 ) = (s,e ) ; by Thm 57.2 (s, e ) [e] (), so by (A) (s,e ) . Since |Oe [0, ]|i+1is the union over all such Oe [0, ] it follows
s|Oe [0,]|i+1 ()

immediately that |Oe [0, ]|i+1 =

(s,e ) .

Theorem 59. If Z is an ideal e-automata then for any e EF , [0, (Oe )], Oe = |Oe [0, ]| Oe [, (Oe )] Proof. With e (e E )(): Oe = Oe [0, ] Oe [, (Oe )] (Thm 53) =( =(
sOe ()

(s,e ) ) Oe [, (Oe )] (Thm 54) Oe [, (Oe )]

s|Oe [0,]|() (s,e ) )

= |Oe [0, ]| Oe [, (Oe )] (Thm 58)

29

D. Ideal Partitions

This section will be concerned with the necessary and sufficient conditions for a partition of a dynamic set to be the set of outcomes of an ideal e-automata. As a warm-up, we'll start by picking up where we left off in Section III B 4 and consider the case where the system is a dynamic space; the more general case, where the system is a dynamic set, will be handled immediately thereafter. In order to attack these problems, several basic definitions first have to be given.

Definition 60. If S is a dynamic set, κ ⊆ S is bounded from above (by τ) if κ = κ[−∞, τ] S[τ, ∞], is bounded from below (by τ) if κ = S[−∞, τ] κ[τ, ∞], and is bounded if it is both bounded from above and bounded from below. If A is a set of subsets of S, A is bounded from above/below (by τ) if every element of A is bounded from above/below by τ; it's bounded if it's bounded from both above and below. "Bounded from below by τ" may be abbreviated bbτ, and "bounded from above by τ" may be abbreviated baτ.

Next, to transfer the concept of |Oe[0, τ]| to partitions:

Definition 61. If K is a covering of dynamic set S, κ ∈ K, τ a time, and x either −∞ or a time:
|κ[x, τ]|0 ≡ κ[x, τ]
|κ[x, τ]|n+1 ≡ {s[x, τ] : for some λ ∈ K s.t. λ[x, τ] ∩ |κ[x, τ]|n ≠ ∅, s[x, τ] ∈ λ[x, τ]}
|κ[x, τ]| ≡ ⋃n∈N |κ[x, τ]|n

By far the most important case is when x = −∞; this is because of its use in defining ideal partitions. Ideal partitions will initially be defined just for dynamic spaces.

Definition 62. If Δ is a partition of dynamic space D, it is an ideal partition (ip) if it is bounded from below, all κ ∈ Δ are bounded from above, and for all κ ∈ Δ and all times τ, κ = |κ[−∞, τ]| κ[τ, ∞].

The result to be attained in this section is that a covering is decidable by an ideal e-automata if and only if the covering is an ideal partition. The first step towards that result is the following theorem.


Theorem 63. If a covering, K , of dynamic space D is decided by an ideal e-automata, Z = (DZ , I, F ), then K is an ideal partition
0 Proof. Since K is decided by Z , for some 0 D , K { Oe : e EF }. For each 0 K dene 0 + (Oe ). = Oe

A: K is bb0 :
0 - Immediate from the denition of Oe -

B: Every K is bounded from above


0 - Immediate from Thm 50 and K = { Oe : e E F } -

C: K is a partition - Immediate from Thm 44
D: For all > 0 , K , |[, ]| = |[0 , ]| - It is sufficient to show that for all n ∈ N, |[, ]|n = |[0, ]|n . By (A), this holds for n = 0. Assume it holds for n = i. By (A) and the assumption on i, for all K , [, ] |[, ]|i = i [0 , ] |[0 , ]|i = . Since |[x, ]|i+1 is equal to the union over the s that intersect |[x, ]|i , |[, ]|i & |[0 , ]|i are intersected by the same set of K , and because all such are bb, |[, ]|i+1 = |[0 , ]|i+1 .
It remains to show that for all K , D , = |[, ]| [, ]. Clearly |[, ]| [, ], so it only needs to be shown that |[, ]| [, ] . For 0 : Immediate from (A). For (0 , ): By Thm 59, [0 , ] = |[0 , ]| [, ]. By (B) and (D), |[, ]| [, ] = ( |[0, ]|) ([, ] ) = (|[0 , ]| [, ]) = [0 , ] = . For : By (B) and (C), |[, ]| = [, ], so |[, ]| [, ] = [, ] [, ] [, ] = .
Because ideal e-automata need only be weakly unbiased, it is insufficient to consider only measurements on dynamic spaces, so let's now consider the more general case of a system that's a dynamic set. The first thing to do is to extend the outer notation to dynamic sets.
Definition 64. If S is a dynamic set and A is a set of partial paths then A , A, and A are defined as before: p ∈ A if there exists a p [x1 , x2 ] ∈ A s.t. p ∈ S [, x1 ] p [x1 , x2 ] S [x2 , ], etc. A is A relativized to S : A ≡ ( A ) ∩ S;

similarly for A and A . As a convenient shorthand, +A ≡ A . If S is a dynamic space then A = A and A = A .
It will also be useful to extend the notion of boundedness to cover various situations that can arise with dynamic sets. (Note: the definition for bounded from above/below was given for dynamic sets, and so still holds.)
Definition 65. If S is a dynamic set and S : If = [, ] then is weakly bounded from above (by ); if = [, ] then is weakly bounded from below (by ). If for all , = [, ], is strongly bounded from below (by ); similarly for strongly bounded from above (by ). If A is a set of subsets of S , A is strongly/weakly bounded from above/below (by ) if every element of A is strongly/weakly bounded from above/below (by ). "Weakly bounded from above by " may be abbreviated wba, "strongly bounded from above by " may be abbreviated sba, etc. If S is a dynamic space there is no difference between being bounded from above/below, weakly bounded from above/below, and strongly bounded from above/below.
Theorem 66. If is wba and > then is wba
Proof. Assume p 1 [, ] ∈ [, ], p 2 [ , ] ∈ S [ , ], and p = p 1 [, ] p 2 [ , ] ∈ S . Since is wba and p [, ] = p 1 [, ] ∈ [, ], p ∈ .
The most inclusive notion of decidability on a dynamic set would be: e-automata (DZ , I, F ) decides covering K on dynamic set S if for some S
1) For all p [, ] OF ,p [, ] S [, ] 2) K = {+Oe : e E F }.

However, this would mean that the interactions between the system and the environment are only regulated while the experiment is taking place; before and after the experiment, any kind of interaction would be allowed, even those that would undermine the autonomy of the system. It is more sensible to assume that these interactions are always unbiased. If the system and environment don't interact outside of the experiment, which would be a natural assumption, then it's certainly the case that the interactions are always unbiased. Because nothing is measured prior to σ, prior to σ there is no distinction between being weakly & strongly unbiased. From σ onward, Thm 37 ought to hold. This leads to the following with regard to the make-up of a system:

Definition 67. If S is a dynamic set, K is a covering of S, and σ a time, S is unbiased with respect to (σ, K) if
1) S is sbbσ
2) For all a ∈ K, all τ > σ, all p1, p2 ∈ σ+|a[σ, τ]|, if p1(τ) = p2(τ) then p1[−∞, τ] p2[τ, ∞] ∈ S.
It may be assumed that K is sbbσ, but it is not demanded.

We're now in a position to define decidability in general.

Definition 68. A covering, K, of dynamic set S is decided by e-automata Z = (DZ, I, F) if S = DZ and for some σ:
1) For all p [, ] OF ,p [, ] S [, ].

2) S is unbiased with respect to (, K ).


3) K {+Oe : e E F }.

Ideal partitions now also need to be generalized for dynamic sets. Denition 69. If a partition of dynamic set S , it is an ideal partition (ip ) if it is strongly bounded from below, all are weakly bounded from above, and for all , S , = |[, ]| [, ]. In the case where S is a dynamic set, this reduces to the prior denition. Now to generalize Thm 63. Theorem 70. If a covering, K , of dynamic set S is decided by an ideal e-automata, Z = (DZ , I, F ), then K is an ideal partition
0 : e EF }. For each Proof. Since K is decided by Z , for some0 S , K {+Oe 0 K dene 0 + (Oe ). = +Oe

A: K is sbb0 - Take any K , 0 ; it is necessary to show that = [, ]. For any p 1 [, ] S [, ], p 2 [, ] [, ] s.t. p 1 () = p 2 (), p = p 1 [, ] p 2 [, ] S 33

because S is unbiased with respect to (, K ). Since 0 , p 2 [0 , ] [0 , ], so since = +[0 , ], p . B: Every K is weakly bounded from above
0 -Immediate from Thm 50 and K = { Oe : e EF } -

C: K is a partition - Follows from the boolean & automata conditions D: For all S , K , [, ] [, ] S - Follows from S being unbiased with respect to (, K ). E: For all > 0 , K , |[, ]| = |[0, ]| - Identical to (D) in Thm 63 It remains to show that for all K , S , = |[, ]| [, ]. As before, |[, ]| [, ], so it only needs to be shown that |[, ]| [, ] . For 0 : Immediate from (A). For (0 , ): Assume p 1 [, ] |[, ]| and p 2 [, ] [, ]. It follows from (E) that p 1 [0 , ] |[0 , ]|, and so from Thm 59 that p 1 [0 , ]p 2 [, ] [0 , ]. It then follows from (A) that p 1 [, ] p 2 [, ] [, ]. Since S is unbiased with respect to (, K ), p 1 [, ] p 2[, ] S . Therefore, since = [, ] , p 1 [, ] p 2[, ] . For : By (B) and (C), |[, ]| = [, ], and by (D) [, ] [, ] [, ] . By (B) = [, ] . And now establish the inverse, that all ips are ideally decidable. Theorem 71. All ideal partitions are ideally decidable Proof. Assume is an ideal partition of dynamic set S . An ideal e-automata, Z = (D, I, F ), that decides will be constructed. 1) Start by constructing a new dynamic space D0 = S E0 as follows: For every D create a set E (), and a bijection b : | [, ]| E () s.t. for all 1 = 2 , E (1 ) E (2 ) = ; ( s, e ) D0 i s S and for all , if s then e () = b (|[, ]|) 2) From D0 construct Z s dynamic space, D : If S is unbounded from below, for every D dene D {p : f or some p D0 , f or all S , p ( + ) = p ( )}. DZ 34

D .

, is unbounded If D is bounded from below, take D0 to be any dynamic space s.t. D0 from below, D0 [0, ] = D0 , and for all < < 0, D0 () D0 ( ) = (this already holds

for all > 0). Construct D as above and take D = D [0, ]. 3) Construct I : Select any 0 D s.t. is bb0 , I D0 (0 ) 4) Construct F : For every select a D s.t. = +[, ]; F

( ) {ba (|[, ]|)}.

Now to show that (D, I, Z ) is an ideal e-automata that decides . A: D0 is a dynamic space - Take any ( s1 , e 1 ), ( s2 , e 2 ) D0 s.t. for some S , ( s1 ( ) , e 1 ()) = ( s2 ( ) , e 2 ()). Take s 1 1 and s 2 2 ; note that since e 1 () = e 2 (), |1 [, ]| = |2 [, ]|. Take ( s, e ) ( s1 [, ] s 2 [, ], e 1 [, ], e 2 [, ]). s |1 [, ]| 2 [, ] = |2 [, ]| 2 [, ] = 2 , so ( s, e ) D0 i for all D , e ( ) = b (|2 [, ]|), which follows from the fact that s 2 . B: D is a dynamic space - For any p PD0 = PD , p is only realized at a single S in D0 . Because D is a union over shifted copies of D0 , paths in D can only intersect if the belong to the same copy; it follows that D is a dynamic space if D0 is. C: D is homogeneous; I is homogeneously realized - Because for any p PD0 , p is only realized at a single S in D0 , D0 must be homogeneous. From the nature of the construction of D from D0 its clear that D is also homogeneous, and that all of PD0 is homogeneously realized in D . D: I = I (I F ) F = I (I F ) - D0 = I I F and so for D0 , I = I (I F ) and F = I (I F ). If this holds for D0 , it must also hold for D E: Z is an environmental shell - All elements of Dom(Z ) can be decomposed into system & environmental states, and F has its own set of environmental states F: Z is weakly unbiased - Recall that I = D0 (0 ). Choose any e1 EF , and take , 1 + 0 S s.t. b0 +1 (|[, 0 + 1 ]|) = e1 (since e1 EF there can only be one such ). Take any e EInt(Z ) , D s.t. e |Oe1 [0, ]| = . It follows that e = b0 + (|[, 0 + ]|). 35

Therefore, if e, e′ ∈ EInt(Z) and for some τ, e↾|Oe1[0, τ]| = e′↾|Oe1[0, τ]|, then e = e′. Z must then be weakly unbiased.
G: Z is an e-automata - Take any s1[0, τ1], s2[0, τ2] ∈ OF s.t. τ2 ≥ τ1 and s1[0, τ1] = s2[0, τ1]. For p1[0, τ1], p2[0, τ2] ∈ F s.t. p1[0, τ1]↾S = s1[0, τ1] and p2[0, τ2]↾S = s2[0, τ2], p1(τ1) = p2(τ1); p2(τ1) ∈ ⋃F, so τ2 = τ1.
H: Z is all-reet - For any e ∈ EF there's only a single e[0, τ] ∈ F↾E, so naturally Oe = Oe[0, τ].
I: Z is boolean - For any s[0, τ] ∈ OF there's only one (s[0, τ], e[0, τ]) ∈ F. Since e(τ) ∈ EF there's only one e ∈ EF s.t. s[0, τ] ∈ Oe.
J: Z decides Π - Given any φ ∈ Π, take e = bφ(|φ[τ0, τφ]|); e ∈ EF and φ = τ0+Oe.

Note that S need not be homogeneous, nor does any part of PS need to be homogeneously realized.[3]

Theorem 72. If Π is an ip and ⋃Π is a dynamic space, then Π is decided by a strongly unbiased ideal e-automata.

Proof. Use the same construction as in the prior theorem. For every (s, e) ∈ D0, e = bφ(|φ[τ, ∞]|), so that (D0)(s,e)↾S = (+|φ[τ, ∞]|)(τ,s). If ⋃Π is a dynamic space, this means that (D0)(s,e)↾S = (⋃Π)(τ,s). Since D(s,e) is simply (D0)(s,e)↾S shifted by τ0 and truncated at the τφ's (for definitions of τ0 and τφ, see (3) and (4) in the proof of the prior theorem), it follows immediately that the e-automata is strongly unbiased.

E. Companionable Sets & Compatible Sets

In the final section of this part, the make-up of ips will be investigated.

Definition 73. If S is a dynamic set:
A ⊆ S is a subspace (of S) if it is non-empty and for every τ, A = A[−∞, τ]A[τ, ∞].
A ⊆ S is companionable if it is a subspace that's weakly bounded from above and strongly bounded from below.

Theorem 74. If Π is an ip and φ ∈ Π then φ is companionable

Proof. For any τ, φ[−∞, τ]φ[τ, ∞] ⊆ |Π[−∞, τ]|φ[τ, ∞]. Since φ = |Π[−∞, τ]|φ[τ, ∞], φ = φ[−∞, τ]φ[τ, ∞]. By the definition of an ip, φ must be strongly bounded from below and weakly bounded from above.

Definition 75. If S is a dynamic set, Θ is a non-empty set of subsets of S, and φ ∈ Θ:
|Θ[−∞, τ]| is defined identically to how it's defined for coverings in definition 61
Θτ(φ) ≡ {θ ∈ Θ : θ[−∞, τ] ∩ |φ[−∞, τ]| ≠ ∅}
Θ is compatible if it is a pairwise disjoint set of companionable sets, strongly bounded from below, and for all φ, θ ∈ Θ, all τ, if θ ∈ Θτ(φ) and p ∈ θ(τ) then φ(τ,p) = θ(τ,p). If the set Θ is understood, |Θ[−∞, τ]| may be written |[−∞, τ]| and Θτ(φ) may be written τ(φ).

Theorem 76. If S is a dynamic set, Θ is a non-empty set of subsets of S, and φ, θ ∈ Θ, then
1) θ ∈ Θτ(φ) iff there's a finite sequence of elements of Θ, (ai)i≤n, s.t. a1 = φ, an = θ, and for all 1 ≤ i < n, ai[−∞, τ] ∩ ai+1[−∞, τ] ≠ ∅
2) θ ∈ Θτ(φ) iff |θ[−∞, τ]| = |φ[−∞, τ]| iff |θ[−∞, τ]| ∩ |φ[−∞, τ]| ≠ ∅

Proof. Immediate from the definition of |Θ[−∞, τ]|.

Theorem 77. Θ is compatible iff it's pairwise disjoint, strongly bounded from below, all φ ∈ Θ are weakly bounded from above, and for all φ ∈ Θ, all τ, φ = |φ[−∞, τ]|φ[τ, ∞].

Proof. (⇒) |φ[−∞, τ]|φ[τ, ∞] = ⋃θ∈τ(φ) θ[−∞, τ]φ[τ, ∞]. Since Θ is compatible, for all θ ∈ τ(φ), p ∈ θ(τ), φ(τ,p) = θ(τ,p). Therefore |φ[−∞, τ]|φ[τ, ∞] = φ. Since φ is companionable, φ = φ[−∞, τ]φ[τ, ∞].
(⇐) For φ, θ ∈ Θ, if θ ∈ Θτ(φ) then |φ[−∞, τ]| = |θ[−∞, τ]|, so θ = |φ[−∞, τ]|θ[τ, ∞], and so for any p ∈ φ(τ) ∩ θ(τ), φ(τ,p) = θ(τ,p). φ[−∞, τ]φ[τ, ∞] ⊆ |φ[−∞, τ]|φ[τ, ∞] = φ, so φ is companionable.

Theorem 78. A covering of a dynamic set is an ip iff it's compatible

Proof. Immediate from Thm 77.

This establishes that if φ is an element of some ip then it's companionable, and if t is a subset of some ip then it's compatible. For dynamic spaces, the converse also holds. For any compatible set, t, an all-reet "not t" can be constructed by taking the paths not in t and grouping them by when they broke off from t, and by which |t[−∞, τ]| they broke off from. t together with the all-reet "not t" forms an ip. When the parameter is not discrete, one mild complication is that, in dealing with paths that broke off from t at τ, we'll need to distinguish the paths that were in t at τ, but aren't in t at any τ′ > τ, from those that are not in t at τ, but were in t at all τ′ < τ.

Definition 79. If t is compatible, S is a dynamic set s.t. ⋃⋃t ⊆ S, and φ ∈ t, then
φ|τ− ≡ {p ∈ S : p[−∞, τ] ∉ |t[−∞, τ]| and for all τ′ < τ, p[−∞, τ′] ∈ |φ[−∞, τ′]|}
φ|τ+ ≡ {p ∈ S : p[−∞, τ] ∈ |φ[−∞, τ]| and for all τ′ > τ, p[−∞, τ′] ∉ |t[−∞, τ′]|}

These sets can be used to construct an ip containing t.

Theorem 80. If D is a dynamic space, φ is a subset of D, and t is a set of subsets of D then
1) If φ is companionable then there exists an ip, Π, s.t. φ ∈ Π
2) If t is compatible then there exists an ip, Π, s.t. t ⊆ Π

Proof. (1) Follows immediately from (2). For (2), assume t is bb and take Π to be the set s.t. t ⊆ Π, if ∼⋃t ≠ ∅ then ∼⋃t ∈ Π, and for all φ ∈ t, τ, if φ|τ+/− ≠ ∅ then φ|τ+/− ∈ Π. Π is then a partition of D. (From here on, the qualifiers "if ∼⋃t ≠ ∅" and "if φ|τ+/− ≠ ∅" will be understood.)

The following is key to establishing that is an ip:


A: For all 1 < 2 , p || 2 (1 ), ( ||2 )(1 ,p) = |[, 2 ]|(1 ,p) - By the denition of || its clearly the case that ( ||2 )(1 ,p) |[, 2 ]|(1 ,p) .

Take any p 1 +|[, 2]| and any p 2 || 1 (1 ) = p 2 (1 ) = p. Take p 2 s.t. p p 1 [, 1 ] p 2 [1 , ] D and note that for all < 2 p [, ] |[, ]|. Assume that for some t, p [, 2 ] [, 2 ]. That would mean | [, 1 ]| = |[, 1 ]| and so p 2 [, 1 ] p [1 , 2 ] = p 2 [, 2 ] [, 2 ]. This contradicts p 2 || 2 , so


there can be no such t. Therefore p | | 2 , and so for all 1 < 2 , ( ||2 )(1 ,p) =

|[, 2 ]|(1 ,p) . + Similarly, for all 1 2 , p ||+ 2 (1 ), ( ||2 )(1 ,p) = |[, 2 ]|(1 ,p) + For any > the following properties therefore hold for || and || : | | is ba and for all all 1 < , p || (1 ), ( || )(1 ,p) = |[, ]|(1 ,p) . + Similarly, for all > , ||+ is ba and for all 1 , p || (1 ), (

| |+ )(1 ,p) = |[, ]|(1 ,p) . As an aside, it follows that the ||


+/

are bb. t()) .

Finally, since t is bb, t = (, D ()

These properties are sufficient to establish that Π is an ip. This result may be generalized for systems that are not dynamic spaces.

Theorem 81. If S is a dynamic set and Π is an ip of S then
1) If σ ∈ Π and φ ⊆ σ is companionable then there exists an ip of S, Π′, s.t. φ ∈ Π′
2) If t is a compatible set of subsets of S s.t. for each φ ∈ t there's a σ ∈ Π with φ ⊆ σ, then there exists an ip of S, Π′, s.t. t ⊆ Π′

Proof. Once again, (1) follows from (2). Assume t is bb and this time take Π′ to be the set s.t. t ⊆ Π′, and for all σ ∈ Π: if (σ − ⋃t) ≠ ∅ then (σ − ⋃t) ∈ Π′, and for all φ ∈ t, τ, if (σ ∩ φ|τ+/−) ≠ ∅ then (σ ∩ φ|τ+/−) ∈ Π′. The nature of these sets is similar to those of the prior proof, except that they're relativized to each σ ∈ Π. The following properties are sufficient to establish that Π′ is an ip: For all < , p ( ; since ( | | ) = (
( | | ))( ), ( +/

( t ) , and for all t, > , if

( || ) = then

+/

( | | ))( ,p) = |[, ]|( ,p) (Assume

( | | ) = , and < , [, ] | [, ]|, so ( ,p) =

| [, ]|( ,p), and so |[, ]|( ,p) ( ,p).)


( | | ))[, ] [ , ]. ( | | + )( ), (

For all , p ( For all > , With X [

( | | + )( ,p) = |[, ]|( ,p)

( | | + ) = (

( | | + ))[, ] [ , ].

( t )](),

( t ) = (,X ) (,X ) .

The first theorem follows from the second, since if D is a dynamic space then {D} is an ip.

IV.

PROBABILITIES

A. Dynamic Probability Spaces

For a single ip, probabilities are no different than in classic probability and statistics[4]:

Definition 82. An ip probability space is a triple, (Π, Σ, P), where Π is an ip, Σ is a set of subsets of Π, and P : Σ → [0, 1] s.t.:
1) Π ∈ Σ
2) If σ ∈ Σ then Π − σ ∈ Σ
3) If A ⊆ Σ is finite then ⋃A ∈ Σ
4) P(Π) = 1
5) If A ⊆ Σ is finite and pairwise disjoint then P(⋃A) = Σσ∈A P(σ)

Very commonly, "finite" in (3) and (5) is replaced with "countable". Countable additivity is invaluable when dealing with questions of convergence; however, convergence will become a more multifaceted issue in the structures to be introduced, so the countable condition has been relaxed & questions of convergence delayed.

Theorem 83. In the definition of an ip probability space, (2) and (3) can be replaced with: If σ1, σ2 ∈ Σ then σ1 ∩ σ2 ∈ Σ

Proof. If (2) holds then (3) is equivalent to: If σ1, σ2 ∈ Σ then σ1 − σ2 ∈ Σ. The equivalence now follows from σ1 − (σ1 − σ2) = σ1 ∩ σ2 and σ1 − σ2 = σ1 ∩ (Π − σ2).

Definition 84. If S1 and S2 are dynamic spaces and Π1 and Π2 are ips of S1 and S2 respectively, then ip probability spaces (Π1, Σ1, P1) and (Π2, Σ2, P2) are consistent if
1) Π1 ∩ Π2 ∈ Σ1 and Π1 ∩ Π2 ∈ Σ2
2) If σ ⊆ Π1 ∩ Π2 then σ ∈ Σ1 iff σ ∈ Σ2
3) For any t ∈ Σ1 ∩ Σ2, P1(t) = P2(t)

If Y is a set of ip probability spaces, Y is consistent if, for any x, y ∈ Y, x and y are consistent.

A dynamic probability space is, essentially, a consistent collection of ip probability spaces.

Definition 85. A dynamic probability space (dps) is a triple (X, T, P) where X is a set of dynamic sets, T is a set of compatible sets, and P : T → [0, 1] s.t.:
1) For every S ∈ X there's a Π ∈ T s.t. Π is an ip of S
2) For every t ∈ T there's a Π ∈ T s.t. Π is an ip of some S ∈ X and t ⊆ Π
3) If t1, t2 ∈ T then t1 ∩ t2 ∈ T
4) If Π ∈ T is an ip of some S ∈ X then P(Π) = 1
5) If t1, t2 ∈ T are disjoint, and t1 ∪ t2 ∈ T then P(t1 ∪ t2) = P(t1) + P(t2)

For dynamic probability space (X, T, P), GT ≡ {Π ∈ T : Π is a partition of some S ∈ X}.

If S is a dynamic space and φ, θ ⊆ S, it's important to stress that axiom 5 means that P({φ, θ}) = P({φ}) + P({θ}) (assuming {φ, θ}, {φ}, {θ} ∈ T); it does not mean that P({φ ∪ θ}) = P({φ}) + P({θ}) (even if {φ, θ}, {φ ∪ θ} ∈ T), and so in general P({φ ∪ θ}) ≠ P({φ, θ}). Note that Thm 83 does not hold for dps's. This is essentially because there is no Z ∈ T s.t. for all t ∈ T, t ⊆ Z; as a result, "not t" is not uniquely defined, and arbitrary finite unions can not be expected to be elements of T (though arbitrary finite intersections are elements of T).

To formally describe the connection between dynamic probability spaces and classic probability theory, the following definitions will be useful.

Definition 86. If (X, T, P) is a dps and Π ∈ GT then TΠ ≡ {t ∈ T : t ⊆ Π} and PΠ ≡ P|TΠ (that is, Dom(PΠ) = TΠ and for all t ∈ TΠ, PΠ(t) = P(t))
If Δ = (X, T, P) is a dps and A ⊆ GT, ΔA ≡ {(Π, TΠ, PΠ) : Π ∈ A}
If Y is a consistent set of ip probability spaces, XY ≡ {⋃Π : (Π, Σ, P) ∈ Y}, TY ≡ ⋃(Π,Σ,P)∈Y Σ, and PY : TY → [0, 1] s.t. if (Π, Σ, P) ∈ Y and t ∈ Σ then PY(t) = P(t).

The following theorem states that a dynamic probability space is simply a consistent set of ip probability spaces.

Theorem 87.
1) If Δ = (X, T, P) is a dps and A ⊆ GT then ΔA is a consistent set of ip probability spaces
2) If Y is a consistent set of ip probability spaces then (XY, TY, PY) is a dps
3) If (X, T, P) is a dps and Y ≡ {(Π, TΠ, PΠ) : Π ∈ GT} then X = XY, T = TY and P = PY
4) If Y is a consistent set of ip probability spaces then Y = {(Π, (TY)Π, (PY)Π) : Π ∈ G(TY)}
Proof. (1) says that a dps can be rewritten as a consistent set of ip probability spaces, (2) says that a consistent set of ip probability spaces can be rewritten as a dps, and (3) & (4) say that in moving between dpss & consistent sets of ip probability spaces, no information is lost. A: For dps (X, T, P ), if t1 , t2 T and is any element of GT s.t. t1 then t1 t2 T - Follows from t1 t2 = t1 t1 consistent follows from 1 follows from axiom 2. (3) & (4) That no information is lost in going from a dps to a set of ip probability spaces follows from (A). Its clear that no information is lost when going from a set of ip probability spaces to a dps. t2 = t1 t2 (note that t2 T ) (1) That the elements of A are ip probability spaces follows from Thm 83. That theyre 1 = 1 ( 1 2 ) (2) Axioms 1, 2, and 4 clearly hold. Axiom 3 follows from Thm 83 and (A). Axiom 5

B.

T-Algebras and GPSs

Very little in the denition of a dps depends on X being a set of dynamic sets or GT being a set of ips. As things often get simpler as they get more abstract, it will be useful to take a step back & generalize the probability theory. Denition 88. A t-algebra is a double, (X, T ), where X is a set of sets and 1) For every x X theres a T s.t. is a partition of x 2) For every t T theres a T s.t. is a partition of some x X and t 3) If t1 , t2 T then t1 t2 T A t-algebra, (X, T ), may be referred to simply by T . As before, GT { T : is a partition of some x X }. Denition 89. A generalized probability space (gps) is a triple, (X, T, P ), where (X, T ) is a t-algebra and P : T [0, 1] s.t. 1) If t T is a partition of some x X then P (t) = 1 2) If t1 , t2 T are disjoint and t1 t2 T then P (t1 t2 ) = P (t1 ) + P (t2 )

42

1.

t and [t]

Denition 90. For t-algebra (X, T ), t T , t {t T : t For A T , A


tA

t = and t

t GT } .

t.

(1) t t, (2) t t, (3) t t, etc. Theorem 91. For gps (X, T, P ), t T , t (n) t 1) If n is odd, P (t) + P (t ) = 1 2) If n is even, P (t) = P (t ) Proof. Simple induction over N+ . Theorem 92. For gps (X, T, P ), t, t T , n, m > 0 1) t (n+m) t i for some t T , t (n) t and t (m) t 2) t (n) t i t (n) t 3) t (2n) t 4) If m < n then (2m) t (2n) t and (2m+1) t (2n+1) t Proof. (1) follows from induction over m. (2) follows from induction over n. (3) follows from induction over n and (1). (4) follows from (1) and (3). Denition 93. If (X, T ) is a t-algebra and t T , [t] Theorem 94. [t] is an equivalence class Proof. By Thm 92.3, t [t] By Thm 92.2, if t1 [t2 ] then t2 [t1 ] By Thm 92.1, if t2 [t1 ] and t3 [t2 ] then t3 [t1 ] Theorem 95. [t] = Proof. [t]
t [t] nN+ nN+

(2n) t.

(2n+1) t
t [t] t [t]

t . t

t i for some t [t], t t . t [t] i for some

n N+ , t (2n) t. So t

t i for some n N+ , t (2n+1) t.

Theorem 96. If t [t] then t [t ] Proof. Follows from Thm 95 and Thm 92.2 Denition 97. A t-algebra is simple if for every t T , t = t. Theorem 98. If T is a simple t-algebra and t T then [t] = t and [t] = t Proof. Clear 43

2.

(XN , TN , PN )

In general, dpss will not have simple t-algebras. This can be seen from the fact that, if t1 t2 , t2 t3 , and t4 t3 , t1 ip. When t-algebras are not simple, they can get fairly opaque. For example, its possible to have t1 [t2 ], and t = t1 t2 , but t1 t / [t2 t], even though its clear that for any probability function P , P (t1 t) = P (t2 t). This could never happen for a simple t-algebra. Fortunately, starting with any t-algebra, its always possible to use and [t] to build an equivalent simple t-algebra. This will be done iteratively, the rst iteration being the t-algebra T1 : Denition 99. For t-algebra (X, T ), For t, t T , t t if for some t1 , t2 T , t1 [t2 ], t t1 and t t2 T1 {t t : t t } (t ) : t T and t [t]} X1 {(t) t4 will be a partition of D , but will generally not be an

Theorem 100. If (X, T ) is a t-algebra then (X1 , T1 ) is a t-algebra Proof. Axioms (1) and (2) hold by the denitions of and X1 .
For axiom (3), given any t1 , t2 T1 , there exist t1 , t 1 , t2 , t2 T s.t. ti ti and ti = ti

t i.

Dene t3 (t1 t2 ) t 2 and t4 (t1 t2 ) t2 ; t3 , t4 T ; since t3 t1 and t4 t1 , t3 t4 ,

so t3

t4 T1 . t1 t2 = t3

t4 so t1 t2 T1 .

We now turn to constructing a probability function for T1 . Theorem 101. If (X, T, P ) is a gps, t1 t2 , t3 t4 , and t1 t2 = t3 t4 , then P (t1 ) +

P (t2 ) = P (t3 ) + P (t4 ) [0, 1]; further, if t1 [t2 ] then P (t1 ) + P (t2 ) = 1 Proof. Take t1 = t1 t3 , t2 = t1 t4 , t3 = t2 t3 , t4 = t2 t4 . All ti are disjoint, and each ti

is a union of two of the tj , so P (t1 ) + P (t2 ) = P (t1 ) + P (t2 ) + P (t3 ) + P (t4 ) = P (t3 ) + P (t4 ). If t1 [t2 ] then P (t1 ) + P (t2 ) = 1 by Thm 91.1. More generally, if t1 t2 , for some t5 , t6 T , t6 [t5 ], t1 t5 , and t2 t6 . P (t5 ) + P (t6 ) = 1, so P (t1 ) + P (t2 ) [0, 1]. Denition 102. If (X, T, P ) is a gps, P1 : T1 [0, 1] s.t. for t1 , t2 T , t1 t2 , P1 (t1 P (t1 ) + P (t2 ). 44 t2 ) =

Theorem 103. If (X, T, P ) is a gps then (X1 , T1 , P1 ) is a gps. Proof. Follows from Thm 100 and Thm 101. This process can now be repeated; starting with (X1 , T1 , P1 ), (X2 , T2 , P2 ) can be constructed as ((X1 )1 , (T1 )1 , (P1 )1 ). Denition 104. If (X, T, P ) is a gps, (X0 , T0 , P0 ) (X, T, P ) and (Xn+1 , Tn+1 , Pn+1) ((Xn )1 , (Tn )1 , (Pn )1 ). Theorem 105. If (X, T, P ) is a gps 1) (Xn , Tn , Pn ) is a gps 2) t Tn i for some nite, pairwise disjoint A T with not more than 2n members s.t. A GT n , some B A, t = B A then Pn (t) =
xA

3) If t Tn and A T is nite, pairwise disjoint, and t = 4) If t Tn and t Tm then Pn (t) = Pm (t)

P (x)

Proof. For 1-3, Straightforward induction. (4) follows from (2) & (3). Denition 106. If (X, T ) is a t-algebra XN TN
nN+ nN+

Xn Tn

If (X, T, P ) is a dps, PN : TN [0, 1] s.t. if t Tn then PN (t) = Pn (t) Theorem 107. If (X, T, P ) is a gps then (XN , TN , PN ) is a simple gps Proof. That (XN , TN ) is a t-algebra follows from the fact that if t TN then for some m N+ , all n > m, t is an element of Tn , and all Tn are t-algebras. That (XN , TN , PN ) is a gps follows similarly. It remains to show that TN is simple. Take n to be dened on Tn and N to be dened on TN . For t TN , if t N N N t then for some t1 , t2 TN , t N t2 , t2 N t1 , and t1 N t, so for some m, n, p N, t m t2 , t2 n t1 , and t1 n t, in which case, with q = lub({m, n, p}), t q+1 t, and so t N t The following will be of occasional use. Denition 108. [t]n and n t may be use to refer to [t] and t on (Xn , Tn ). [t]N and N t may be use to refer to [t] and t on (XN , TN ). 45

Theorem 109. If (X, T, P ) is a gps 1) t TN i for some nite, pairwise disjoint A T s.t. t= B A TN then PN ( A) = tA P (t) 2) If A T is nite, pairwise disjoint, and Proof. Follows from Thm 105, A GT N , some B A,

3.

Convergence on a GPS

Earlier, the question of countable convergence was deferred. Some basic concepts will now be presented. Denition 110. For t-algebra (X, T ), A is a c-set if it is a countable partition of some y X , and there exists a sequence on A, (An )nN , (A = {An : n N}) s.t. for all n N,
i n

Ai T .

Theorem 111. If (X, T ) is a t-algebra, A is a c-set i it is a countable partition of some y X and for any nite B A, BT

Proof. There exists a sequence (An )nN s.t. A = {An : n N} and for all n N,
i n

Ai T .
j i

For all Ai A, Ai = number. B

Aj

k i1 Ak ,

so Ai T . B

For each b B there exists a unique i N s.t. Ai = b; take k to be the largest such
i k

Ai T . Since (X, T ) is a t-algebra, B is a nite subset of T , and B T.

is a subset of an element of T , Immediate.

If (X, T ) is a t-algebra and (tn )nN is a sequence of elements of T s.t. for all i N, ti ti+1 and
iN ti

is the partition of some S X , then A {ti+1 ti : i N} is a c-set.


j i

Further, if A is a c-set and (An )nN is any sequence of its elements then ti = such a sequence of elements of T , so these two notions are equivalent. Theorem 112. If (X, T, P ) is a gps and A is a c-set then Proof. If (An )nN is any sequence on A, for all N N,
n N xA

Aj form

P (x) 1.
n N

P (An ) = P (

An ) 1.

Denition 113. A gps, (X, T, P ), is convergent if for every c-set, A, 46

xA

P (x) = 1.

Theorem 114. If (X, T, P ) is a convergent dps, A and B are c-sets, Z A, Y B, and Z= Y then
xZ

P (x) =

xY

P (x). b and = } and V (B Y ) C. V

Proof. Take C { : f or some a Y, b Z, = a there exits a nite b B s.t. & V =


xB Y xY xC

has the following properties: every element of V is an element of T ; for every nite V b, and so T ; and nally V is pairwise disjoint

B . Therefore V is a c-set. P (x) +


xC xZ xC

P (x) =
xY

xV

P (x) = 1 =

xB

P (x) =

xB Y

P (x) +

P (x), so P (x) =

P (x) =

P (x). Proceeding similarly with (A Z )


xZ

C yields

P (x). Therefore

P (x) =

xY

P (x). Z }.

Denition 115. If (X, T ) is a t-algebra, Tc {x : F orsomec-set, A, someZ A, x = Pc ( B ) = P (x).

If (X, T, P ) is a convergent dps, Pc : Tc [0, 1] s.t. if A is a c-set and B A then


xB

Theorem 116. If (X, T, P ) is convergent A Tc is countable, pairwise disjoint, and Tc then


xA

Pc (x) = Pc ( A). Bx = x. Take Z to be

Proof. For every x A take Yx to a be a c-set s.t. for Bx Yx , a c-set s.t. for D Z , Bx , b D, = a Each (Yx Bx ) D= b & = }. Note that Cx is a c-set, so A) =
xA tCx tCx

A. Finally, for each x A, take Cx { : f or some a Cx = x. Pc (t) = Pc (x). With C


xA xA

C x , (Z D )

is also a c-set, so Pc (

Pc (t) =

Pc (x).

If (X, T ) is a t-algebra, theres no guarantee that (X, Tc ) will be a t-algebra; in particular, if A and B are c-sets, ( A) ( B ) might not be an element of Tc .[5] Also, if (X, T, P ) is convergent, (XN , TN , PN ) may not be (though if (XN , TN , PN ) is convergent then (X, T, P ) is). Because convergence on (XN , TN , PN ) is a useful quality for a gps to posses, the following simplied notation will be used. Denition 117. If (X, T ) is a t-algebra, T (TN )c . If (X, T, P ) is a dps, it is -convergent if (XN , TN , PN ) is convergent. If (X, T, P ) is an -convergent dps then P (PN )c . The easiest way to remember this is that has absolutely nothing to do with these properties.

47

4.

S TS , TS N , and TN

Denition 118. If (X, T ) is a t-algebra and S X then TS {t T : f or some GT ,


S TS N (TS )N and TN (TN )S . S It follows readily that ({S }, TS ) is a t-algebra, both ({S }, TS N ) and ({S }, TN ) are simple S S t-algebras, and TS N TN . One question that arises is, under what conditions will TS N = TN ;

= S and t }

that is, when does the rest of T yield no further information about the probabilities on S ? There are a number of ways in which this can occur; the most obvious is: for all S X , S = S, S S = (or more generally, for all GT S , GT S , if S = S then = ). In such cases, TS is completely independent of the rest of T . Another way in

S which TS N = TN is if the structure of T as a whole can be mapped onto TS ; this possibility

will now be sketched. (In the following denition, P(S ) is the power set of S ) Denition 119. If (X, T ) and ({S }, T ) are t-algebras, f : if 1) If GT then 3) If f [ ] GT f ( ) = 2) If GT , , , and = then f () T and S then f () = {} T P( T ) reduces T to T

Theorem 120. If f reduces T to T then it reduces T1 to T1

Proof. For t1 , t2 T , assume t1 t2 ; by (2), f [t1 ] GT , so f [t1 ] f [t1 ] (n)

f [t2 ] = and by (1)

f [t1

t2 ]

f [t2 ]. It follows from the denition of (n) that if t1 (n) t2 then T1 = T.

f [t2 ]. (1) & (2) in the denition of reduction follow immediately from this.

(3) continues to hold because

Theorem 121. If f reduces T to T then it reduces TN to TN Proof. By Thm 120, f reduces Tn to Tn .

If GT N then for some n, GT n , and so (1) and (2) hold. (3) hold because T.

TN =

Theorem 122. If (X, T ) is a t-algebra, S X , and there exists an f that reduces T to TS


S then TN = TS N .

48

S Proof. In all cases, TS N TN S Take any t TN and any GT N s.t. t . By Thm 121 f reduces TN to TS N , so by

(1) and (3) in the denition of a reduction, t


S TN TS N .

f [ ] GTS N ; therefore t TS N and so

C.

Nearly Compatible Sets

Now to return to dpss and, in particular, the (XN , TN , PN ) construction for dpss. Because the only additional requirement imposed on dpss is that all GT must be ips, and because this only places an indirect restriction on the makeup of X in dps (X, T, P ), dpss of the form ({S }, T, P ) will be of greatest interest. Naturally, any dps may be decomposed into a set such dpss, one for each S X , and the results of this section may be applied to those component parts. Denition 123. A set of companionable sets, A, is nearly compatible if it is pairwise disjoint and for every , A, every (, p) Uni( (,p) (,p) = . A) either (,p) = (,p) or

All compatible sets are nearly compatible, but the converse does not hold. The basic result of this section is that for a dps, ({S }, T, P ), while elements of GT N need not be compatible, they must be nearly compatible. Denition 124. A is a nearly compatible collection (ncc ) if it is pairwise disjoint and is nearly compatible. A is an S -nccp if it is an ncc and A is a partition of S . A

Theorem 125. If S is a dynamic set and {t1 , t2 }, {t2 , t3 }, and {t3 , t4 } are S -nccps, then so is {t1 , t4 }. Proof. Take t1 , t4 and assume (,p) (,p) s [, ] (,p) . Since t1 (,p) = (,p) = (,p) . Thm 125 is the key property of nearly compatible sets. Essentially, if Y is an attribute s.t. all ips are Y , and Y is transitive in the sense of Thm 125, then all GT N are Y . Near compatibility is a close approximation of compatibility that possesses this property. 49 t2 , t2 t3 , and t3 (,p) = . With s [, ]

(,p) , there must be a t2 and a t3 s.t.

s [, ] (,p) and

t4 are nearly compatible, (,p) =

Theorem 126. If ({S }, T, P ) is a dps, t1 T , and t2 [t1 ] then {t1 , t2 } is an S -nccp. Proof. If t1 t2 then t1 and t2 are disjoint and t1 125. Note that if t1 [t2 ] there are no grounds to conclude that t1 general it wont be. Theorem 127. If ({S }, T, P ) is a dps, x1 Tn , and x2 n [x1 ]n then {x1 , x2 } is a S -nccp. Proof. Holds for n = 0 by Thm 126. Assume it holds for n = m. For n = m + 1, if x2 m+1 x1 then x1 and y1 y2 = x1 x2 GT (m+1) ; therefore there must be a y1 , y2 Tm s.t. y2 m [y1 ]m x2 = y1 y2 is nearly compatible, so t2 is compatible, and in t2 is a compatible (and so nearly

compatible) partition, so {t1 , t2 } is an T -nccp. The result is now immediate from Thm

x2 . By assumption on n = m, x1

{x1 , x2 } is a S -nccp. From Thm 125 it then follows that if x2 m+1 [x1 ]m+1 then {x1 , x2 } is an S -nccp. Theorem 128. If ({S }, T, P ) is a dps then 1) If GT N then for some nite S -nccp A T , = 2) If t TN then for some nite ncc, A T , t = Proof. Immediate from Thm 127. The richer a dpss t-algebra is, the more information the dps carries. A t-algebra contains minimal information if GT N contains only the original ips; in that case, no information can be derived from the dps as a whole that isnt known by considering the individual (, T , P ) in isolation. As the t-algebra grows richer, more information can be derived from it. Thm 128 indicates the outer limit on the information that can be derived from a dps on a single dynamic set. Denition 129. If ({S }, T, P ) is a dps, it is maximal if GT N i theres a nite S -nccp, A T , s.t. A = . A A

If ({S }, T, P ) and ({S }, T , P ) are dpss, T T , and P = P |T , then if T is maximal, ({S }, T , P ) will give no further information about ({S }, T, P ). However, if ({S }, T, P ) is not maximal, but ({S }, T , P ) is, this immediately yields a wealth of information about

50

({S }, T, P ), indeed more information than ({S }, TN , PN ) yields; it means that the following construct forms a simple gps: GT e { : f or some f inite S -nccp, A T, Pe : Te [0, 1] s.t. if A T is a nite ncc and
t A

A} A }
A t Te then Pe (t) = PN (t) =

Te {t : f or some f inite ncc, A T, some GT e , t = P (t ).

Note that Te = TN only if ({S }, T, P ) is maximal. In all other cases, ({S }, Te , Pe ) contains more information than ({S }, TN , PN ). For -convergent dpss, this can be pushed further. Denition 130. If ({S }, T, P ) is a dps, it is -maximal if for every countable S -nccp, A T , every nite B A, B TN . ({S }, T, P ) is -maximal if it is -maximal and -convergent. For an -maximal dps, every countable S -nccp composed of elements of T is a c-set in TN . This means that for a -maximal dps, GT i theres a countable S -nccp, A T , s.t. A = . It was mentioned previously that, in general, ({S }, T , P ) is not a gps; however it can be seen that for -maximal dpss, ({S }, T , P ) is not only a gps, its a simple gps. Returning to the case where ({S }, T, P ) and ({S }, T , P ) are dpss, T T , and P = P |T , if ({S }, T , P ) is -maximal then the following construct forms a simple gps: GT { : f or some countable S -nccp, A T, P : T [0, 1] s.t. if A is a countable ncc and
t A

A} A }
A t T then P (t) = P (t) =

T {t : f or some countable ncc, A T, some GT e , t = P (t ).

When its applicable, this is, of course, a very useful construct.

D.

Deterministic & Herodotistic Spaces

A system is deterministic if a complete knowledge of the present yields a complete knowledge of the future; it is herodotistic if complete knowledge of the present yields a complete knowledge of the past. More formally: Denition 131. A dynamic set, S , is deterministic if for every (, p) Uni(S ), S(,p) is a singleton (that is, S(,p) has only one element). 51

S is herodotistic if for every (, p) Uni(S ), S(,p) is a singleton. In this section deterministic and herodotistic dpss will be investigated, as will deterministic/herodotistic universes, where the e-automata as whole is deterministic/herodotistic though the system being measured might not be. It will follow from the various results that for the probabilities seen in quantum physics to hold, neither the systems being investigated nor the universe as a whole can be either deterministic or herodotistic. No other material in this article will depend on the material in this section.

1.

DPSs on Deterministic & Herodotistic Spaces

Theorem 132. If a dynamic set, S , is deterministic or herodotistic then S is a dynamic space all its subsets are subspaces. Proof. For A S , p 1 , p 2 A, p 1 () = p 2 (), if S is herodotistic then p 1 [, ] = p 2 [, ] so p 1 [, ] p 2[, ] = p 2 A. Similarly, if S is deterministic then p 1 [, ] p 2[, ] = p 1 A Theorem 133. 1) If D is deterministic then for any X D , D , X = +X [, ] 2) If D is herodotistic then for any X D , D , X = +X [, ] Proof. 1) For any p [, ] X [, ] there exists only one p D s.t. p [, ] = p [, ]. 2) Similar Theorem 134. 1) If D is deterministic then A, B D are compatible i they are disjoint and bounded from below 2) If D is herodotistic then A, B D are compatible i they are disjoint and bounded from above Proof. 1) If A and B are disjoint then A[, ] 2) If p A() B [, ] = for all .

B () then since (, p) has only one element, A(,p) = B(,p) .

Denition 135. If ({S }, T, P ) is a dps, it is closed under combination if for all pairwise disjoint A T s.t. A is an ip, A GT .

52

Theorem 136. If (X, T, P ) is a dps and for all S X , S is either deterministic or herodotistic and ({S }, TS , Ps ) is closed under combination, then for all t1 , t2 T s.t 1) If GT and t1 then ( t1 ) 2) P (t1 ) = P (t2 ) Proof. (1) follows from Thm 134 and (2) follows from (1). Note the importance of closure under combination. One can map any dps, (X, T, P ), it onto a herodotistic dps as follows. For every S X , p S , dene ph by ph () = S }, Sh is herodotistic. One can then readily construct ( p( ) , p [0, ]). With Sh {ph : p (Xh , Th , Ph ), which will be a dps, but in most cases will not be closed under combination; indeed, since th = th i t= t and P (t) = Ph (th ), the conclusion of above theorem will generally not apply to (Xh , Th , Ph ). t2 GT t1 = t2

2.

DPSs in Deterministic Universes

Theorem 137. If partition is decided by an idea e-automata, Z = (D, I, F ) s.t. D is deterministic, then for some 0 , all , is sbb0 and wba0 . Proof. A: For every (s, e), (s, e ) I , (s,e) = (s,e ) - With s [0, 0] s.t. s [0, 0](0) = s, s [0, 0] (s,e) result follows from Thm 39. B: Taking S , for any s S (0), (OF )(0,s) is either a singleton or empty (s,e ) (since (s, e), (s, e ) I ), so the

- Follows from denition of deterministic and (A) C: For some 0 , all , theres a , s.t. is sbb0 , wba , and for all s (0 ), s [0 , ] is a singleton - Follows from (B) and the denition of being decided by an e-automata D: For all , , if = then (0 ) (0 ) = - Follows from (B) and the denition of being decided by an e-automata The theorem is immediate from (C) & (D). Denition 138. If Y is a collection subsets of dynamic set S then Y is cross-section (at ) if it is pairwise disjoint and for all y Y , y is a subspace, sbb, and wba. Theorem 139. 1) If Y is a cross-section then its compatible. 2) If Y is cross-section at then { Y } is a cross-section at 53

Proof. Clear Denition 140. A t-algebra is deterministically decidable if for all t T , t is a cross-section. It is deterministically normal if it is deterministically decidable and 1) If t T then { t} T 2) If A T is nite and A is a partition & a cross-section then A GT .

(X, T, P ) is a deterministically decidable dps if (X, T ) is deterministically decidable & deterministically normal if (X, T ) is deterministically normal. Theorem 141. If (X, T, P ) is deterministically normal dps 1) For any t, t T , if { t [t]. 2) For all t1 , t2 T s.t. t1 = t2 , P (t1 ) = P (t2 ) { ( t)} is a cross-section. Since {t }, {t} t = ( t), t, t } is a cross-section and a partition of some S X then

Proof. 1) For any GT s.t. t , t t {t } is a cross-section. Therefore t of GT .

t , and {t} {t } are all elements

2) Take any GT s.t. t1 . It follows from (1) that t1 [t2 ], so t1 [t2 ] If X = {S } then (1) becomes - For any t, t T , t [t] i { t, and a partition of S . t } is a cross-section

3.

DPSs in Herodotistic Universes

Denition 142. If t is a compatible set, it is herodotistically decidable if for some 0 , all t, theres a s.t. = +[0 , ], and for all s ( ), s [0 , ] is a singleton. Under these conditions, t is said to be born at 0 A t-algebra (X.T ) is herodotistically decidable if every t T is herodotistically decidable (which holds i every GT is herodotistically decidable). Theorem 143. If ip is decided by an ideal e-automata, (D, I, F ), and D is herodotistic, then is herodotistically decidable. Proof. If D is herodotistic then for any e F , any (s, e ) e ((Oe )), (s,e ) is a singleton, and by Thm 55.3 for any (s, e ), (s, e ) e ((Oe )), (s,e ) = (s,e ) . The theorem then follows immediately from the denition of being decided by an ideal e-automata. 54

Theorem 144. If (t1 )

(t2 ) = , both t1 and t2 are herodotistically decidable, and there

exists a s.t. both t1 and t2 are born at , then t1 and t2 are nearly compatible Proof. Immediate from the denitions of herodotistically decidable and nearly compatible.

Let's say that for a herodotistically decidable t-algebra, (X, T), t1, t2 ∈ T are sympathetic if ∪t1 = ∪t2 and there exists a t ⊃ t1 s.t. both t2 and t are born at the same τ0. Thm 144 means that for a maximal dps (or a dps that can be embedded in a maximal dps), if t1 and t2 are sympathetic then P(t1) = P(t2). It will soon be seen that this is incompatible with quantum probabilities. Note that being herodotistic has a less dramatic effect on an ideal e-automata than being deterministic. This is because, for ideal e-automata, the environment is assumed to remember something about the past (due to being all-reflect), but to know little about the future (due to being unbiased).

V. APPLICATION TO QUANTUM MEASUREMENT

A. Preliminary Matters

In this part, results from the prior sections will be applied to quantum physics as described by the Hilbert Space formalism. In order to do this, a few preliminary matters need to be addressed.

1. A Note On Paths

Quantum physics is often viewed in terms of transitions between states, rather than in terms of system paths; an obvious exception being the path integral formalism. Before proceeding, it may be helpful to start by describing conditions under which state transitions can be re-represented as paths. Start by taking (τ1, s1) → (τ2, s2) to mean that state s1 at time τ1 can transition to state s2 at time τ2. Paths may then be defined as any parametrized function, s̄, s.t. for all τ1 < τ2, (τ1, s̄(τ1)) → (τ2, s̄(τ2)). To show that this set of paths is equivalent to the

transition relation, it must be shown that if (τ1, s1) → (τ2, s2) then there exists a path, s̄, s.t. s̄(τ1) = s1 and s̄(τ2) = s2 (the converse clearly holds). In quantum mechanics, the relation has the following properties:
1) If (τ1, s1) → (τ2, s2) then τ1 < τ2
2) For all (τ, s), all τ′ < τ, there's an s′ s.t. (τ′, s′) → (τ, s)
3) For all (τ, s), all τ′ > τ, there's an s′ s.t. (τ, s) → (τ′, s′)
4) For all (τ1, s1), (τ2, s2) s.t. (τ1, s1) → (τ2, s2), all τ1 < τ′ < τ2, there's an s′ s.t. (τ1, s1) → (τ′, s′) and (τ′, s′) → (τ2, s2).
If the measurements are strongly unbiased, then the following also holds:
5) If (τ1, s1) → (τ2, s2), and (τ2, s2) → (τ3, s3), then (τ1, s1) → (τ3, s3).[6]
If measurements are unbiased, but not strongly unbiased, there still exists a covering of the parameter s.t. within each element of the covering 1-5 hold. To establish that (τ1, s1) → (τ2, s2) iff there exists a path, s̄, s.t. s̄(τ1) = s1 and s̄(τ2) = s2, it is sufficient to establish it within each element of the covering. As it turns out, statements 1-5 are sufficient for accomplishing this. The key step is to show that if (τ0, s0) → (τ1, s1) then there's a partial path, s̄[τ0, τ1], s.t. s̄[τ0, τ1](τ0) = s0 and s̄[τ0, τ1](τ1) = s1. If the parameter is discrete this can be readily shown using statement 5 and induction. The proof when the parameter is a continuum will now be briefly sketched. First, for each τ0 < τ < τ1, form the set of all s s.t. (τ0, s0) → (τ, s) and (τ, s) → (τ1, s1). For τ1/2 ≡ ½(τ0 + τ1) choose an s1/2 from τ1/2's set. Now for each τ0 < τ < τ1/2 form the set of all s s.t. (τ0, s0) → (τ, s) and (τ, s) → (τ1/2, s1/2), and for each τ1/2 < τ < τ1 form the set of all s s.t. (τ1/2, s1/2) → (τ, s) and (τ, s) → (τ1, s1). For τ1/4 ≡ ½(τ0 + τ1/2) and τ3/4 ≡ ½(τ1/2 + τ1), choose an s1/4 and an s3/4 from the newly formed sets and repeat the process. When this has been done for all τ = m/2^n, at each of the other τ's take the intersection of all the formed sets, and select one element (the intersection must be non-empty). The set of all the selected (τm/2^n, sm/2^n) and all the (τ, s)'s chosen from the intersections is the graph of a partial path running from (τ0, s0) to (τ1, s1). Just as one can start with the transition relation and use it to define a set of paths, one can also start with a set of paths, and from it derive the transition relation: If S is a dynamic set, (τ1, s1) → (τ2, s2) if S(τ1,s1)(τ2,s2) ≠ ∅. With the relation defined in this way, 1-4 above will always hold; if S is a dynamic space, 5 will also hold. However, if the dynamic set is not a dynamic space, the two representations may not be equivalent; paths can contain more information. To see this, consider the case where (τ1, s1) → (τ2, s2), (τ2, s2) → (τ3, s3),

and (τ1, s1) → (τ3, s3); this says that a path runs from (τ1, s1) to (τ2, s2), a path runs from (τ2, s2) to (τ3, s3), and a path runs from (τ1, s1) to (τ3, s3), but it doesn't guarantee that any individual path runs through all three points. However, when constructing paths from the transition relation, it might be possible to construct such a path. Therefore, starting with a dynamic set S, using S to construct the relation, →, then using → to construct the set of paths, S′, you can have S ⊂ S′, the elements of S′ ∖ S being non-existent paths that can't be ruled out based on the transition relation alone. From here on out we will revert to the path formalism.
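For a discrete parameter, the induction mentioned above (using statement 5) can be made concrete. The following is an illustrative sketch only — the one-step relation `step`, the state sets, and the function names are invented for the example, not taken from the text:

```python
# Sketch: build a path between two related points of a discrete-parameter
# transition relation, assuming properties 1-5 from the text hold.
# The toy relation here is invented for illustration.

def make_trans(step):
    """Transitive closure of a one-step relation step[(t, s)] = set of next states."""
    def trans(t1, s1, t2, s2):
        if t1 >= t2:
            return False
        frontier = {s1}
        for t in range(t1, t2):
            frontier = {s for p in frontier for s in step.get((t, p), set())}
        return s2 in frontier
    return trans

def find_path(step, states, start, end):
    """Trace a path from start=(t0, s0) to end=(t1, s1), one time step at a time."""
    (t0, s0), (t1, s1) = start, end
    trans = make_trans(step)
    assert trans(t0, s0, t1, s1), "endpoints must be related"
    path, cur = [s0], s0
    for t in range(t0 + 1, t1):
        # property 4 guarantees a candidate exists; property 5 (transitivity)
        # lets the partial path always be extended toward the endpoint
        cur = next(s for s in states[t]
                   if s in step[(t - 1, cur)] and trans(t, s, t1, s1))
        path.append(cur)
    path.append(s1)
    return path

# toy example: two states 'a', 'b' at each of times 0..3; 'a' can go anywhere,
# 'b' can only stay 'b'
step = {(t, s): ({'a', 'b'} if s == 'a' else {'b'}) for t in range(3) for s in 'ab'}
states = {t: ['a', 'b'] for t in range(4)}
print(find_path(step, states, (0, 'a'), (3, 'b')))
```

Property 4 guarantees the `next(...)` call always finds a candidate; property 5 is what makes the step-by-step extension equivalent to the full relation.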

2. Discretely Determined Partitions

The mathematical framework employed in quantum mechanics limits the types of measurements that the theory can talk about. These limitations are of essentially two types. First, because the probabilities are defined directly on the outcomes, rather than on the sets of outcomes, partitions have to be at most countable. Second, because measurements are described by projection operators on the state space, experimental outcomes correspond to sequences of measurements of S(τ) at discrete values of τ (S being the system's dynamic set). The following definitions will allow us to work within these constraints.

Definition 145. For dynamic set S, and L a subset of the parameter: σ ⊂ S is determined on L if

σ = ∩_{τ∈L} S(τ,σ(τ)) = {p̄ ∈ S : for all τ ∈ L, p̄(τ) ∈ σ(τ)}

(essentially, σ = S(τ1,σ(τ1))...(τi,σ(τi))...). If π is a partition of S, π is determined on L if all σ ∈ π are determined on L. σ is discretely determined if it is determined on some discrete L; similarly for partitions. Note that if σ is determined on L, and L ⊂ L′, then σ is determined on L′.

Quantum probabilities are calculated using equations of the form

P(A1, A2, ... |ψS) = ⟨ψS, τ0|P(A1; τ1)P(A2; τ2)...P(An; τn)P(An; τn)...P(A2; τ2)P(A1; τ1)|ψS, τ0⟩

where ψS is the initial system state and P(Ai; τi) is the projection operator onto state space region Ai at τi. This manner of calculation necessarily limits the formalism to partitions composed of discretely determined measurements. Before moving on, let's briefly take a closer look at the nature of this discreteness. To be able to calculate (or even represent) |ψ⟩ ≡ ...P(Ai; τi)...P(A2; τ2)P(A1; τ1)|ψS, τ0⟩,

L ≡ {τ1, τ2, ...τi, ...} must have a least element (which should not be less than τ0), and for every τ ∈ L, the set of elements of L that are greater than τ must have a least element; in other words, L must be well ordered. Under these conditions, we have |ψ1⟩ ≡ P(A1; τ1)|ψS, τ0⟩, |ψ2⟩ ≡ P(A2; τ2)|ψ1⟩, etc. If the sequence is finite, terminating at n, then |ψ⟩ = |ψn⟩. Otherwise, |ψ⟩ is the limit of the sequence. The required probability is then ⟨ψ|ψ⟩. Similarly, to be able to calculate O ≡ P(A1; τ1)P(A2; τ2)...P(Ai; τi)...P(Ai; τi)...P(A2; τ2)P(A1; τ1), L ≡ {τ1, τ2, ...τi, ...} must have a greatest element, and for every τ ∈ L, the set of elements of L that are less than τ must have a greatest element; in other words, L must be upwardly well ordered. Under these conditions, the probability is ⟨ψS, τ0|O|ψS, τ0⟩. Because we are interested in partitions with total probability that's guaranteed to be 1 based only on the structure of the measurements, and independent of the details of the inner products, we are limited to partitions that are determined on parameter sets of this second type. It should be noted, however, that this further qualification is of little consequence, because all finitely determined partitions are allowed, and countable cases can be taken as the limit of a sequence of finite cases.

Definition 146. If L is a subset of a parameter, it is upwardly well-ordered if every non-empty subset of L has a greatest element. If π is a partition of S, π ∈ S if π is countable and for some upwardly well-ordered, bounded from below L, π is determined on L. For σ ∈ S, L(σ) is the set of upwardly well-ordered, bounded from below sets, l, s.t. σ is determined on l.

From here on out, "discretely determined" will refer to being determined on an L that's upwardly well-ordered and bounded from below. Regardless of how "discrete" is interpreted, being limited to discretely determined measurements is a surprisingly strong constraint. To see this, consider the following simple type of measurement:

Definition 147. If S is a dynamic space and σ ⊂ S, σ is a moment if for all τ, σ is either bbτ or baτ.

Assume that σ is bbτ and for all τ′ > τ, σ is baτ′. This makes σ a moment. If σ is baτ then σ is a measurement of the system state at τ. Otherwise, if σ is not baτ, σ may be thought of as a measurement of the system's state and its rate of change. Colloquially, it may be thought of as a measurement of position and velocity[7]. One limitation of the Hilbert Space formulation of quantum physics is that it can only be used to describe measurements of system state alone.

Theorem 148. If X is a pairwise-disjoint collection of moments of a dynamic space, then X is compatible.

Proof. Moments of a dynamic space are clearly companionable. Since X is pairwise-disjoint, it only remains to show that for all σ, σ′ ∈ X, if τ ∈ (∪σ) ∩ (∪σ′) and p ∈ σ(τ) ∩ σ′(τ) then σ(τ,p) = σ′(τ,p).
A: If τ ∈ (∪σ) ∩ (∪σ′) and σ ≠ σ′ then σ is bbτ.
- Assume σ[τ,→] ∩ σ′[τ,→] ≠ ∅ and σ ≠ σ′. If σ is baτ then σ = σ′, which contradicts X being pairwise-disjoint. Since σ is a moment, and is not baτ, it is bbτ.
Assume τ ∈ (∪σ) ∩ (∪σ′) and σ ≠ σ′; then by (A), σ is bbτ. Similarly, because σ′ ≠ σ, σ′ is bbτ. Therefore, for any p ∈ σ(τ) ∩ σ′(τ), σ(τ,p) = σ′(τ,p).

It follows that any partition composed of moments is an ip. We can reasonably assume that all moment measurements can be performed; indeed, they appear to be quite useful. However, they can not all be represented using the quantum formalism. This represents a rather severe limitation in the mathematical language of quantum physics.

In spite of such limitations, there is one way in which S may be considered overly inclusive. Dynamic sets are a very broad concept, which can make them unwieldy to use. To rein in their unruliness, we generally assume that they can be decomposed into ips, and that these ips are related to the possible measurements on the set. The subset of S containing elements that support such decompositions will prove useful.

Definition 149. If S is a dynamic set, S is the set of σ ∈ S s.t. for every τ there's an ip of S, π, s.t. for some σ′ ∈ π, σ ⊂ σ′. If D is a dynamic space then the two collections coincide.

In the quantum formalism, when σ ∈ S, it is S that is of interest, for if σ is an outcome of a quantum measurement, then it must be an element of an ip. To see why, start with a time ordered product of projection operators corresponding to the measurement, ...P(Ai; τi)...P(A2; τ2)P(A1; τ1). Define P̄(Aj; τj) ≡ I − P(Aj; τj); with Xj equal to either Aj or Āj, form the set of all time ordered products of the form ...P(Xi; τi)...P(X2; τ2)P(X1; τ1). This set corresponds to an ip.

Theorem 150. If S is a dynamic set and σ ∈ S then σ is companionable.

Proof. Take π to be an ip and σ ∈ π. If p̄1, p̄2 ∈ σ and p̄1(τ) = p̄2(τ) then p̄ ≡ p̄1[←,τ] + p̄2[τ,→] ∈ σ. Take any L ∈ L(σ). For all τ ∈ L, p̄(τ) ∈ σ(τ), so p̄ ∈ σ, and so σ is a subspace. With τ0 = glb(L), there must be a τ ≤ τ0 s.t. σ is sbbτ. Since σ is determined on L, σ is then also sbbτ. Taking τ1 to be L's greatest element, σ is wbaτ1.

Theorem 151. If S is a dynamic set then given any σ, σ′ ∈ S s.t. σ(τ,p) ∩ σ′(τ,p) ≠ ∅, if p̄1[←,τ] ∈ σ(τ,p) and p̄2[τ,→] ∈ σ′(τ,p) then p̄1[←,τ] + p̄2[τ,→] ∈ S.

Proof. Take p̄[τ,→] ∈ σ(τ,p) ∩ σ′(τ,p), p̄′ ≡ p̄[←,τ] + p̄2[τ,→]. Since σ′ is a subspace, p̄′ ∈ S. Take π to be an ip and σ′′ ∈ π s.t. p̄′ ∈ σ′′. p̄′[τ,→] ∈ σ′′(τ,p), so σ′′(τ,p) = σ(τ,p), and so p̄1[←,τ] + p̄[τ,→] ∈ σ′′. Since σ′′ is a subspace and p̄′[τ,→] = p̄2[τ,→] ∈ σ′′(τ,p), p̄1[←,τ] + p̄2[τ,→] ∈ S.
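The two equivalent probability computations described in this subsection — sequentially applying the projectors to the state, or forming the single time-ordered operator O — can be sketched numerically. This is a toy model invented for illustration (a random 4-level Hamiltonian with Heisenberg-picture projectors P(A; τ) = U†(τ)P(A)U(τ) onto invented basis-state regions); nothing here is prescribed by the text:

```python
# Toy sketch: compute one outcome's probability both as <psi|psi> of the
# sequentially projected state and as <psi_S|O|psi_S> with O the single
# time-ordered operator. Model and regions are invented for illustration.
import numpy as np

dim = 4
rng = np.random.default_rng(0)
M = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
H = (M + M.conj().T) / 2                       # toy Hermitian Hamiltonian
w, V = np.linalg.eigh(H)

def U(tau):                                    # evolution operator exp(-i H tau)
    return V @ np.diag(np.exp(-1j * w * tau)) @ V.conj().T

def P(region, tau):                            # Heisenberg projector P(A; tau)
    P0 = np.diag([1.0 + 0j if k in region else 0.0 for k in range(dim)])
    return U(tau).conj().T @ P0 @ U(tau)

psi0 = np.zeros(dim, complex)
psi0[0] = 1.0                                  # |psi_S, tau_0>

P1, P2 = P({0, 1}, 0.5), P({1, 2}, 1.2)       # A1 at tau1, then A2 at tau2

psi = P2 @ (P1 @ psi0)                         # |psi> = P(A2;tau2)P(A1;tau1)|psi_S>
prob_state = np.vdot(psi, psi).real            # <psi|psi>

O = P1 @ P2 @ P2 @ P1                          # P(A1)P(A2)P(A2)P(A1)
prob_op = np.vdot(psi0, O @ psi0).real         # <psi_S, tau_0|O|psi_S, tau_0>

print(prob_state, prob_op)
```

The two numbers agree because ⟨ψ|ψ⟩ = ⟨ψS, τ0|Π†Π|ψS, τ0⟩, with Π the time-ordered product of the projectors.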

3. Interconnected Dynamic Sets

An all-reflect e-automata can not forget anything it's ever known about the system. Under the right conditions, however, an e-automata can discover something about the system's past that is not implied by the current state. This is most readily seen if the system parameter is discrete. For an e-automata with a discrete parameter, assume that at τ the e-automata is in state (sτ, eτ) and at τ+1 it's in state (sτ+1, eτ+1). It would be reasonable (though not necessary) to assume that the system state sτ is not reflected in eτ, and doesn't get reflected in the environment until eτ+1. In this case, it takes one step in time for the environment to learn about the system. More generally, of course, the set of allowed environmental states at τ+1 will be a function of both sτ and sτ+1, as well as eτ, but this still asserts that it is possible to learn something about the system's past that's not reflected in the current state of the system. If the parameter is continuous and the measurements are at discrete times we wouldn't expect this to happen, though it can; the environment may gain information at τ2 about

the state of the system at τ1 that isn't implied by the state of the system at τ2. In order for this to occur the system itself would have to remember something about its state at τ1 until τ2, at which point the knowledge is simultaneously passed to the environment and forgotten by the system. This is clearly an edge case, but a pernicious one, as it allows for anomalous ips with discretely determined measurements; in order to regularize this set of ips, this edge case needs to be eliminated. For it to be eliminated, the system dynamics need to be interconnected, a property that will be defined first for dynamic spaces, and then for dynamic sets.

Definition 152. A dynamic space, D, is interconnected if for all τ1 < τ2, every p1, p′1 ∈ D(τ1), p2, p′2 ∈ D(τ2) s.t. (τ1, p1) ∩ (τ2, p2) ≠ ∅, (τ1, p1) ∩ (τ2, p′2) ≠ ∅, (τ1, p′1) ∩ (τ2, p2) ≠ ∅, and (τ1, p′1) ∩ (τ2, p′2) ≠ ∅ (writing (τ, p) for the set of paths through p at τ), there exists a τ ∈ [τ1, τ2] and a p ∈ D(τ) s.t. (τ1, p1) ∩ (τ, p) ≠ ∅, (τ1, p′1) ∩ (τ, p) ≠ ∅, (τ, p) ∩ (τ2, p2) ≠ ∅, and (τ, p) ∩ (τ2, p′2) ≠ ∅.

Being interconnected is equivalent to saying that if p1 ≠ p′1 then {(τ1, p1), (τ1, p′1)} ∩ (τ2, p2), (τ1, p1) ∩ (τ2, p′2), and (τ1, p′1) ∩ (τ2, p′2) can not be mutually compatible. Essentially, for any p̄[←,τ1] ∈ (τ1, p1), p̄′[←,τ1] ∈ (τ1, p′1), if p̄[←,τ1] and p̄′[←,τ1] have not been distinguished by τ1, and there is no measurement between τ1 and τ2, then they can not be distinguished at τ2, because the point (τ, p) destroys the ability to distinguish them. Nearly all actively studied dynamic systems are interconnected, and quantum systems always are. Now to generalize interconnectedness to dynamic sets.

Definition 153. If S is a dynamic set and p̄11, p̄21, p̄12, p̄22 ∈ S, (p̄11, p̄21, p̄12, p̄22) is an interconnect on [τ1, τ2] if p̄11[←,τ1] = p̄12[←,τ1], p̄21[←,τ1] = p̄22[←,τ1], p̄11[τ2,→] = p̄21[τ2,→], and p̄12[τ2,→] = p̄22[τ2,→].

The paths p̄11, p̄21, p̄12, p̄22 can be thought of as sample paths from the sets (τ1, p1) ∩ (τ2, p2), etc., that were used in the dynamic space definition of interconnectedness. Because e-automata are unbiased, if there are no measurements on these paths between τ1 and τ2 then S+{p̄11[←,τ1], p̄21[←,τ1]} should behave like a dynamic space in [τ1, τ2].

Definition 154. If A is a dynamic set, [τ1, τ2] is a space-segment of A if for all p̄ ∈ A, all [τ, τ′] ⊂ [τ1, τ2], and all p̄′[τ, τ′] ∈ A(τ,p̄(τ))(τ′,p̄(τ′)): p̄[←,τ] + p̄′[τ, τ′] + p̄[τ′,→] ∈ A.

If an experiment has not distinguished between the paths of an interconnect on [τ1, τ2] by τ1, and there is no measurement between τ1 and τ2, then interconnectedness will insure that they can not be distinguished at τ2:

Definition 155. A dynamic set, S, is interconnected if for every τ1 < τ2, every interconnect on [τ1, τ2], (p̄11, p̄21, p̄12, p̄22), s.t. [τ1, τ2] is a space-segment of S+{p̄11[←,τ1], p̄21[←,τ1]}, there exists a τ ∈ [τ1, τ2] and an interconnect on [τ, τ2], (p̄′11, p̄′21, p̄′12, p̄′22), s.t. p̄′ij[←,τ1] = p̄ij[←,τ1], p̄′ij[τ2,→] = p̄ij[τ2,→], and all p̄′ij(τ) = p̄′mn(τ).

[τ1, τ2] being a space-segment of S+{p̄11[←,τ1], p̄21[←,τ1]} helps to insure that there exists an interconnect, (p̄′11, p̄′21, p̄′12, p̄′22), rather than just four paths that share the same point. If S is a dynamic space, this definition is equivalent to the prior one. Finally, to make interconnectedness more readily applicable to S for the case when the system is not a dynamic space:

Definition 156. If L is upwardly well-ordered, and τ is not a lower-bound of L, predL(τ) ≡ lub(L ∩ [←, τ)).

predL(τ) is short for "predecessor of τ on L".

Definition 157. For σ ∈ S, L′(σ) is the set of L ∈ L(σ) s.t. for all τ ∈ L ∖ {glb(L)}, [predL(τ), τ] is a space-segment of σ+[←, predL(τ)]. σ ∈ sS if σ ∈ S and L′(σ) ≠ ∅.

For quantum systems, the dynamics between successive measurements is always a space-segment, and so when discussing measurements only the elements of sS are of interest. For dynamic spaces, sS = S.

B. Partitions of Unity for Quantum Systems

A partition with total probability of 1 will be given the fancy title of being a partition of unity. In this section, we consider the quantum partitions of unity. When discussing quantum probabilities there are two cases to be considered: the conditional case, where an initial state is known, and the non-conditional case, where no initial 62

state is known & everything about the system is discovered via the experiment. We will start by considering the conditional case.

1. The Conditional Case

Definition 158. If π is a partition of dynamic set S and p̄1, p̄2 ∈ S, p̄1 and p̄2 are co-located on π if there's a σ ∈ π s.t. p̄1, p̄2 ∈ σ; p̄1 and p̄2 converge at τ if p̄1[τ,→] = p̄2[τ,→].

The central object of study in this section will be the set of partitions qS:

Definition 159. For dynamic set S, π ∈ qS if π ∈ sS and for some L ∈ L(π), all τ1, τ2 ∈ L s.t. τ1 = predL(τ2), all p̄1[←,τ1], p̄2[←,τ1] ∈ S[←,τ1], either all p̄ ∈ S+p̄1[←,τ1], p̄′ ∈ S+p̄2[←,τ1] that converge at τ2 are co-located on π, or none are.

In the following claim, the total probability of a set of outcomes is guaranteed to equal 1 if it's known to be 1 based only on the structure of the outcomes, and independent of the details of the transition probabilities, including the choice of initial state.

Claim 160. For quantum probabilities in the conditional case, the sum over probabilities of a set of outcomes is guaranteed to equal 1 iff the set of outcomes is an element of qS.

Remark. It has already been argued that quantum outcomes are elements of sS. The rest of the claim may be justified as follows. Quantum outcomes are represented by time ordered products of projection operators. Let's take L to be the set of times at which a projection operator is applied, and define τ1 to be the largest element of L, τ2 to be the largest element of L ∖ {τ1}, etc. (Note that as the subscript increases, the time decreases.) An outcome is then represented by Πi = ...P(Aij; τj)...P(Ai1; τ1), where Aij ⊂ S(τj), the i subscript identifies the outcome, and the j subscript identifies the time. If the set of outcomes forms a partition then the total probability is 1 regardless of the choice of the initial state iff Σi ⟨ψ, τ0|Πi†Πi|ψ, τ0⟩ = 1 for all initial states |ψ, τ0⟩, which holds iff Σi Πi†Πi = I (I being the identity operator).

For any τj ∈ L, define ωj by (..., sk, ..., sj) ∈ ωj if for all τq < τr ≤ τj (τq, τr ∈ L) the transition (sq, τq) → (sr, τr) is allowed; further, define ω′j by (..., sk, s′k, ..., sj+1, s′j+1, sj) ∈ ω′j if
1) (..., sk, ..., sj+1, sj) ∈ ωj and (..., s′k, ..., s′j+1, sj) ∈ ωj
2) For some outcome, i, sj ∈ Aij and for all k > j (s.t. τk ∈ L) sk, s′k ∈ Aik.

Σi Πi†Πi = I can then be expanded to

Condition 1: ∫_{ω′1} ds1 ds2 ds′2 ... |s′2, τ2⟩⟨s′2, τ2|s1, τ1⟩⟨s1, τ1|s2, τ2⟩⟨s2, τ2|... = I

Start by integrating over s1; the above identity can not hold unless for every (..., s2, s′2) ≡ (s̃, s̃′) ∈ Dom(ω′1) either ⟨s′2, τ2|( ∫_{(s̃,s̃′,s1)∈ω′1} ds1 |s1, τ1⟩⟨s1, τ1| )|s2, τ2⟩ = 1 or ⟨s′2, τ2|( ∫_{(s̃,s̃′,s1)∈ω′1} ds1 |s1, τ1⟩⟨s1, τ1| )|s2, τ2⟩ = 0. When s̃ = s̃′ the first equality clearly holds. For s̃ ≠ s̃′, the only way that we could have an s1 s.t. (s̃, s1) ∈ ω1 and (s̃′, s1) ∈ ω1, but (s̃, s̃′, s1) ∉ ω′1, and still have one of these equalities hold is if either ⟨s2, τ2|s1, τ1⟩ = 0 or ⟨s′2, τ2|s1, τ1⟩ = 0 (this makes the claim of (s̃, s1) ∈ ω1 and (s̃′, s1) ∈ ω1 uncomfortable, but the situation can not be ruled out). Since we are interested in the conditions under which the probabilities are guaranteed to sum to 1 regardless of the details of inner products, we eliminate this last case and are led to:

Condition 2: If (s̃, s̃′) ∈ Dom(ω′1) then for all s1 s.t. (s̃, s1) ∈ ω1 and (s̃′, s1) ∈ ω1, (s̃, s̃′, s1) ∈ ω′1. Which is equivalent to applying the qS condition at τ1.

Applying Condition 2 to Condition 1, Condition 1 is reduced to:

Condition 3: ∫_{ω′2} ds2 ds3 ds′3 ... |s′3, τ3⟩⟨s′3, τ3|s2, τ2⟩⟨s2, τ2|s3, τ3⟩⟨s3, τ3|... = I

Which is identical in form to Condition 1. Therefore, to satisfy Condition 1, it is sufficient for all elements of L to satisfy Condition 2 (with the 1 subscript replaced with i). Note that Condition 2 was imposed as a necessary condition for Condition 1 to hold; we now see that it's a necessary and sufficient condition. Finally, note that requiring Condition 2 on all τi is equivalent to requiring that the set of outcomes is an element of qS.

Theorem 161. For dynamic set S
1) If π ∈ qS then π is nearly compatible
2) If S is interconnected, π ∈ sS, and π is nearly compatible then π ∈ qS

Proof. By Thm 150, for all π ∈ sS, all σ ∈ π are companionable.
1) For π ∈ qS, assume σ, σ′ ∈ π, σ(τ,p) ∩ σ′(τ,p) ≠ ∅, and σ ≠ σ′. τ can not be an upper-bound on any L ∈ L(π). Take any p̄[τ,→] ∈ σ(τ,p) ∩ σ′(τ,p), p̄1[τ,→] ∈ σ(τ,p), and p̄2[τ,→] ∈ σ′(τ,p). Define p̄′1 ≡ p̄[←,τ] + p̄1[τ,→] and p̄′2 ≡ p̄[←,τ] + p̄2[τ,→] (these exist by Thm 151). Applying the definition of qS to any τ′ ∈ L s.t. τ′ > τ, it follows that p̄′2 ∈ σ. Therefore σ(τ,p) = σ′(τ,p).
2) Assume π ∈ sS is nearly compatible. Take any L ∈ L(π), any τ1 ∈ L s.t. τ1 ≠ glb(L), and define τ0 ≡ predL(τ1). Take any σ ∈ π and any p̄1, p̄2 ∈ σ that converge at τ1 (if they exist). Take p̄3, p̄4 ∈ S such that p̄3[←,τ0] = p̄1[←,τ0], p̄4[←,τ0] = p̄2[←,τ0], and p̄3 and p̄4 converge at τ1. Further take p̄3 ∈ σ′ ∈ π. The theorem is proved if p̄4 ∈ σ′. Because (p̄1, p̄2, p̄3, p̄4) is an interconnect on [τ0, τ1], and S is interconnected, there's a τ ∈ [τ0, τ1] and p̄′1, p̄′2, p̄′3, p̄′4 ∈ S s.t. p̄′i[←,τ0] = p̄i[←,τ0], p̄′i[τ1,→] = p̄i[τ1,→], p̄′1[τ,→] = p̄′3[τ,→], p̄′2[τ,→] = p̄′4[τ,→], and all p̄′i(τ) = p̄′j(τ) = p. Since p̄i and p̄′i are equal on L, and all elements of π are determined on L, they are co-located on π. Therefore p̄′1, p̄′2 ∈ σ and p̄′3 ∈ σ′. Since p̄′1[τ,→] = p̄′3[τ,→] ∈ σ(τ,p) ∩ σ′(τ,p) and π is nearly compatible, σ(τ,p) = σ′(τ,p). Since p̄′4[τ,→] = p̄′2[τ,→] ∈ σ(τ,p), p̄′4[τ,→] ∈ σ′(τ,p). Since p̄′4[τ1,→] = p̄4[τ1,→] = p̄3[τ1,→], for all τ′ ∈ L, p̄′4(τ′) ∈ σ′(τ′), so p̄′4 ∈ σ′. Since p̄4 and p̄′4 are equal on all L, p̄4 ∈ σ′.
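The operator condition Σi Πi†Πi = I from the remark to Claim 160 can be checked numerically for the family of outcomes built by choosing, at each measurement time, either a region or its complement. A minimal sketch (toy Hamiltonian and regions invented for illustration, not from the text):

```python
# Toy check: for the 2^n outcomes formed from a region or its complement at
# each of two times, sum_i Pi^dagger Pi equals the identity matrix, with no
# reference to any initial state. Model and regions are invented.
import numpy as np

dim = 3
rng = np.random.default_rng(1)
M = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
H = (M + M.conj().T) / 2                       # toy Hermitian Hamiltonian
w, V = np.linalg.eigh(H)

def U(tau):                                    # exp(-i H tau) via the eigenbasis
    return V @ np.diag(np.exp(-1j * w * tau)) @ V.conj().T

def proj(region, tau):                         # Heisenberg-picture P(A; tau)
    P0 = np.diag([1.0 + 0j if k in region else 0.0 for k in range(dim)])
    return U(tau).conj().T @ P0 @ U(tau)

A1, A2 = {0}, {0, 2}                           # toy regions at tau1, tau2
outcomes = []
for X1 in (A1, set(range(dim)) - A1):          # A1 or its complement
    for X2 in (A2, set(range(dim)) - A2):      # A2 or its complement
        outcomes.append(proj(X2, 0.9) @ proj(X1, 0.3))

total = sum(Pi.conj().T @ Pi for Pi in outcomes)
print(np.allclose(total, np.eye(dim)))
```

The identity holds as a matrix equation, independent of any initial state, which is why this family of outcomes is guaranteed to be a partition of unity.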

2. The Non-Conditional Case

For the non-conditional case, no initial state is assumed. Once again representing each outcome, i, as a product of projection operators, Πi = ...P(Aij; τj)...P(Ai1; τ1), the probability of a given outcome is

Pi = Tr(Πi†Πi) / Tr(I)

(ignoring complications that arise if Tr(I) is infinite). This can be arrived at by assuming that, if the initial state is unknown, then the probability is the average for all possible initial states: Given any initial state, (s, τ0), the probability for obtaining outcome i is Pi|(s,τ0) = ⟨s, τ0|Πi†Πi|s, τ0⟩. With V the volume of S(τ0), the average probability is then (1/V) Σ_{s∈S(τ0)} ⟨s, τ0|Πi†Πi|s, τ0⟩. Since V = Tr(I) and Σ_{s∈S(τ0)} ⟨s, τ0|Πi†Πi|s, τ0⟩ = Tr(Πi†Πi), this average probability is equal to Pi given above. In the non-conditional case, the total probability is therefore guaranteed to be 1 iff

Σi Tr(Πi†Πi) = Tr(I).
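A quick numeric check of the trace formula (same kind of toy model as elsewhere in our sketches; regions and times invented): the non-conditional probability Tr(Πi†Πi)/Tr(I) equals the average of the conditional probabilities over an orthonormal set of initial states, and the probabilities of a complete family of sign choices sum to 1.

```python
# Toy check of the non-conditional probability: trace formula vs. average
# over orthonormal initial states, and total probability 1 for a complete
# family of outcomes. The model and regions are invented for illustration.
import numpy as np

dim = 3
rng = np.random.default_rng(2)
M = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
H = (M + M.conj().T) / 2
w, V = np.linalg.eigh(H)

def U(tau):
    return V @ np.diag(np.exp(-1j * w * tau)) @ V.conj().T

def proj(region, tau):                         # Heisenberg-picture P(A; tau)
    P0 = np.diag([1.0 + 0j if k in region else 0.0 for k in range(dim)])
    return U(tau).conj().T @ P0 @ U(tau)

A1, A2 = {0, 1}, {1}
Pi = proj(A2, 1.0) @ proj(A1, 0.4)             # one outcome's operator

trace_prob = np.trace(Pi.conj().T @ Pi).real / dim          # Tr(Pi^†Pi)/Tr(I)
basis_avg = np.mean([np.vdot(e, Pi.conj().T @ Pi @ e).real  # average over
                     for e in np.eye(dim)])                 # basis initial states

# total over the four sign choices (region or complement at each time)
total = 0.0
for X1 in (A1, set(range(dim)) - A1):
    for X2 in (A2, set(range(dim)) - A2):
        Op = proj(X2, 1.0) @ proj(X1, 0.4)
        total += np.trace(Op.conj().T @ Op).real / dim

print(trace_prob, basis_avg, total)
```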

Note that the assumption that the non-conditional probability is equal to the average of the conditional probabilities need not hold for a dps; there is nothing in the nature of the dps that demands it. The assumption is equivalent to a certain kind of additivity: Take σ to be an outcome that's bbτ, define x ≡ σ(τ), and select x1, x2 s.t. x1 ∪ x2 = x and x1 ∩ x2 = ∅; define σ1 ≡ σ ∩ S(τ,x1) and σ2 ≡ σ ∩ S(τ,x2); the assumption being made is that under such circumstances, P(σ) = P(σ1) + P(σ2). We would expect this to hold if σ is bounded from above by τ, but not if it's bounded from below. To see that it will hold when bounded from above, assume σ is baτ (so the measurement of {x1, x2} would be the final measurement, rather than the initial one), take π to be any ip s.t. σ ∈ π, and define A ≡ π ∖ {σ}. A ∪ {σ1, σ2} is an ip. If P(A) is unchanged by whether A is paired with σ or {σ1, σ2} then P(σ) = P(σ1) + P(σ2). Call this property additivity of final state; call the bounded from below case additivity of initial state. Achieving additivity of initial state is not quite as straightforward as additivity of final state. One obvious way to do so is to assume additivity of final state and reversibility. There are, however, other ways for the behavior to be realized. (See Appendix C for the definition of reversibility on a DPS.)

C. Maximal Quantum Systems

The foundations of quantum physics may be thought of as being composed of three interconnected parts: measurement theory, probability theory, and probability dynamics. This article has largely been concerned with creating a mathematical language sufficient for utilization in the measurement theory & that portion of the probability theory implied by the measurement theory. One does not expect a scientific theory to follow immediately from the mathematical language that it uses, unless the theory is fairly trivial. For this reason, Thm 161 is quite evocative. Thm 161 says an interconnected dynamic system will possess the basic character of a quantum system if the finite, discretely-determined portion of its dps can be embedded in a

-maximal dps. The question now is, under what conditions is such an embedding possible? All countable, discretely-determined ips will have consistent probabilities if the system is countably additive in its final state. As mentioned earlier, it may be assumed that a dynamic system has this property. To see why it is sufficient, start by choosing any τ, and any countable partition of S(τ). By assumption, the total probability of the associated outcomes will be 1. For any of these outcomes, optionally choose any τ′ > τ, and any countable partition of S(τ′); by assumption, the total probability of these new outcomes will equal the probability of the original outcome. Continuing in this way, one can create all countable, discretely-determined ips of the dynamic space. To understand when such a dps could be maximal, consider a simple case of a set of outcomes that are nearly compatible, but not compatible. Assume there are p1,0, p1,1, p1,2 ∈ S(τ1) and p2,0, p2,1, p2,2 ∈ S(τ2) (τ1 < τ2) s.t. each of the p2,j's can be reached from (τ1, p1,j) and (τ1, p1,(j+1) mod 3), but not (τ1, p1,(j+2) mod 3). For p2,0 and p2,1 form outcomes consisting of both of the p1,i that can reach them: O0 = (τ1, {p1,0, p1,1}) ∩ (τ2, p2,0) and O1 = (τ1, {p1,1, p1,2}) ∩ (τ2, p2,1). For p2,2 form outcomes from p1,2 and p1,0 individually: O21 = (τ1, p1,2) ∩ (τ2, p2,2) and O22 = (τ1, p1,0) ∩ (τ2, p2,2). The collection of these 4 outcomes is nearly compatible, but not compatible. That they are nearly compatible can be seen from the fact that they satisfy the qS condition; that they are not compatible can be seen from the fact that while the combination of the first two outcomes implies that only {p1,0, p1,1, p1,2} was measured at τ1, the second two imply that p1,2 was distinguished from p1,0 at τ1. The inclusion of such sets of outcomes into a t-algebra would make probabilities more highly additive; in particular, it would weaken the non-additivity of double-slit experiments.
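The mod-3 example can be made concrete with a small reachability table. The sketch below is pure bookkeeping — the encoding of outcomes as (set of τ1 points, τ2 point) pairs is our own, invented for illustration:

```python
# Sketch of the mod-3 example: p2_j is reachable from p1_j and p1_{(j+1) mod 3}.
# Outcomes are encoded as (frozenset of tau1 points, tau2 point) pairs.

def reach(j):                                  # tau1 footprint of p2_j
    return frozenset({j, (j + 1) % 3})

O0 = (frozenset({0, 1}), 0)                    # both points that can reach p2_0
O1 = (frozenset({1, 2}), 1)                    # both points that can reach p2_1
O21 = (frozenset({2}), 2)                      # p1_2 alone
O22 = (frozenset({0}), 2)                      # p1_0 alone
outcomes = [O0, O1, O21, O22]

# each outcome only uses tau1 points that can actually reach its tau2 point
assert all(pts <= reach(p2) for pts, p2 in outcomes)

# O21 and O22 split the footprint of p2_2 into disjoint pieces
assert O21[0].isdisjoint(O22[0])
assert O21[0] | O22[0] == reach(2)

# ...yet O0 and O1 overlap at tau1 (they share p1_1)
print(O0[0] & O1[0], O21[0] | O22[0])
```

O0 and O1 overlap at τ1 while O21 and O22 split p1,2 from p1,0 there: the two pairs disagree about what was distinguished at τ1, which is why the collection is nearly compatible but not compatible.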
To see this, take the simplest case of S(τi) = {pi,0, pi,1, pi,2}; because {O0, O1, O21 ∪ O22} is compatible, and would certainly be on the t-algebra, we'd then have P(O21 ∪ O22) = P(O21) + P(O22), rendering the probabilities of this particular double-slit experiment entirely additive. However, if there exists a p ∈ S(τ2) that can be reached by p1,0, p1,1, and p1,2, then the qS condition entails that there can be no element of qS that contains O0, O1, O21, and O22; in this case a maximal t-algebra can not include {O0, O1, O21, O22}, and so will not demand P(O21 ∪ O22) = P(O21) + P(O22). This illustrates how the finite & finitely determined portion of a dps can be maximal even if its probabilities are highly non-additive. If the paths of a dynamic system form a richly interlocking network, then the elements of qS that are not ips will be limited, making it more likely that the system will be maximal. A rich network of paths will not limit the kinds of countable, discretely determined experiments that can be performed; it simply limits the amount of probabilistic information that can be extracted from them. A similar effect was seen previously when considering interconnectedness. For non-deterministic systems, it is difficult to distinguish sets of paths that can not occur from sets of paths that occur with probability 0. If the approach to system dynamics is to include paths unless they can be ruled out in principle, and let the probability function handle the rest, then the network of interlocking paths will be enriched. This will generally cause the discretely determined portion of such systems to be maximal. Let's see this in detail.

Definition 162. If π1 and π2 are partitions of some set, S, π1 ≤ π2 if for all σ ∈ π1 there's a σ′ ∈ π2 s.t. σ ⊂ σ′.

As mentioned above, partitions with total probability of 1 can be iteratively created by taking any π ∈ sS with probability known to be 1, and for each σ ∈ π, selecting some τ s.t. σ is baτ, and slicing up σ by partitioning σ(τ). However, not all partitions can be formed in this manner. If the partition is not compatible, and so can not be decided by an ideal e-automata, it is allowed to forget at some τ what happened prior to τ. In such cases, to form the new partition, you wouldn't simply take the elements of π and append measurements at τ; a further step would also be possible: for each measurement outcome at τ, multiple elements of π may be combined into a single outcome. More precisely, select some countable set of πi such that each πi ∈ sS. For every p ∈ S(τn), select a πi. For every πi, every σ ∈ πi, create a countable partition of the set of p ∈ S(τn+1) that selected πi, Xi,σ, and for every A ∈ Xi,σ form the new outcome σ(τn,A). The set of all such outcomes forms a new element of sS.

Under these general conditions, we can not comfortably assume that our new partition has a total probability of 1. The qS condition may be seen as a constraint on this formation process; whenever a πi is selected for one p ∈ S(τ), the qS condition constrains what may happen at the other elements of S(τ). If this constraint implies that for all π ∈ qS, all τ s.t. π is baτ, all p ∈ S(τ) must select the same πi, and the selected πi must itself be an element of qS, then we can expect all elements of qS to have a total probability of 1.

To understand the eect that combining outcomes at one p S () has at the other points in S (), it is sucient to consider the combination of two outcomes. Since the two outcomes, , , must form a discretely-determined set when combined into chosen so that for all but one element of L, = (,()
())

, they must be
()) ;

and = (,()

if = , then () and () must be disjoint at the one remaining L. Now to see the eect of the qS condition. Lets start by considering a pair of outcomes determined at a single , S(1 ,A) and S(1 ,B) , and assume that at p S (2 ) these two are combined to form S(1 ,A
B )(2 ,p) .

The qS condition demands that A and B must

then also be combined for the outcomes at other elements of S (). To see which ones, some further denitions will be helpful. (Note that S(1 ,A)(2 ,p) (1 ) is the set of elements of A that can reach p.) Denition 163. If S is a dynamic set, 1 2 , A S (1 ), p S (2 ) then (A, 1 ) (p, 2 ) S(1 ,A)(2 ,p) (1 ) For X S (2 ) , (A, 1 ) (X, 2 ) S(1 ,A)(2 ,X ) (1 ). (A, 1 ) (p, 2 ) may be thought of as ps footprint in A. Denition 164. p [p, 2 ; A, 1 ] if (A, 1 ) (p , 2 ) (A, 1 ) (p, 2 ) = .

p' ∈ [p,λ2; A,λ1] if p and p's footprints in A overlap, so p' ∈ [p,λ2; A,λ1] if there exists an element of A that can reach both p and p'. According to the qS condition, if outcomes A and B combine at p, and p' ∈ [p,λ2; A,λ1] ∩ [p,λ2; B,λ1], then they must combine at p'. If p'' ∈ [p',λ2; A,λ1] ∩ [p',λ2; B,λ1], they then must also combine at p''. This leads to:

Definition 165. If S is a dynamic set, λ1 ≤ λ2, p ∈ S(λ2), and A and B are disjoint subsets of S(λ1):

⟨p,λ2; A,B,λ1⟩0 ≡ [p,λ2; A,λ1] ∩ [p,λ2; B,λ1]

⟨p,λ2; A,B,λ1⟩n+1 ≡ ∪_{p' ∈ ⟨p,λ2;A,B,λ1⟩n} ⟨p',λ2; A,B,λ1⟩0

⟨p,λ2; A,B,λ1⟩ ≡ ∪_{n∈N} ⟨p,λ2; A,B,λ1⟩n

If A and B combine at p, then the qS condition demands that they must combine at all elements of ⟨p,λ2; A,B,λ1⟩. There may be cases of p' ∈ S(λ2) s.t. for some p1 ∈ ⟨p,λ2; A,B,λ1⟩, (A,λ1)∩(p',λ2) ∩ (A,λ1)∩(p1,λ2) ≠ ∅, and for some p2 ∈ ⟨p,λ2; A,B,λ1⟩, (B,λ1)∩(p',λ2) ∩ (B,λ1)∩(p2,λ2) ≠ ∅, but there are no elements of ⟨p,λ2; A,B,λ1⟩ for which both hold, and so p' is not an element of ⟨p,λ2; A,B,λ1⟩. This is a generalization of the example seen earlier. If such cases occur, the qS condition will be insufficient; they are eliminated if paths form a densely interlocking network. More precisely:

1) Form the footprints of ⟨p,λ2; A,B,λ1⟩ in A and B:
x ≡ (A,λ1)∩(⟨p,λ2; A,B,λ1⟩, λ2) and y ≡ (B,λ1)∩(⟨p,λ2; A,B,λ1⟩, λ2)

2) Close the footprints with all intersecting footprints. That is, define:
X0 ≡ x
Xn+1 ≡ Xn ∪ ∪{(A,λ1)∩(p',λ2) : (A,λ1)∩(p',λ2) ∩ Xn ≠ ∅}
X ≡ ∪_{n∈N} Xn
Similarly, starting with y, construct Y.

3) For the qS condition to be sufficient: for any p' ∉ ⟨p,λ2; A,B,λ1⟩, if (A,λ1)∩(p',λ2) is contained in X then (B,λ1)∩(p',λ2) must be disjoint from Y, and if (B,λ1)∩(p',λ2) is contained in Y then (A,λ1)∩(p',λ2) must be disjoint from X.

With a ≡ A − X and b ≡ B − Y, combining S(λ1,A)(λ2,p) and S(λ1,B)(λ2,p) under the qS condition then has the effect of replacing {A, B} with {a, b, X ∪ Y}; the resulting partition will therefore have a total probability of 1. It is interesting to note that statement (3) will be satisfied if the paths are either quite dense or quite sparse; only the intermediate case may cause difficulty.

Combining pairs of outcomes that are determined at multiple λ leads to similar conclusions. Once again, in order for the combination to form a discretely-determined set, the two outcomes can only disagree at a single λ. If the two outcomes disagree on the last measurement, then the analysis is little changed from above, except that we need only consider the subset of points in A & B that can be reached from the prior measurements. The trickier case is when a sequence of further measurements comes after the two being combined; for example, the case where S(λ1,A)(λ2,C)(λ3,p) and S(λ1,B)(λ2,C)(λ3,p) are combined. To see what the qS condition demands of the other p ∈ S(λ3) in such cases, it will be helpful to expand some of the above definitions:

Definition 166. If S is a dynamic set, λ1 ≤ λ2, A ⊆ S(λ1), 𝒵 is a set of subsets of S(λ2), and Z ∈ 𝒵:

Z' ∈ [Z, 𝒵, λ2; A, λ1] if Z' ∈ 𝒵 and (A,λ1)∩(Z',λ2) ∩ (A,λ1)∩(Z,λ2) ≠ ∅

If, further, A and B are disjoint subsets of S(λ1):

⟨Z, 𝒵, λ2; A,B,λ1⟩0 ≡ [Z, 𝒵, λ2; A,λ1] ∩ [Z, 𝒵, λ2; B,λ1]

⟨Z, 𝒵, λ2; A,B,λ1⟩n+1 ≡ ∪_{Z' ∈ ⟨Z,𝒵,λ2;A,B,λ1⟩n} ⟨Z', 𝒵, λ2; A,B,λ1⟩0

⟨Z, 𝒵, λ2; A,B,λ1⟩ ≡ ∪_{n∈N} ⟨Z, 𝒵, λ2; A,B,λ1⟩n

If for each p ∈ S(λ3), Zp ≡ S(λ1,A)(λ3,p)(λ2), and 𝒵 ≡ {Zp : p ∈ S(λ3)}, then Zp' ∈ [Zp, 𝒵, λ2; A, λ1] iff p' ∈ [p, λ3; A, λ1]; from this it follows that p' ∈ ⟨p,λ3; A,B,λ1⟩ iff Zp' ∈ ⟨Zp, 𝒵, λ2; A,B,λ1⟩. This allows us to take our earlier analysis on combining S(λ1,A)(λ3,p) and S(λ1,B)(λ3,p), and project it onto any λ2 ∈ (λ1,λ3): if S(λ1,A)(λ3,p) and S(λ1,B)(λ3,p) are combined, then S(λ1,A)(λ3,p') and S(λ1,B)(λ3,p') must be combined if Zp' ∈ ⟨Zp, 𝒵, λ2; A,B,λ1⟩. To extend this to combining S(λ1,A)(λ2,C)(λ3,p) and S(λ1,B)(λ2,C)(λ3,p), simply replace Zp with Zp,C ≡ Zp ∩ C and 𝒵 with 𝒵C ≡ {Zp,C : p ∈ S(λ3)}. If S(λ1,A)(λ2,C)(λ3,p) and S(λ1,B)(λ2,C)(λ3,p) are combined, then S(λ1,A)(λ2,C)(λ3,p') and S(λ1,B)(λ2,C)(λ3,p') must be combined if Zp',C ∈ ⟨Zp,C, 𝒵C, λ2; A,B,λ1⟩. This leads by the same reasoning to the same conclusion: if the network of paths is sufficiently dense (or sparse), the finite and finitely determined partitions that satisfy the qS condition can be expected to have a total probability of 1. It follows that if the dynamic set in a dps follows the rule that paths are included unless they can be excluded in principle, then we can reasonably expect the discretely-determined portion of the dps to be maximal.
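Both iterative constructions above, the sets ⟨p,λ2; A,B,λ1⟩ of Definition 165 and the footprint closures X and Y, follow one pattern: close a starting set under "overlaps with". A minimal Python sketch of that common pattern (the names and toy data are illustrative, not from the text):

```python
# Close a starting set under "overlaps with": repeatedly absorb any
# member of the family that intersects something already absorbed.
# This is the pattern behind Definition 165 and the closure X above.
def overlap_closure(seed, family):
    closed, frontier = set(), {seed}
    while frontier:
        s = frontier.pop()
        closed.add(s)
        for f in family:
            if f not in closed and f & s:  # overlapping footprint
                frontier.add(f)
    return closed

footprints = [frozenset({1, 2}), frozenset({2, 3}),
              frozenset({5, 6}), frozenset({6, 7})]
print(overlap_closure(frozenset({1, 2}), footprints))
# Only the {1,2}-{2,3} chain is absorbed; the disjoint {5,6}-{6,7} chain is untouched.
```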

D. Terminus & Exordium

A word or two is in order about dpss, (X, T, P), for which X is not a singleton. In such cases TN may be larger than the union of the TN_S, and so admit partitions that are not nearly-compatible (and therefore not elements of qS). If the various ({S}, T_S) are copies of one another, in the sense of the reductions introduced in Sec. IV B 4, then TN = ∪_S TN_S. For quantum systems, if we consider the various Hilbert space bases as generating the various elements of X, we can partially assert that this is the case. From these bases, each S(λ) is equipped with a σ-algebra, and outcomes are of the form S(λ1,π1)...(λn,πn) where the π's are elements of their respective σ-algebras. For any S, S' ∈ X and any λ, the σ-algebras on S(λ) and S'(λ) are isomorphic. This gets us part of the way there. One further piece is missing: while the outcomes are isomorphic, the dynamics may not be. In particular, it's possible to have a case where two points, (λ,p1) and (λ,p2), can transition to (λ',p'), but under isomorphisms from (S(λ), λ) to (S'(λ), λ) only one of them can. It would then be possible to have Γ ∈ GT_S, but its isomorphic image not be an element of GT_S'; TN may then be larger than ∪_S TN_S.[8]

However, in quantum mechanics we would not expect any such differing dynamics to affect the existence of a reduction between bases; for a given element of qS we would expect to be able to select sequences of projection operators s.t., under a given basis transformation, the transformed representation will also be an element of qS. More generally, the discretely determined probabilities ought to be sufficient to determine all inner-products in a given basis, which then entail the discretely determined probabilities in all other bases. Dpss in the various bases are therefore, in a very real sense, copies of each other.

Moreover, it's not clear that all bases are experimentally relevant; naturally, if a basis is not experimentally relevant, then it can't contribute any information to TN. Unsurprisingly, given the subject matter of physics, when a basis is given an experimental interpretation, it's generally in terms of paths through space. For example, when the conjugate-coordinate basis is interpreted as momentum, it is necessarily being given a spatial path interpretation; an interpretation predicated on the conjugate-coordinate's mathematical relation to aspects of spatial path probabilities such as the probability flux. This connection between the conjugate-coordinate basis and the particle's spatial path is put on display when the value of the conjugate-coordinate is determined by tracking particle paths in a detector.
However, because quantum systems satisfy all the dps axioms, even if TN is larger than ∪_S TN_S, this fact will have to be reflected in the quantum probabilities. We don't need results like Thm 161 to conclude that quantum systems are dpss; we know that without them. What such results do tell us is that, at least with regard to their measurement theory and a central core of their probability theory, quantum systems appear to be nothing more than fairly generic dpss.

Indeed, one may start to suspect that quantum mechanics, as it currently stands, is a phenomenological theory. It certainly is not holy writ, handed down from on high, which we can't possibly hope to truly understand, but which we must nonetheless accept in all detail. Rather, it evolved over time by trial and error as a means of calculating results to match newly discovered experimental phenomena. It has been quite successful in achieving that goal; however, it has also historically yielded little understanding of why its calculational methods work. This lack of understanding has often led to claims that we don't understand these matters because we can't understand them; they lie outside the sphere of human comprehension. While we can not state with certainty that such bold claims of necessary ignorance are false, we can say that they are scientifically unfounded, and so ought to be approached with skepticism. With all humility, it is hoped that this article has lent strength to such skepticism, and shed light on some matters that have heretofore been allowed to lie in darkness.

Appendix A: Δ-Additivity

Quantum probabilities possess an interesting algebraic property that is not a direct consequence of their t-algebra. For any t ∈ T s.t. {∪t} is also an element of T, define Δ(t) ≡ P(t) − P({∪t}). The Δ function measures the additivity of a gps; if a gps has additive probabilities, then Δ(t) = 0 for all t. Quantum probabilities are not additive, but their Δ function is: for any t ∈ T s.t. {∪t} ∈ T, and for all pairs τ, τ' ∈ t with {τ,τ'}, {τ∪τ'} ∈ T,

Δ(t) = Σ_{pairs of τ,τ' ∈ t} Δ({τ,τ'})

This is an immediate consequence of quantum probabilities being calculated from expressions of the form ⟨ψ,λ0| Θ†Θ |ψ,λ0⟩, where Θ is a product of projection operators. Any gps that obeys this rule is Δ-additive.

It follows from the definition of Δ that for disjoint t1 and t2, Δ(t1 ∪ t2) = Δ(t1) + Δ({∪t1} ∪ t2). Applying this to the above formula, we get Δ({τ1∪τ2, τ3}) = Δ({τ1,τ3}) + Δ({τ2,τ3}), which is directly analogous to the additivity seen in classical probabilities. If t has more than 3 elements then Δ({τ1∪τ2, τ3, ...}) = Δ({τ1, τ3, ...}) + Δ({τ2, τ3, ...}) − Δ({τ3, ...}).

In the orthodox interpretation, Δ-additivity is generally viewed as being due to path interference possessing wave-like properties. An interesting question is: under what conditions will a system obeying the intuitive interpretation display Δ-additivity?

Start by defining a moment partition to be any partition, Π, s.t. for some λ, for any π ∈ Π, either all σ ∈ π are bb λ, or all σ ∈ π are ba λ. (See Defn 147 for the definition of moment, and Thm 148 for proof that a partition composed of moments is an ip.) Moment partitions have some very useful properties. First, for any π, π' ∈ Π, (Π − {π, π'}) ∪ {π ∪ π'} is also a moment partition. Second, if Π is a moment partition, and π ∈ Π, then {π, ∪S − π} is also a moment partition. Indeed, if Π is an ip, and for all π ∈ Π, {π, ∪S − π} is also an ip, then Π is a moment partition.

Now take any t ∈ T and any moment partition, Π; with σ ≡ ∪t, define Π/σ ≡ {π ∩ σ : π ∈ Π and π ∩ σ ≠ ∅}. For π ∈ Π/σ, take Po(π) to be the probability that π occurs if all interactions required to measure Π are vanishingly small (while all the interactions for measuring σ are unchanged). Po(π) may be thought of as the omniscient probability: we simply know which element of Π/σ occurs without having to perform a measurement. (In the intuitive interpretation, some π occurs even if the measurement does not take place.) It follows that P(σ) = Σ_{π∈Π/σ} Po(π). Po is, of course, simply a conceptual construct; it can only be experimentally determined if Po = P. Now define δ(π) ≡ P(π) − Po(π); δ(π) represents the amount of deflection into/out of π due to the measurement of Π, and Σ_{π∈Π/σ} δ(π) is the effect the measurement has on the probability that σ occurs. Finally, for π ∈ Π/σ, define π̄ ≡ ∪(Π/σ) − π.

Theorem 167. 1) For any outcome, σ, and any countable moment partition, Π, Π/σ is Δ-additive iff

Σ_{π∈Π/σ} δ(π) = Σ_{π∈Π/σ} δ(π̄)

2) A discretely determined dps is Δ-additive iff for every outcome, σ, and every moment partition, Π, s.t. Π/σ ∈ T, Π/σ is Δ-additive.

Proof. 1) Assume Π/σ has N ≥ 3 elements (N = 2 is trivial). Note that since the probabilities Po are fully additive,

(N−1) Σ_{π∈Π/σ} Po(π) = Σ_{π∈Π/σ} Σ_{π'∈Π/σ−{π}} Po(π') = Σ_{π∈Π/σ} Po(π̄)

Adding (N−1) Σ_{π∈Π/σ} Po(π) to the left side of Σ_{π∈Π/σ} δ(π) = Σ_{π∈Π/σ} δ(π̄), and Σ_{π∈Π/σ} Po(π̄) to the right, yields

(N−2)P(σ) + Σ_{π∈Π/σ} P(π) = Σ_{π∈Π/σ} P(π̄)

Now note that Σ_{π∈Π/σ} P(Π/σ − {π}) = (N−1) Σ_{π∈Π/σ} P(π), and that P(π̄) = P(Π/σ − {π}) − Δ(Π/σ − {π}). Subtracting (N−1) Σ_{π∈Π/σ} P(π) from the left side, and Σ_{π∈Π/σ} P(Π/σ − {π}) from the right, yields

(N−2)Δ(Π/σ) = Σ_{π∈Π/σ} Δ(Π/σ − {π})

It remains to show that this is equivalent to Δ(Π/σ) = Σ_{pairs of π,π'∈Π/σ} Δ({π,π'}). They are clearly equivalent when N = 3. Assume they are equivalent for N = M; for N = M + 1,

Δ(Π/σ) = (1/(N−2)) Σ_{π∈Π/σ} Δ(Π/σ − {π})
        = (1/(N−2)) Σ_{π∈Π/σ} Σ_{pairs of π',π'' ∈ Π/σ−{π}} Δ({π',π''})
        = (1/(N−2)) (N−2) Σ_{pairs of π,π' ∈ Π/σ} Δ({π,π'})
        = Σ_{pairs of π,π' ∈ Π/σ} Δ({π,π'})

(each pair {π,π'} appears in exactly N−2 of the sets Π/σ − {π''}).

2) Let's say that T is discretely determined, and that for t ∈ T, all τ, τ' ∈ t, {τ ∪ τ'} ∈ T. This means that, for some L s.t. t is determined on L, at all elements of L except one, all τ ∈ t correspond to the same outcome. Take λ to be the element of L at which the various τ ∈ t correspond to different outcomes. Taking σ = ∪t, there exists a moment partition at λ, Π, s.t. t = Π/σ.

So the probabilities of an intuitive system will have the algebraic properties of quantum theory if, for all outcomes, σ, and all countable moment partitions, Π,

Σ_{π∈Π/σ} δ(π) = Σ_{π∈Π/σ} δ(π̄)

Let's now see how this requirement may hold.

First note that, since δ measures the effect of the environment on the system, experimental methods should be chosen so as to minimize δ. One way to do this is to perform passive measurements, a type of measurement that corresponds to how double-slit experiments are generally pictured. In a passive measurement, to determine the probabilities for elements of Π/σ, start with some Π0 ∈ GT such that σ ∈ Π0. Perform the measurement of Π0 as before, but for π ∈ Π/σ, block π̄ from occurring; if π̄ would have occurred, you get a null result. Now if σ occurs it means that π occurred. The proportion of π's to all trials run (including null results) is then P(π). Doing this in turn for each π yields the probabilities for Π/σ.

Imagine that position measurements on particles are performed in this manner. In the intuitive interpretation, if δ(π) ≠ 0 it's because the blocking off of π̄ creates some minimum but non-vanishing ambient field in π, which deflects the particles. The probabilities on Π0 will sum to 1 if these ambient fields cause the particles' paths to be deflected among the outcomes of Π0, but not among outcomes of Π. Because Π is a moment partition, this means that paths are not deflected prior to the measurement taking place; this is also a sufficient condition for additivity of final state to hold. For these passive measurements, δ(π̄) = Σ_{π'∈Π/σ−{π}} δ(π') if the deflections in π̄ due to blocking off π are equal to the sum of the deflections in π̄ caused by blocking off each element of Π/σ − {π} individually. So the dps describing particle position will be Δ-additive if the ambient field in π̄ caused by blocking π is equal to the superposition of the ambient fields in π̄ caused by blocking the individual elements of Π/σ − {π}. There are other ways for Δ-additivity to hold, but this one is conceptually simple, as well as plausible.


Appendix B: Conditional Probabilities & Probability Dynamics

Conditional probabilities are a central concept of probability theory. They are particularly interesting for dpss because probability dynamics are expressed in terms of conditional probabilities. There are two ways in which a dps's probabilities may be considered to be dynamic. First, the probabilities of what will happen change as our knowledge of what has happened continues to unfold. Second, given that we measured the system to be in state s at time λ0, we may be interested in the probability of measuring the system as being in state x at time λ, as λ varies. The Schrodinger equation deals with dynamics of this second sort. Conditional probabilities are required for exploring both kinds of dynamics.

The notion of conditional probability is the same in a gps as it is in a classic probability space; essentially, if P(B) ≠ 0, P(A|B) ≡ P(A ∩ B)/P(B). For a dps, the conditional probability of particular interest is the probability that t occurs given that, as of λ, the measurement is consistent with t. In order to delineate this, a few preliminary definitions will prove helpful.

Definition 168. If (X, T, P) is a dps, t ∈ T, and t ⊆ Γ ∈ GT:
|t[λ,λ']| ≡ {|σ[λ,λ']| : σ ∈ t}
(t)λ ≡ {τ ∈ Γ : |τ[λ̲,λ]| ∈ |t[λ̲,λ]|}
(X, T, P) is λ-complete if for all t ∈ T, Γ ∈ GT s.t. t ⊆ Γ, and all λ, (t)λ ∈ T.

Because (t)λ ⊆ Γ, λ-completeness places no restrictions on the make-up of GT; it only places a restriction on the σ-algebras of the constituent ip probability spaces. It is therefore a fairly weak condition. P(t|(t)λ) is the probability that t occurs given that, as of λ, the measurement is consistent with t. Because probabilities of this type are of interest, it's useful to extend the definition of consistent probabilities (Defn 84) to ensure that they are independent of Γ.

Definition 169. A dps, (X, T, P), is conditionally consistent if it is λ-complete and for all t, t' ∈ T, all λ, and all Γ, Γ' ∈ GT s.t. t ⊆ Γ and t' ⊆ Γ': if |t[λ̲,λ]| = |t'[λ̲,λ]| then P((t)λ) = P((t')λ).

Conceptually, this demand is met if probabilities are consistent at all times as the experiments unfold. Note that since |t[λ̲,λ]| = |t'[λ̲,λ]|, the all-reet nots (introduced in Sec. III E) of (t)λ and (t')λ are the same; as a result, if a t-algebra is sufficiently rich, it ought to be conditionally consistent.

Definition 170. If (X, T, P) is a conditionally consistent dps, t, t' ∈ T, and t' ⊆ Γ ∈ GT, then with y ≡ |t'[λ̲,λ]|, P(t||y) ≡ P(t|(t')λ).

This is somewhat more intuitive notation for conditional probabilities. The P(t||y) allow us to see how dps probabilities unfold with time. Note that y does not have to be an element of T in order for P(t||y) to be defined. A common assumption with regard to conditional probabilities on stochastic processes is that they satisfy the Markov property. The equivalent property for a dps is:

Definition 171. A dps, (X, T, P), is point-Markovian if for all t ∈ T and (λ,p) ∈ Uni(∪t): if t = ⁻t(λ,p) ∩ ⁺t(λ,p) then ⁻t(λ,p), ⁺t(λ,p) ∈ T and P(t) = P(⁻t(λ,p)) P(⁺t(λ,p)).

(In the above definition, notation that has been used on sets of dynamic paths has been applied to collections of sets. For example, ⁻t(λ,p) is understood to mean {⁻τ(λ,p) : τ ∈ t}, and ⁻t(λ,p) ∩ ⁺t(λ,p) is understood to mean {⁻τ(λ,p) ∩ ⁺τ'(λ,p) : τ, τ' ∈ t, ⁻τ(λ,p) ∩ ⁺τ'(λ,p) ≠ ∅}. It is hoped that this notation has not caused confusion.)

When the probability function is not additive, this property loses much of its power. Nonetheless, the property can generally be assumed to hold, and does have some interesting consequences. One interesting property of point-Markovian dpss with discrete parameters is that, if the t-algebra is sufficiently rich, then their probabilities tend to be additive. This can be seen for the case of t ∈ T s.t. for some λ0 < λ, t is sbb λ0, wba λ, and (∪t)[λ0,λ] is finite.[9] Start with any such t ∈ T and define t1 ≡ {τ(λ−1,q)(λ,p) : τ ∈ t, q ∈ τ(λ−1), and p ∈ τ(λ)}; since t is wba λ, for any Γ ∈ GT s.t. t ⊆ Γ, (Γ − t) ∪ t1 is an ip, so if it is an element of GT then P(t) = P(t1). If, further, the dps is point-Markovian, then P({τ(λ−1,q)(λ,p)}) = P({⁻τ(λ−1,q)}) P({⁺τ(λ−1,q)(λ,p)}). The same manipulations can now be performed on the ⁻τ(λ−1,q)'s. Defining X ≡ {⁺{s'[λ0,λ]} : s' ∈ ∪t}, these iterations eventually yield P(t) = P(X ∩ t). Therefore, for any t' s.t. ∪t' is bounded by λ0 & λ, and ∪t' = ∪t, P(t') = P(X ∩ t') = P(X ∩ t) = P(t).
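In the classical, additive case, the point-Markov factorization P(t) = P(⁻t) P(⁺t) is just the familiar splitting of path probabilities at an intermediate point. A toy sketch with an illustrative two-step chain (the transition matrix and states are invented for the example):

```python
# Toy 2-step Markov chain on states {0, 1}; M holds transition probabilities.
M = {(0, 0): 0.3, (0, 1): 0.7, (1, 0): 0.6, (1, 1): 0.4}
start = 0

def path_prob(path):
    """Probability of a path (s0, s1, ...) starting from 'start'."""
    p = 1.0 if path[0] == start else 0.0
    for a, b in zip(path, path[1:]):
        p *= M[(a, b)]
    return p

# The set t of two-step paths passing through state 1 at the midpoint:
t = [(start, 1, s2) for s2 in (0, 1)]
before = path_prob((start, 1))               # probability of reaching the point
after = sum(M[(1, s2)] for s2 in (0, 1))     # probability of the continuations
total = sum(path_prob(p) for p in t)
print(abs(total - before * after) < 1e-12)   # True: P(t) = P(-t) * P(+t)
```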


Appendix C: Invariance On Dynamic Sets and DPSs

1. Invariance On Dynamic Sets

Invariance is among the most fundamental concepts in the study of dynamic systems. We start by establishing the concept for dynamic sets. (In what follows, if f & g are functions, f∘g is the function s.t. f∘g(x) = f(g(x)).)

Definition 172. A global invariant, I, on a dynamic set, S, is a pair of functions, Iλ : Λ_S → Λ_S and IP : P_S → P_S, s.t. {IP ∘ p ∘ Iλ : p ∈ S} = S. For any p ∈ S, I(p) ≡ IP ∘ p ∘ Iλ; for A ⊆ S, I[A] ≡ {I(p) : p ∈ A} (so I is an invariant iff I[S] = S).

Global invariants are overly restrictive when Λ is bounded by 0. To handle that case equitably, the following definition of invariance will be employed.

Definition 173. If S is a dynamic set and Λ_S is unbounded from below, then I is an invariant on S if it is a global invariant on S. If Λ_S is bounded from below, then I is an invariant on S if there exists a dynamic set, S', s.t. Λ_S' is unbounded from below, S = S'[0, ∞), and I is an invariant on S'.

Unless stated otherwise, for the remainder of this section dynamic sets will be assumed to be unbounded from below.

Theorem 174. If I is an invariant on S:
1) IP is a surjection
2) Iλ and IP are invertible iff Iλ is an injection and IP is a bijection
3) Iλ and IP are invertible and I⁻¹ ≡ (Iλ⁻¹, IP⁻¹) is an invariant on S iff Iλ and IP are bijections

Proof. 1) If it is not, then P_{I[S]} ⊆ Ran(IP) ⊊ P_S, and so I[S] ≠ S.
2) Follows from (1) and the fact that functions are invertible iff they are injections.
3) Follows from (2) and the fact that, for I⁻¹ to be an invariant, the domain of Iλ⁻¹ needs to be Λ_S. Because Iλ and IP are bijections, for all p ∈ S, I⁻¹(I(p)) = IP⁻¹ ∘ IP ∘ p ∘ Iλ ∘ Iλ⁻¹ = p. Therefore I⁻¹[I[S]] = S. Since I[S] = S, I⁻¹[S] = S.

Definition 175. Dynamic set S is weakly Λ-invariant if for all Δλ ∈ Λ there's an invariant, L^Δλ, s.t. for all λ ∈ Λ_S, L^Δλ_λ(λ) = λ + Δλ. S is strongly Λ-invariant if it is weakly Λ-invariant and, for all Δλ, L^Δλ_P is the identity on P_S.

Theorem 176. A dynamic space D is strongly Λ-invariant iff it is homogeneous and all of P_S is homogeneously realized.

Proof. When D is unbounded from below, this is fairly clear. If D is bounded from below: one direction follows immediately from the unbounded case. For the other, it's necessary to construct a homogeneous, homogeneously realized, unbounded-from-below D' s.t. D'[0, ∞) = D. This is relatively easy. Note that if D is homogeneous & homogeneously realized, then for any Δλ > 0 and any λ, λ' ∈ Λ_D, D[λ, λ+Δλ] and D[λ', λ'+Δλ] are copies of each other, in that if you move D[λ', λ'+Δλ] to λ, the two are equal. So to construct D', take any D[0, Δλ], append it to the beginning of D, then append it to the beginning of the resulting set, and so forth. Only one set, D', will equal this construction for all intervals [−nΔλ, ∞); D' is a homogeneous, homogeneously realized dynamic space, and Λ_D' is unbounded from below, so D is strongly Λ-invariant.

Definition 177. S is reversible if for all λ there's an invariant, R^λ, s.t. for all λ' ∈ Λ_S, R^λ_λ(λ') = λ − λ', and R^λ_P ∘ R^λ_P is the identity on P_S.

Theorem 178.
1) If S is reversible then it is weakly Λ-invariant.
2) If S is reversible and for all λ1, λ2, R^λ1_P = R^λ2_P, then it is strongly Λ-invariant.

Proof. 1) For every Δλ ∈ Λ, L^Δλ ≡ R^Δλ ∘ R^0 is an invariant (because the composite of any two invariants is an invariant; R^0 being R^λ with λ = 0), and for all λ', L^Δλ_λ(λ') = R^Δλ_λ(R^0_λ(λ')) = λ' + Δλ.
2) Again with L^Δλ ≡ R^Δλ ∘ R^0: if for all λ1, λ2, R^λ1_P = R^λ2_P, then L^Δλ_P = R^Δλ_P ∘ R^0_P = R^0_P ∘ R^0_P, which is the identity on P_S.
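The proof of Thm 178.1 composes two reversals into a translation: with R^λ acting on parameters as λ' ↦ λ − λ', the composite R^Δλ ∘ R^0 is translation by Δλ. A quick numerical sketch, with reals standing in for Λ:

```python
# R(lam) reverses the parameter about lam: lam' -> lam - lam'.
def R(lam):
    return lambda x: lam - x

# L(dlam) = R(dlam) ∘ R(0) should be translation by dlam (proof of Thm 178.1).
def L(dlam):
    r_d, r_0 = R(dlam), R(0)
    return lambda x: r_d(r_0(x))

print(L(2.5)(1.0))           # 3.5 = 1.0 + 2.5
print(R(4.0)(R(4.0)(7.0)))   # 7.0: a reversal composed with itself is the identity
```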

2. Invariance on DPSs

To apply the notion of invariance to dpss, we'll start by expanding the definition to cover invariance on a collection of dynamic sets. (In the interest of concision, previous considerations for the case where Λ is bounded by 0 will not be explicitly mentioned, but they should be assumed to continue to apply.)

Definition 179. If X is a collection of dynamic sets, I is an invariant on X if it is a pair of functions Iλ : ∪_{S∈X} Λ_S → ∪_{S∈X} Λ_S and IP : ∪_{S∈X} P_S → ∪_{S∈X} P_S s.t., for some bijection, B_I : X → X, for all S ∈ X, I[S] ≡ {p' : for some p ∈ S, p' = IP ∘ p ∘ Iλ} = B_I(S).

In the case where X = {S}, this reduces to the prior definition of invariance. When B_I is simply the identity on X, I represents a collection of dynamic set invariants, one for each element of X. In the more general case, where the dynamic sets in the collection are allowed to map onto each other under the transformation, the collection can be invariant under the transformation even when none of the individual dynamic sets are. Generalizing types of invariance is straightforward. For example:

Definition 180. X is reversible if for all S, S' ∈ X, Λ_S = Λ_S' = Λ, and for all λ there's an invariant on X, R^λ, s.t. for each S ∈ X, R^λ_P ∘ R^λ_P is the identity on P_S, and for all λ' ∈ Λ, R^λ_λ(λ') = λ − λ'.

Invariance on a dps is now:

Definition 181. If (X, T, P) is a dps, and I is an invariant on X, I is an invariant on (X, T, P) if:
1) If Γ ∈ GT and I[Γ] is an ip on some S ∈ X, then I[Γ] ∈ GT
2) If t ∈ T and, for some Γ ∈ GT, I[t] ⊆ Γ, then I[t] ∈ T
3) If t ∈ T and I[t] ∈ T, then P(t) = P(I[t])

The dps versions of invariants such as reversibility follow immediately.

Appendix D: Parameter Theory

Definition 182. A set, Λ, together with a binary relation on Λ, <, a binary function on Λ, +, and a constant, 0, is an open parameter if the structure (Λ, <, +, 0) satisfies the following:

Total Ordering:
1) For all λ, ¬(λ < λ)
2) For all λ1, λ2, λ3, if λ1 < λ2 and λ2 < λ3 then λ1 < λ3
3) For all λ1, λ2, either λ1 = λ2 or λ1 < λ2 or λ2 < λ1

Addition:
4) For all λ, λ + 0 = λ
5) For all λ1, λ2, λ1 + λ2 = λ2 + λ1
6) For all λ1, λ2, λ3, (λ1 + λ2) + λ3 = λ1 + (λ2 + λ3)

The standard interrelationship between ordering and addition:
7) For all λ1, λ2, λ3, λ1 < λ2 iff λ1 + λ3 < λ2 + λ3
8) For all λ1, λ2, if λ1 < λ2 then there's a λ3 s.t. λ1 + λ3 = λ2

Possesses a positive element:
9) There exists a λ s.t. λ > 0

The enumerated statements in this definition will be referred to as the open parameter axioms, and the individual statements will be referred to by number: opa 1 referring to "For all λ, ¬(λ < λ)", etc. Because in most cases <, +, and 0 will be immediately apparent given the set Λ, parameters will often be referred to by simply referring to Λ.

Definition 183. For λ, λ' ∈ Λ, λ' is an immediate successor to λ if λ' > λ and there does not exist a λ'' s.t. λ' > λ'' > λ. A parameter is discrete if every λ has an immediate successor. A parameter is dense if no λ has an immediate successor.

Theorem 184. An open parameter is either discrete or dense.

Proof. Follows from opa 8 and opa 7.

Definition 185. For a discrete open parameter, the immediate successor to 0 is 1.

Theorem 186. If Λ is discrete, then for every λ, the immediate successor to λ is λ + 1.

Proof. This too follows from opa 8 and opa 7.

Definition 187. If Ω ⊆ Λ, λ is an upper-bound of Ω if for every λ' ∈ Ω, λ ≥ λ'; λ is the least upper-bound if it is an upper-bound and, given any other upper-bound, λ⁺, λ⁺ ≥ λ. If Ω has no upper-bound, then Ω is unbounded from above.


Similarly, λ is a lower-bound of Ω if for every λ' ∈ Ω, λ ≤ λ'; λ is the greatest lower-bound if it is a lower-bound and, given any other lower-bound, λ⁻, λ⁻ ≤ λ. If Ω has no lower-bound, then Ω is unbounded from below. −λ is the additive inverse of λ if λ + (−λ) = 0.

Theorem 188. If Λ is an open parameter:
1) Λ is unbounded from above.
2) If Λ is bounded from below, its greatest lower bound is 0.
3) If Λ does not have a least element, every λ has an additive inverse; if it does have a least element, only 0 has an additive inverse.

Proof. 1) Follows from opa 9 and opa 7 (with help from opa 4).
2) Also follows from opa 7, with help from opa 4.
3) First take the case where Λ does not have a least element. If λ < 0 then by opa 8 there's a λ' s.t. λ + λ' = 0. If λ > 0, take any λ'' s.t. λ + λ'' < 0 (such a λ'' must exist because Λ is unbounded from below). As just established, there must be a λ''' s.t. λ + λ'' + λ''' = 0, so λ'' + λ''' is the additive inverse of λ. Now assume Λ does have a least element. By opa 7 and pt. 2 of this theorem, for any λ, λ' s.t. λ' ≠ 0, λ + λ' > 0.

Open parameters admit models which, under most interpretations of "parameter", would not be considered admissible. For example, infinite ordinals (with the expected interpretations of <, +, and 0) are open parameters, as are the extended reals. Rational numbers are also open parameters, as are numbers which, in decimal notation, have only a finite number of non-zero digits. To eliminate these less-than-standard models, parameters will be defined as open parameters that are finite (sometimes called Archimedean), and either discrete or continuous (that is, parameters have the crucial property that all limits which tend toward fixed, finite values exist). This will be accomplished through the well known method of adding a completeness axiom.

Definition 189. An open parameter, Λ, is a parameter if every non-empty subset of Λ that is bounded from above has a least upper bound.

From here on out, it will be assumed that Λ refers to a parameter.
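The axioms of Definitions 182 and 189 can be collected into a single structure; here is a formalization sketch in Lean-style syntax (the field names are ours, and opa 1 is read as irreflexivity, which the extraction truncated):

```lean
-- Sketch of Defn 182 (open parameter) with Defn 189's completeness axiom.
structure Parameter (Λ : Type) where
  lt   : Λ → Λ → Prop
  add  : Λ → Λ → Λ
  zero : Λ
  irrefl    : ∀ a, ¬ lt a a                                   -- opa 1
  trans     : ∀ a b c, lt a b → lt b c → lt a c               -- opa 2
  total     : ∀ a b, a = b ∨ lt a b ∨ lt b a                  -- opa 3
  add_zero  : ∀ a, add a zero = a                             -- opa 4
  add_comm  : ∀ a b, add a b = add b a                        -- opa 5
  add_assoc : ∀ a b c, add (add a b) c = add a (add b c)      -- opa 6
  lt_add    : ∀ a b c, lt a b ↔ lt (add a c) (add b c)        -- opa 7
  diff      : ∀ a b, lt a b → ∃ c, add a c = b                -- opa 8
  pos       : ∃ a, lt zero a                                  -- opa 9
  -- Defn 189: every non-empty subset bounded from above has a least upper bound.
  complete  : ∀ s : Λ → Prop, (∃ a, s a) →
              (∃ ub, ∀ a, s a → lt a ub ∨ a = ub) →
              ∃ lub, (∀ a, s a → lt a lub ∨ a = lub) ∧
                     (∀ ub, (∀ a, s a → lt a ub ∨ a = ub) →
                            lt lub ub ∨ lub = ub)
```

Theorem 184's dichotomy and Theorem 191's Archimedean property would then be provable consequences of these fields.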

Definition 190. λ added to itself n times will be denoted nλ (for example, 3λ ≡ λ + λ + λ). 0λ ≡ 0. (−n)λ ≡ λ' where λ' + nλ = 0; otherwise, (−n)λ ≡ 0 if Λ is bounded by 0 and nλ has no additive inverse.

Theorem 191. If Λ is a parameter then for every λ1, λ2, 0 < λ1 < λ2, there exists an n ∈ N s.t. nλ1 > λ2.

Proof. Assume that for all n ∈ N, nλ1 < λ2. Then the set Ω ≡ {x : x = nλ1} is bounded from above. Therefore it has a least upper bound, M. Since λ1 > 0, M − λ1 < M, and so M − λ1 can't be an upper bound of Ω. Therefore for some i ∈ N, iλ1 > M − λ1. But then (i+2)λ1 > M, so M can not be an upper bound. Therefore Ω is unbounded, and so for some n ∈ N, nλ1 > λ2.

Thm 191 is equivalent to saying that, for all λ, λ is finite.

Theorem 192. For any n ∈ N+:
1) λ1 > λ2 iff nλ1 > nλ2
2) λ1 = λ2 iff nλ1 = nλ2

Proof. 1) Follows from opa 7, opa 2.
2) Follows from (1) and opa 3.

Theorem 193. If Λ is a discrete parameter, then given any λ > 0 there's an n ∈ N s.t. λ = n1.

Proof. Follows from Thms 186 and 191.

Theorem 194. For any n, m ∈ N:
1) n1 + m1 = (n + m)1
2) If n > m then n1 > m1

Proof. 1) Follows from opa 6.
2) Take k = n − m; by (1) and opa 7, (2) holds iff k1 > 0, which follows from Thm 192.1.

Thms 193 and 194, together with Thm 188.3, create a complete characterization of discrete parameters. A similar characterization for dense parameters will now be sketched.

Definition 195. For λ, λ' ∈ Λ, m ∈ N, n ∈ N+: λ' = (m/n)λ if nλ' = mλ.

Theorem 196. If Λ is dense then for all λ, m ∈ N, n ∈ N+:
1) (m/n)λ exists
2) If λ' = (m/n)λ and λ'' = (m/n)λ then λ' = λ''

Proof. 1) Take Ω ≡ {λ' : nλ' ≤ mλ}. Ω is bounded from above, so take λ̄ to be the least upper bound. If nλ̄ is either greater or less than mλ, then density provides an example that contradicts λ̄ being the least upper bound, so by opa 3, nλ̄ = mλ.
2) nλ' = mλ and nλ'' = mλ, so λ' = λ'' by Thm 192.2.

Theorem 197. If Λ is dense then for all λ:
1) If m1, m2 ∈ N, n1, n2 ∈ N+ and m1/n1 = m2/n2, then (m1/n1)λ = (m2/n2)λ
2) For q1, q2 ∈ Q, q1λ + q2λ = (q1 + q2)λ
3) For q1, q2 ∈ Q, λ > 0, if q1 > q2 then q1λ > q2λ

Proof. 1) If m1/n1 = m2/n2, then there exist m, n, k1, k2 s.t. m1 = k1m, n1 = k1n, m2 = k2m, and n2 = k2n. It is sufficient to show that (k1m/k1n)λ = (m/n)λ. Taking λ' ≡ (m1/n1)λ: (k1n)λ' = (k1m)λ. By Thm 194.1, k1(nλ') = k1(mλ). By Thm 192.2, nλ' = mλ.
2) For some m1, m2 ∈ N, n ∈ N+, q1 = m1/n and q2 = m2/n. With λ1 ≡ (m1/n)λ and λ2 ≡ (m2/n)λ: nλ1 + nλ2 = m1λ + m2λ, and so by opa 6, n(λ1 + λ2) = (m1 + m2)λ, which means λ1 + λ2 = ((m1 + m2)/n)λ.
3) Follows from (2) and opa 7, and the fact that q3 ≡ q1 − q2 is a positive rational number (note that if λ > 0 and q > 0 then qλ > 0).

Definition 198. A parameter sequence, (λn)n∈N, is convergent if there exists a λ s.t. for any ε > 0 there's an n ∈ N s.t. for all i > n, λi ∈ (λ − ε, λ + ε). In this case we say λ = lim λn. (λn)n∈N is Cauchy-convergent if for any ε > 0 there's an n ∈ N s.t. for all i, j > n, λj ∈ (λi − ε, λi + ε). (λn)n∈N is monotonic if either for all i ∈ N, λi+1 ≥ λi, or for all i ∈ N, λi+1 ≤ λi.

Theorem 199. If Λ is a dense parameter and (qn)n∈N is a monotonic, Cauchy-convergent sequence of rational numbers, then (qnλ)n∈N is a convergent parameter sequence (with the understanding that if Λ is bounded from below then all qi are non-negative).

Proof. A: If (qn )nN is a Cauchy-convergent sequence of rational numbers, then (qn )nN is a Cauchy-convergent. - Follows from Thm 197 and the fact that the set of rational numbers is dense. B: If is a parameter, and A is bounded from below, then A must have a greatest lower bond - Take B to be the set of lower bounds of A. B is bounded from above by every element of A, so take b to be the least upper bound of B . b must be a lower bound of A because if for any a A, a < b then b can not be the least upper bound of B . It also must be the greatest lower bound, because if any x > b is a lower bound of A, then x B , in which case b would not be an upper bound of B . Assume (qn )nN is monotonically increasing. Take to be the least upper bound of Ran((qn )nN ). Take any > 0. By (A) there exists an n N s.t. for all i, j > n, qj (qi , qi + ). If follows that for all i > n, qi ( , + ). By (B), the proof for monotonically decreasing sequences is similar. It is a foundational result of real analysis that all Cauchy-convergent sequences of rational numbers converge to a real number, and for all real numbers there exist monotonic, Cauchyconvergent sequences of rational numbers that converge to it.
Theorem 200. If (qn)n∈N and (q′n)n∈N are two monotonic sequences of rational numbers that converge to the same real number, then lim(qn·1) = lim(q′n·1).

Proof. Assume (qn)n∈N and (q′n)n∈N are monotonically increasing. Because they converge to the same real number, they have the same least upper bound. (qn·1)n∈N and (q′n·1)n∈N must then also have the same least upper bound. By the proof of Thm 199, they have the same limit. All other cases are similar.

Definition 201. If (qn)n∈N is a monotonic, Cauchy-convergent sequence of rational numbers and lim qn = r, then for any λ ∈ Λ, r·λ ≡ lim(qn·λ).

By Thm 200 the above definition uniquely defines multiplication by a real number.

Theorem 202. If Λ is a dense parameter and 1 is any element of Λ that is greater than 0:
1) For any real number r, r·1 ∈ Λ
2) For any λ ∈ Λ, there exists a real number, r, s.t. r·1 = λ
Proof. 1) Follows immediately from Thm 199.
2) We'll take the case of λ > 0; λ < 0 is similar and λ = 0 is trivial.
A: If λ > 0 then there exists a rational number q s.t. 0 < q·1 < λ - The greatest lower bound of the set of (1/2^n)·1, n ∈ N+, is 0; since λ > 0 there must be an n s.t. (1/2^n)·1 < λ.
Take q to be any rational number s.t. 0 < q·1 < λ. By Thm 191 there's an n ∈ N+ s.t. nq·1 > λ; take m0 to be the smallest such element of N+. Take λ0 = (m0 − 1)q·1; note that λ0 ≤ λ and λ − λ0 ≤ q·1. Similarly, for each i ∈ N take mi to be the smallest element of N+ s.t. mi(q/2^i)·1 > λ. With λi ≡ (mi − 1)(q/2^i)·1, it follows that λi ≤ λi+1 and λ − λi ≤ (q/2^i)·1. Consider the sequence ((mi − 1)(q/2^i)·1)i∈N; for any ε > 0 there's an n s.t. (q/2^n)·1 < ε. For all j > n, λ − (mj − 1)(q/2^j)·1 ≤ (q/2^j)·1 < (q/2^n)·1 < ε, so ((mi − 1)(q/2^i)·1)i∈N converges to λ. With r = lim(mi − 1)(q/2^i), λ = r·1.

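The construction in part (2) of the proof is effectively an algorithm: dyadically refine multiples of q until they converge to λ from below. The following sketch models the dense parameter by the real line, so that q·1 is simply the number q; the target λ = π and the choice q = 1 are assumptions made for illustration.

```python
import math

def dyadic_approximants(lam, q, steps=40):
    """Follow the proof of Thm 202(2): at stage i, take m_i to be the
    smallest positive integer with m_i * (q / 2**i) > lam, and record
    lam_i = (m_i - 1) * (q / 2**i).  The lam_i increase toward lam."""
    out = []
    for i in range(steps):
        step = q / 2**i
        m_i = math.floor(lam / step) + 1  # smallest m with m * step > lam
        out.append((m_i - 1) * step)
    return out

approx = dyadic_approximants(math.pi, q=1.0)
```

Because each step size is an exact power-of-two fraction of q, the approximants are non-decreasing and the final error is below q/2^(steps−1).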
Theorem 203. If Λ is a dense parameter and 1 is any element of Λ that is greater than 0:
1) For r1, r2 ∈ R, r1·1 + r2·1 = (r1 + r2)·1
2) For r1, r2 ∈ R, if r1 > r2 then r1·1 > r2·1

Proof. 1) If (qn)n∈N and (q′n)n∈N are monotonically increasing sequences of rational numbers and lim qn = r1 and lim q′n = r2, then lim(qn + q′n) = r1 + r2; applying Def 201 to (qn + q′n)n∈N then gives r1·1 + r2·1 = lim((qn + q′n)·1) = (r1 + r2)·1.
2) Follows from (1), opa 7, and the fact that r3 ≡ r1 − r2 is a positive real number.

Thms 202 & 203 create a complete characterization of dense parameters.

[1] It should be pointed out that, while the analogy between experiments and automata is useful for creating a quick sketch of the theory, it is no more than an analogy. In the automata studied by computer scientists, time is assumed to be discrete and the number of automata states is assumed to be finite. These assumptions must be made in order to assert that the automata are performing calculations, and they have significant impact in deriving classes of calculable functions, but such assumptions would be out of place when discussing experiments. [2] The mathematical notion of a model is a basic concept from the field of mathematical logic. Since mathematical logic is not generally a part of the scientific curriculum, here's a brief description of model theory. A mathematical theory is a set of formal statements, generally


taken to be closed under logical implication. A model for the theory is a world in which all the statements are true. For example, group theory starts with three statements involving a binary function, ∘, and a constant, I. They are: For all x, y, z, (x ∘ y) ∘ z = x ∘ (y ∘ z); For all x, x ∘ I = x; For all x there exists a y s.t. x ∘ y = I. One model for this theory is the set of integers, with ∘ meaning + and I meaning 0. Another is the set of non-zero real numbers, with ∘ meaning × and I meaning 1. It is generally the case that a theory will have more than one model. A model may be thought of as a reality that underlies the theory (in which case the fact that a theory has many different models simply means that it holds under many different circumstances). It is this sense of models referring to a theory's underlying reality that leads to the correspondence between scientific interpretations and mathematical models. [3] Had any part of PS needed to be homogeneously realized, it could have presented a problem with viewing S as playing out on the stage of space-time, because every point in space-time can only be realized at a single λ. [4] In the Introduction, two models for non-determinism were given. One of them, type-m non-determinism, encounters a well known difficulty at this point: in order for the statistical view of probabilities to be applicable, every individual run of an experiment must result in an individual outcome being obtained. However, if an e-automata displays pure type-m non-determinism, it will simultaneously take all paths for all possible outcomes, not just paths which cross some particular [e], and this will result in multiple outcomes. In this controversy, experimental results have decided in favor of individual outcomes; experimental apparatus always end up in a single state, and as outcomes have been defined, each individual final environmental state corresponds to an individual outcome.
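The two models of the group axioms described in note [2] can be spot-checked mechanically. A minimal sketch, with the caveat that the sampling sets and helper names are invented for illustration, and that non-zero rationals stand in for the non-zero reals:

```python
from fractions import Fraction

def satisfies_group_axioms(op, identity, elements, inverse_of):
    """Check the three axioms from note [2] on a finite sample of elements."""
    for x in elements:
        if op(x, identity) != x:                   # x o I = x
            return False
        if op(x, inverse_of(x)) != identity:       # x o y = I for some y
            return False
        for y in elements:
            for z in elements:
                if op(op(x, y), z) != op(x, op(y, z)):  # associativity
                    return False
    return True

# Model 1: the integers, with o meaning + and I meaning 0.
ints = list(range(-3, 4))
model1_ok = satisfies_group_axioms(lambda a, b: a + b, 0, ints, lambda a: -a)

# Model 2: non-zero rationals (sampling the non-zero reals),
# with o meaning multiplication and I meaning 1.
rats = [Fraction(n, d) for n in (-2, -1, 1, 2) for d in (1, 3)]
model2_ok = satisfies_group_axioms(lambda a, b: a * b, Fraction(1), rats,
                                   lambda a: 1 / a)
```

A finite sample cannot prove the axioms hold universally, but it illustrates how one theory is satisfied by two quite different models.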
[5] (X, Tc, Pc) does, however, meet all the other requirements for being a gps; that is, if for all pairs of c-sets, A and B, (∪A) ∩ (∪B) ∈ Tc, then (X, Tc, Pc) is a gps.

[6] If ⟨λ1, s1|λ2, s2⟩ ≠ 0, then it's clear that the transition (λ1, s1) → (λ2, s2) can take place. However, there is some ambiguity as to whether ⟨λ1, s1|λ2, s2⟩ = 0 necessarily implies that the transition can not take place, or if it simply demands that the transition occurs with probability 0. This question grows acute if there's a (λ, s) s.t. λ1 < λ < λ2, ⟨λ1, s1|λ, s⟩ ≠ 0, and ⟨λ, s|λ2, s2⟩ ≠ 0. In that case, in order for (5) to hold, we would have to allow the transition (λ1, s1) → (λ2, s2), but say that it occurs with probability 0. There's no necessity to define the relation in this manner, but doing so is in keeping both with the path integral formalism and the orthodox interpretations


of quantum mechanics. There one would say that there exist possible paths from (λ1, s1) to (λ2, s2), but they interfere with each other in such a way as to keep the total amplitude of the transition 0; however, if a further measurement caused only a subset of these paths to be taken, then the probability could become non-zero. [7] The ||+ encountered in Section III E are examples of measurements of rate of change; the ||+ contain paths that are in some compatible set, t, as of λ, but whose velocities ensure that they will exit t immediately after λ. [8] Then again, it may not. For example, if the transitions are different because the states are deterministic, or conserved, in S1 but not in S2, this will not affect the existence of a reduction. That is because, when states are deterministic, the order in which measurements occur is irrelevant, so many different measurement sequences will result in the same flip. [9] Finite spin systems are point-Markovian, and all their t ∈ T satisfy these restrictions; however, their parameters are generally assumed to be non-discrete, which does allow them to have non-additive probabilities.

