
Interpretation of Quantum Theory
U. Waterloo PHY773, Winter 2005
J. Emerson and R. Laflamme (Instructors)

Lecture 5: Problems for the Orthodox Interpretation


Lecturer: Joseph Emerson

What does it mean to interpret a theory? Operational vs. Ontological Bridge-Principles

Operational Bridge-Principles

Operational bridge-principles are operational rules that relate elements of the formalism to measurements that may be performed. These rules provide adequate information to use quantum theory in the lab, i.e., to explain experimental outcomes (observations). Operational rules do not give insight into the nature of the underlying physical reality of the systems described by quantum theory. An important example of an operational bridge-principle in quantum theory is the Born rule, which tells us the relative frequency (probability) with which outcome $k$ is observed when the same measurement is repeated on an ensemble of identically prepared systems:

$\mathrm{Prob}(k|\rho) = \mathrm{tr}(\rho\,|k\rangle\langle k|)$
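As a minimal numerical illustration of the Born rule (my own sketch; the qubit state and outcome label below are illustrative choices, not from the lecture):

```python
import numpy as np

# Illustrative preparation: |psi> = (|0> + |1>)/sqrt(2), written as a
# density matrix rho = |psi><psi|.
psi = np.array([1.0, 1.0]) / np.sqrt(2)
rho = np.outer(psi, psi.conj())

# Projector |k><k| for the outcome k = 0.
k = np.array([1.0, 0.0])
proj = np.outer(k, k.conj())

# Born rule: Prob(k|rho) = tr(rho |k><k|).
prob = np.trace(rho @ proj).real
print(prob)  # 0.5: half of an identically prepared ensemble yields k = 0
```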

Ontological Bridge-Principles

Ontological bridge-principles are a set of correspondence rules that relate elements of the mathematical formalism to elements of physical reality. Bohr's Copenhagen Interpretation can be considered ontological, in spite of his denial of the meaningfulness of making statements about an independent reality ("An independent reality in the ordinary physical sense can neither be ascribed to the phenomena nor to the agencies of observation."), because he also insists that deducing additional information about what properties a system may have is impossible in principle: the more complete analysis Einstein seeks is in principle excluded.

The Orthodox (Dirac-von Neumann) Interpretation

Postulate 1. Eigenvalue-eigenstate link. An observable has a determinate value if and only if the state is an eigenstate of that observable. This is an ontological bridge-principle: it tells us what properties a system possesses independently of observation.

The eigenvalue-eigenstate link implies that the quantum state provides a complete description of a system's objective physical properties, or, put more boldly, of the objective elements of physical reality.

The completeness assumption implies that the unavoidable non-vanishing dispersion of outcomes for some observables (as demanded by Robertson's uncertainty principle) is due to a fundamental randomness (or stochasticity) in nature.
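For reference (the notes do not spell it out here), Robertson's relation states that for any two observables $A$ and $B$ and any state,

$\Delta A\,\Delta B \;\ge\; \tfrac{1}{2}\,\left|\langle [A,B] \rangle\right|,$

so, for example, $[x,p] = i\hbar$ gives the familiar $\Delta x\,\Delta p \ge \hbar/2$: no quantum state can make both dispersions vanish when the commutator has a non-zero expectation value.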

Note that von Neumann was well aware that the existence of additional hidden coordinates provided another possible explanation for the non-vanishing dispersion associated with quantum states. However, he rejected this possibility because of an impossibility proof that he devised against the existence of dispersion-free assignments to all observables (via hidden variables). Von Neumann's proof was discredited much later by Bohm (1952), who constructed an explicit hidden variable model for quantum theory. As we will see in the next lecture, the problem with von Neumann's proof was explicitly exposed by Bell (1966), who showed that one of von Neumann's assumptions on hidden variables was unreasonably strong, in the sense that it constituted a no-go theorem against only a trivial class of hidden-variable extensions of quantum theory.

Postulate 2. The projection postulate. After an ideal measurement of an observable, the system state is transformed into [i.e., must be updated to] the eigenstate associated with the eigenvalue observed.

This postulate, also known as the collapse of the wavefunction, is operationally demanded for consistency with experiments involving sequential (ideal) measurements.

The eigenvalue-eigenstate link implies that the projection is a physical process since it involves a transformation of the system's physical properties.

In contrast, if we reject the eigenvalue-eigenstate link, and if we reject that the quantum state is a complete description of a system's physical properties, then the projection after measurement does not correspond to a physical process.

The projection is then just an update rule involving a change to an abstract theoretical construct, such as a (subjective) probability assignment, which must be updated when new information is obtained.

Projection Postulate with and without Post-Selection

Consider the ideal measurement of a non-degenerate observable $R = \sum_k r_k\,|k\rangle\langle k|$.

If the measurement outcome is ignored, then the following transformation is required to describe the state after measurement:

$\rho(t) \;\to\; \rho'(t) = \sum_k \langle k|\rho(t)|k\rangle\,|k\rangle\langle k|. \qquad (1)$

If, on the other hand, the outcome $k$ is recorded, then consistency with subsequent measurements demands the following transformation:

$\rho(t) \;\to\; \rho'(t) = |k\rangle\langle k|. \qquad (2)$
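As a minimal sketch of these two update rules (my own illustration; the eigenbasis of $R$ is taken to be the computational basis, and the input state is an arbitrary choice):

```python
import numpy as np

def measure_ignored(rho):
    """Transformation (1): dephase rho in the eigenbasis of R
    (here the computational basis), discarding the outcome."""
    dim = rho.shape[0]
    out = np.zeros_like(rho)
    for k in range(dim):
        ket = np.zeros(dim); ket[k] = 1.0
        proj = np.outer(ket, ket)          # |k><k|
        out += (ket @ rho @ ket) * proj    # <k|rho|k> |k><k|
    return out

def measure_recorded(rho, k):
    """Transformation (2): post-select on outcome k; the updated state
    is the corresponding eigenstate (non-degenerate R assumed)."""
    dim = rho.shape[0]
    ket = np.zeros(dim); ket[k] = 1.0
    return np.outer(ket, ket)

rho = 0.5 * np.array([[1.0, 1.0], [1.0, 1.0]])  # |+><+|
print(measure_ignored(rho))     # diag(0.5, 0.5): coherences removed
print(measure_recorded(rho, 0)) # |0><0|
```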

Can we model the projection postulate describing an ideal measurement using a unitary transformation?

While the transformation (1) may be modeled by a unitary acting on the system combined with an additional system, when the additional system is ignored, the transformation (2) cannot be modeled in this way and thus must be an independent process.

Proof that projection (under post-selection) is not a unitary process

Conceptually it is clear that unitary evolution evolves any given state to a fixed final state. This is deterministic (in the sense of reproducible). In contrast, collapse is fundamentally stochastic: applying the same measurement to the same preparation produces different (apparently random) final states, depending on the outcome.

Is it possible that the final outcome is not random but depends on the quantum state of some additional degrees of freedom, so that the whole process may be described by a unitary transformation?

Consider an atom described by a pure state corresponding to a coherent superposition of moving along two distinct trajectories. We arrange things so that both trajectories pass through a detector such that a macroscopic pointer is moved to the left if the atom is on the up trajectory and to the right if the atom is on the down trajectory. We want to model the measurement process with a unitary transformation, and for complete generality we extend the quantum system to include additional degrees of freedom denoted by a state $|\chi\rangle$. If we demand faithful measurements, this means that we must have, for any $|\chi\rangle$,

$U\,|{\rm up}\rangle|{\rm ready}\rangle|\chi\rangle = |{\rm up}\rangle|{\rm left}\rangle|\chi'\rangle$
$U\,|{\rm down}\rangle|{\rm ready}\rangle|\chi\rangle = |{\rm down}\rangle|{\rm right}\rangle|\chi''\rangle,$

where $|\chi'\rangle$ and $|\chi''\rangle$ are allowed to be independent of $|\chi\rangle$.

Now if we prepare a coherent superposition over atomic trajectories, and allow for both possible outcomes, then by linearity it follows that, for any $\alpha$ and $\beta$,

$U\,(\alpha|{\rm up}\rangle + \beta|{\rm down}\rangle)\,|{\rm ready}\rangle|\chi\rangle = \alpha\,|{\rm up}\rangle|{\rm left}\rangle|\chi'\rangle + \beta\,|{\rm down}\rangle|{\rm right}\rangle|\chi''\rangle,$

so it is impossible that after the interaction the state is driven to one or the other outcome. Hence the transformation (2) cannot be modeled by a unitary transformation.
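A toy numerical check of this linearity argument (my own sketch: the measurement unitary is modeled as a CNOT gate that copies the trajectory onto the pointer, and the ancilla $|\chi\rangle$ is omitted since it plays no role in the argument):

```python
import numpy as np

# Atom qubit (up=0, down=1) and pointer qubit (left=0, right=1).
# A CNOT with the atom as control is a faithful measurement:
# U|up>|ready> = |up>|left>,  U|down>|ready> = |down>|right>.
U = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)

up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])
ready = up  # pointer starts in |ready> = |left>

# Superposed input: (|up> + |down>)/sqrt(2), pointer ready.
atom = (up + down) / np.sqrt(2)
out = U @ np.kron(atom, ready)
print(out)  # [0.707, 0, 0, 0.707]: the entangled state
            # (|up,left> + |down,right>)/sqrt(2) -- a superposition of
            # both outcomes, not a collapse onto one of them.
```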

Proof that projection (without post-selection) can be represented by a unitary process

If we describe only the state of the system and pointer, then we must take a partial trace over the ancillary degrees of freedom represented by $|\chi\rangle$. This partial trace produces the following state (after measurement):

$\rho = |\alpha|^2\,|{\rm up}\rangle\langle{\rm up}| + |\beta|^2\,|{\rm down}\rangle\langle{\rm down}| + \alpha\beta^*\,\langle\chi''|\chi'\rangle\,|{\rm up}\rangle\langle{\rm down}| + \alpha^*\beta\,\langle\chi'|\chi''\rangle\,|{\rm down}\rangle\langle{\rm up}|,$

where $|{\rm up}\rangle$ and $|{\rm down}\rangle$ here abbreviate the joint atom-pointer states $|{\rm up}\rangle|{\rm left}\rangle$ and $|{\rm down}\rangle|{\rm right}\rangle$.

If the ancillary states are orthogonal, $\langle\chi'|\chi''\rangle = 0$, then we recover the projection postulate describing the final state when the outcome is ignored or unknown:

$\rho = |\alpha|^2\,|{\rm up}\rangle\langle{\rm up}| + |\beta|^2\,|{\rm down}\rangle\langle{\rm down}|.$

Hence the projection transformation (1) (without post-selection) can indeed be modeled by a unitary transformation.
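Here is a small numerical sketch of this partial-trace calculation (my own illustration; the two-dimensional ancilla and the particular states are arbitrary choices):

```python
import numpy as np

def reduced_state(alpha, beta, chi1, chi2):
    """Build |Psi> = alpha|up,left>|chi1> + beta|down,right>|chi2> and
    trace out the ancilla, keeping the joint atom+pointer state."""
    up_left = np.zeros(4); up_left[0] = 1.0        # |up>|left>    -> index 0
    down_right = np.zeros(4); down_right[3] = 1.0  # |down>|right> -> index 3
    psi = alpha * np.kron(up_left, chi1) + beta * np.kron(down_right, chi2)
    psi = psi.reshape(4, len(chi1))  # rows: atom+pointer, cols: ancilla
    return psi @ psi.conj().T        # partial trace over the ancilla

a = b = 1 / np.sqrt(2)
chi_orth = (np.array([1.0, 0.0]), np.array([0.0, 1.0]))  # <chi'|chi''> = 0
chi_same = (np.array([1.0, 0.0]), np.array([1.0, 0.0]))  # <chi'|chi''> = 1

print(reduced_state(a, b, *chi_orth))  # diagonal: coherences destroyed
print(reduced_state(a, b, *chi_same))  # off-diagonals survive: no decoherence
```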

This important process is called decoherence. It shows us that the non-classical features of a coherent superposition, such as interference, are eliminated if the system or apparatus is allowed to interact with ancillary degrees of freedom which are either ignored or unknown.

If one takes into account the microscopic degrees of freedom associated with ever-present environment systems, such as dust particles or the cosmic microwave background, these can be included via the ancillary states $|\chi\rangle$. Since these environment systems unavoidably interact with all macroscopic systems, such as pointers on measurement devices, they will continuously couple to the pointer states, producing coherently entangled superpositions as described above. Moreover, it is reasonable to infer that the environment states coupled to macroscopically distinct pointer positions will become orthogonal after interacting with (reflecting off of) those macroscopically distinct pointer states. Hence for generic macroscopic systems, where the environment states are not recorded, the pointer + atom states will reduce to the mixture described above.

These considerations explain why it is difficult in practice to observe interference effects in macroscopic systems possessing many degrees of freedom. Recall that one never observes interference effects with an individual system. Interference effects can only be observed by collecting statistics over an ensemble of identically prepared systems. Moreover, the interference fringes that may be observed in the probability distributions appear on increasingly short wavelengths as the mass of the system increases. Thus finer and finer resolution is required as we approach the macroscopic world. So there are many more reasons than just decoherence available to explain why we do not readily observe interference effects in our everyday observations of the world.
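To put a rough number on this (an illustrative order-of-magnitude estimate, not from the notes): the fringe spacing is set by the de Broglie wavelength $\lambda = h/mv$. For a dust grain of mass $m \sim 10^{-9}$ kg drifting at $v \sim 1$ mm/s,

$\lambda = \frac{h}{mv} \approx \frac{6.6\times 10^{-34}\ \mathrm{J\,s}}{(10^{-9}\ \mathrm{kg})(10^{-3}\ \mathrm{m/s})} \approx 7\times 10^{-22}\ \mathrm{m},$

far below any conceivable detector resolution, whereas for an electron at thermal speeds the wavelength is on the order of nanometres.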

Does decoherence solve the measurement problem? Can we interpret the final mixed state as an ordinary classical mixture of the two possible pointer positions?

A first problem with this approach is the ambiguity of mixtures (discussed last week). While the state describing the final pointer state may be interpreted as a classical mixture of the two possible pointer positions, this is a non-unique decomposition of the mixed state. It is also possible to re-express the state as a mixture of two very non-classical states that have nothing to do with well-defined pointer positions. This is called the preferred basis problem.
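A worked instance of this ambiguity, for the equal-weight case $|\alpha|^2 = |\beta|^2 = 1/2$: the same mixed state admits the two decompositions

$\rho = \tfrac{1}{2}\,|{\rm left}\rangle\langle{\rm left}| + \tfrac{1}{2}\,|{\rm right}\rangle\langle{\rm right}| = \tfrac{1}{2}\,|+\rangle\langle +| + \tfrac{1}{2}\,|-\rangle\langle -|, \qquad |\pm\rangle = \frac{|{\rm left}\rangle \pm |{\rm right}\rangle}{\sqrt{2}},$

and nothing in $\rho$ itself singles out the pointer-position decomposition over the superposition one.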

A second problem is that the total system is still in a pure state (a coherent entangled superposition). The state of the combined system clearly does not allow the assignment of definite position properties to elements of the combined system (consisting of the pointer, the atom, and the ancillary degrees of freedom $\chi$). Is it self-consistent to deny definite properties for the combined system while asserting definite properties for a subsystem? Can we conclude that, because of decoherence, there is no longer a conflict between the orthodox interpretation and the existence of a definite position property for the macroscopic pointer?

There is a common misconception that the practical consideration of the environment, which generically produces a decoherence transition of the form described by transformation (1) earlier (because of the ubiquitous coupling of macroscopic devices to these microscopic environment degrees of freedom), solves the measurement problem. However, as shown above, decoherence cannot explain the occurrence of transformation (2), which lies at the heart of the measurement problem.

Recall that the eigenvalue-eigenstate link of the orthodox interpretation tells us that definite properties for the positions of the atom and pointer should be assigned if and only if the combined state is a factorable state of the form $|\psi\rangle = |{\rm up}\rangle|{\rm left}\rangle$. A mixed state obtained by partial tracing over the environment (or over the environment and the atom) is not of this form and therefore cannot be assigned a definite property.

Hence, even if decoherence effects are taken into account, the orthodox interpretation still needs the post-selected projection postulate (transformation (2)) to explain the existence of macroscopic facts, and the measurement problem remains unsolved.

Contemporary Interpretations

A consistent description of macroscopic facts requires either expanding upon or rejecting the interpretative postulates of the orthodox interpretation.

Dynamical collapse interpretations specify the exact conditions under which collapse occurs by adding a non-linear term to the Schrödinger equation. Strictly speaking these interpretations are actual modifications of the mathematical formalism and not just interpretations in the sense of specifying ontological bridge-principles. P. Pearle will describe spontaneous collapse models to us in March.

The many-worlds interpretation developed by Everett (1957) rejects the projection postulate and imagines reality dividing into alternate but equally valid branches. This interpretation will be described to us next week by D. Wallace.

In many contemporary interpretations the effects of decoherence play a pivotal role in defining the ontology. One example is the existential interpretation advocated by W. Zurek (1993), which is a variation of the many-worlds interpretation. Another example is the decoherent/consistent histories interpretation, developed by R. Griffiths (1984) and extended by Gell-Mann and Hartle (1990), which will be described to us by R. Griffiths in March.

Last but not least, we have interpretations which reject the assumption that quantum states provide a complete specification of systems' properties. On the one hand there is the statistical interpretation, developed by Ballentine (1970), which, following Einstein, merely rejects the completeness assumption and emphasizes the statistical/epistemic nature of the quantum state. This perspective will be explained by Ballentine in February, and further developed by myself and Rob Spekkens in March. On the other hand there are interpretations which seek to explicitly identify the additional hidden variables needed for a complete specification of a system's properties. The most important example of this kind of interpretation is the de Broglie-Bohm (1927/1952) pilot-wave theory, which will be introduced to us by S. Goldstein in February, and further elaborated by A. Valentini in March.

On Thursday we will spend our last introductory lecture discussing the constraints on hidden variables.
