1. Introduction
It is well known that the violation of Bell’s inequalities is incompatible with the intuitive ideas of Locality and Realism. During the decades-long discussion of the experimental observations of that violation, it was argued that technical imperfections left room for conspiratorial mechanisms, which received the general name of loopholes. Such mechanisms could reproduce the observed results without contradicting Locality and Realism. However, loophole-free experiments [1,2,3,4,5,6] were performed and confirmed the violation of Bell’s inequalities. Therefore, at least one of the hypotheses necessary to derive and observe the violation must be false.
At this point, interpretations diverge. Some say that Locality and Realism mean essentially a single hypothesis, and that what is false is “Local Realism” [7,8]. Others claim that only Realism is falsified, and that Locality plays no role in the problem [9]. In contrast, the expression quantum non-locality has become part of “popular knowledge”. Still others argue that Quantum Mechanics (QM) is strictly Local [10,11,12], and that the violation of Bell’s inequalities is a consequence of the wavy nature of matter [13].
There is an additional twist: Locality and Realism suffice to derive Bell’s inequalities, but an additional hypothesis is necessary in order to test them experimentally. This was originally shown by V. Buonomano in 1978, who named it Ergodicity [14]. The necessity of this additional hypothesis was rediscovered over the years under different names: homogeneous dynamics, uniform complexity, experiments’ exchangeability, counter-factual stability [15,16,17,18], and there are probably more that escaped my attention. The many versions of this hypothesis have subtle differences, but, at the end of the day, they all mean the following: that it is possible to insert data measured with different angle settings (see Figure 1), which are unavoidably recorded at different times, into a single theoretically derived expression (i.e., a Bell’s inequality). Details on the necessity and meaning of this hypothesis are discussed in Section 2. I stress that this hypothesis is not needed to derive Bell’s inequalities, but it is unavoidable when inserting measured data into them.
In order to avoid confusion, the meanings of “Locality” and “Realism” as they are understood in this paper are also defined in Section 2. I do not claim they are the “correct” or “best” definitions. They are just the ones used in this paper. According to these definitions, Locality, Realism, and Ergodicity are separate hypotheses, all of them necessary to derive and observe Bell’s inequalities.
Strictly speaking, two other hypotheses are also necessary: freedom of choice (which is relevant here to the observer’s choice of the angle settings in Figure 1) and non-signaling (i.e., the impossibility of sending signals that propagate faster than light). I assume these two hypotheses are valid. I also assume that all loopholes are closed.
In these conditions, a relevant question is: which of the three remaining fundamental hypotheses involved (Locality, Realism, and Ergodicity) is false when Bell’s inequalities are experimentally violated? Of course, more than one can be false. The cases where only one of them is false are considered here. Therefore, when the falsity of (e.g.) “Locality” is considered, it is implicitly assumed that “Realism” and “Ergodicity” are true. The aim of this paper is to propose an experiment to reveal (or, at least, to obtain some evidence indicating) the false hypothesis. The key is the relationship between the falsity of each of the three hypotheses and the randomness of the series of outcomes produced in a Bell’s setup. The problem of defining and testing randomness is reviewed at the end of Section 2. Section 3 reviews that the falsity of Locality implies the series must be “truly” random, and that the falsity of Realism leaves the series’ randomness undecided. Also in that Section, I claim that the falsity of Ergodicity implies the series must be non-random. In consequence, an experiment testing the series’ randomness is, in principle, able to reveal the false hypothesis. However, several problems must be considered. An attainable experiment is described in Section 4.
To avoid any confusion: the proposed experiment does not affect the validity of QM, and it is not a new test of QM. What is proposed is a test of which hypothesis among Locality, Realism, and Ergodicity (or some hypothesis with the same consequences as Ergodicity) is false. The results of the proposed experiment may affect the interpretation of QM (e.g., the Copenhagen interpretation assumes that Realism is false), but not the validity of QM or its predictions.
4. Proposed Experiment
4.1. Basic Scheme
As discussed in the previous sections, the falsity of each of the three main hypotheses (as they are defined in Section 2) implies a different level of randomness of the series produced in the setup of Figure 1. If Locality is false, the series must be algorithmically random; if Ergodicity is false, they must be non-uniform (hence, not random); and if Realism is false, the series’ randomness is left undecided. This result opens a way to decide which hypothesis is false. However, nothing can be assumed about the validity of any of the three hypotheses involved (otherwise one would fall into a logical inconsistency), so the rate of rejected series (Ville’s principle) appears as the available method to evaluate randomness.
In practice, the rejection rate can be affected by many “technical” causes. The challenge is to find an experimental approach that gets rid of these causes, leaving only the effect of the falsity of one of the hypotheses. As always, the relative variation of a magnitude (in this case, randomness) is much easier to measure than its absolute value.
Suppose then that the source in Figure 1 emits maximally entangled states during square pulses of total duration twice L/c, where L is the distance between the stations and c is the speed of light. The time between pulses is adjusted to be much longer than the pulses’ duration. The intensity is adjusted such that much less than one photon per pulse is recorded on average. Trigger signals are sent to each station to indicate the start of each pulse and to synchronize the clocks. The angle settings {α,β} in each station are “randomly” (see Section 4.5) changed (as in the loophole-free experiments) just before the arrival of each pulse, and then left fixed during the pulse. Time-to-digital converters record the time elapsed from the start of each pulse until the detection of each photon. This is repeated for many (typically, tens of millions of) pulses during an experimental run. After the run has ended, data processing identifies the coincidences between A and B. Single detections are discarded. Binary series for each time interval within the (say, stroboscopically reconstructed) pulse are obtained in this way. The size of the time intervals depends in practice on the number of recorded coincidences (see the end of Section 4.3). In the discussion that follows, only two time intervals are considered: the pulses’ first half and the pulses’ second half.
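As an illustration of this post-processing step, the following minimal sketch assumes a hypothetical record format (one tuple per detection at each station: pulse index, detection time within the pulse, outcome bit) and shows how single detections are discarded and each coincidence is sorted into the pulses’ first or second half. It is only an outline of the logic, not actual acquisition software; the same structure extends directly to more than two time intervals (Section 4.3).

```python
# Minimal post-processing sketch. Assumed (hypothetical) data layout: each
# station's record is a list of tuples (pulse_index, t_ns, outcome_bit),
# where t_ns is the time elapsed from the pulse start and outcome_bit is
# the binary measurement result at that station. At most one detection per
# pulse per station is assumed (p << 1).

def split_coincidences(records_a, records_b, pulse_ns):
    """Discard single detections, keep coincidences (both stations fired
    within the same pulse), and sort each coincidence into the pulse's
    first or second half."""
    a_by_pulse = {pulse: (t, bit) for pulse, t, bit in records_a}
    b_by_pulse = {pulse: (t, bit) for pulse, t, bit in records_b}
    halves = {"first": [], "second": []}
    for pulse in a_by_pulse.keys() & b_by_pulse.keys():  # coincident pulses only
        t_a, bit_a = a_by_pulse[pulse]
        t_b, bit_b = b_by_pulse[pulse]
        # Use the later of the two detection times, so that "first half"
        # coincidences are unambiguously early in the pulse.
        half = "first" if max(t_a, t_b) < pulse_ns / 2 else "second"
        halves[half].append((bit_a, bit_b))
    return halves
```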
Suppose now that the violation of Bell’s inequalities is constant during the pulse duration, as predicted by QM and also observed [31,32]. During the pulses’ first half, detections at A and B are space-like separated. Therefore, during the pulses’ first half, the violation of Bell’s inequalities is possible only because Locality, or Realism, or Ergodicity is false. We know that one of them must be false. During the pulses’ second half, instead, there has been enough time for classical information to propagate between the stations, and Bell’s inequalities can be violated even if the three hypotheses are true. This experimental feature implies that the level of randomness during the first half may vary with respect to that in the second half, and that this variation occurs in a time typically too short for other perturbations (mechanical or thermal) to affect the results. The type of variation to be expected depends on which hypothesis is false, as discussed next.
4.2. If Locality Is False
Let us suppose that the violation of Bell’s inequalities observed during the pulses’ first half occurs because Locality is false. Therefore, as reviewed in Section 3.1, the series recorded during the first half must be algorithmically random. Instead, the series recorded during the pulses’ second half may be random, or may not. It is natural to expect the level of randomness, averaged over a large set of data, to decrease when passing from an enforced random regime to a non-enforced one. Therefore, the rejection rate averaged over large statistical samples should increase from a value near zero for the sample recorded during the pulses’ first half (loophole-free condition enforced) to a non-negligible value for the sample recorded during the pulses’ second half (loophole-free condition not enforced). Note that only a coarse relationship between the rejection rate and “actual” randomness (as defined by the unknown universal algorithmic test) is assumed, namely that they either increase or decrease together (not necessarily by the same amount) on average.
4.3. If Ergodicity or Realism Is False
Let us suppose now that the violation of Bell’s inequalities observed during the pulses’ first half occurs because Ergodicity is false. The series recorded during the first half must now be non-uniform, and the rejection rate should be close to 100%. Instead, the series recorded during the second half may be uniform, or may not. Following the same reasoning as in the previous Section, the rejection rate in the sample of series recorded during the pulses’ second half should now decrease.
Finally, let us suppose that the violation of Bell’s inequalities observed during the pulses’ first half occurs because Realism is false. The series recorded in the pulses’ first half may be random, or may not. The same applies to the series recorded in the second half. Therefore, the rejection rate averaged over large samples should remain constant during the pulse duration.
Usual sources of non-randomness, such as unequal detector efficiencies, are of course constant during the pulse duration. If the pulses are short enough (see Section 4.5), any thermal or mechanical perturbation will affect the rejection rate in the same way during the whole pulse duration. The variation of the rejection rate between the first and second halves, caused exclusively by the falsity of one of the hypotheses, should then be detectable in a statistically meaningful sample of series.
In summary, the consistent observation of an increasing (decreasing) rejection rate during the pulse duration suggests that Locality (Ergodicity) is false. A constant rate suggests that Realism is false instead. Conceivably, the latter result can also be produced in practice by a high level of noise masking the actual trend. If the trend is in fact observed to be constant within statistical deviation, the influence of the sources of noise present in the actual setup should be carefully analyzed. To help estimate the statistical deviation, the rejection rate should be calculated for different choices of the set of tests (see the next Section).
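To make the decision procedure concrete, the sketch below compares the rejection rates of the two halves with a simple two-proportion z-test. It is only a schematic illustration under the assumptions stated here: the 0.05 significance threshold is an arbitrary placeholder, and an actual analysis would also model the noise sources discussed above.

```python
import math

def rejection_rate_trend(rejected_first, total_first, rejected_second, total_second):
    """Compare the rejection rate (fraction of series failing at least one test,
    Ville's principle) between the pulses' first and second halves using a
    two-proportion z-test. The sign of the difference, not its exact p-value,
    carries the physical meaning discussed in the text."""
    p1 = rejected_first / total_first
    p2 = rejected_second / total_second
    pooled = (rejected_first + rejected_second) / (total_first + total_second)
    se = math.sqrt(pooled * (1 - pooled) * (1 / total_first + 1 / total_second))
    z = (p2 - p1) / se if se > 0 else 0.0
    # Two-sided p-value from the normal approximation.
    p_value = math.erfc(abs(z) / math.sqrt(2))
    if p_value > 0.05:
        verdict = "constant within noise (compatible with Realism being false)"
    elif p2 > p1:
        verdict = "rate increases (compatible with Locality being false)"
    else:
        verdict = "rate decreases (compatible with Ergodicity being false)"
    return z, p_value, verdict
```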
If sufficient data are available, the pulses can be sliced into more than two parts (i.e., more than two time intervals, see the end of Section 4.1), and a curve of the evolution of the rejection rate during the pulse duration can be plotted. This would allow studying the statistical correlations in a more complete way and reaching more reliable conclusions.
4.4. Tests of Randomness and a Practical Consequence
A usual choice to apply Ville’s principle is the NIST battery of 16 statistical tests. As said, it is convenient to use a set of tests as large and diverse as possible. It is possible to add estimators of Kolmogorov complexity [33] and tools of nonlinear analysis to identify a compact object in phase space (Takens’ theorem) [34]. Entropies can be calculated. This is just an example of the set of tests that can be used.
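As a toy illustration of such a battery, the sketch below combines two NIST-style tests (frequency and runs) with a crude compression-based stand-in for a Kolmogorov-complexity estimator, and rejects a series if it fails any test (Ville’s principle). The thresholds and the choice of tests are arbitrary examples; an actual study would use the full NIST suite and the other tools mentioned above.

```python
import math
import zlib

def monobit_pvalue(bits):
    """NIST frequency (monobit) test p-value for a list of 0/1 values."""
    n = len(bits)
    s = abs(sum(1 if b else -1 for b in bits))
    return math.erfc(s / math.sqrt(2 * n))

def runs_pvalue(bits):
    """NIST runs test p-value (returns 0.0 if the frequency prerequisite fails)."""
    n = len(bits)
    pi = sum(bits) / n
    if abs(pi - 0.5) >= 2 / math.sqrt(n):
        return 0.0
    v = 1 + sum(1 for i in range(1, n) if bits[i] != bits[i - 1])
    num = abs(v - 2 * n * pi * (1 - pi))
    return math.erfc(num / (2 * math.sqrt(2 * n) * pi * (1 - pi)))

def pack_bits(bits):
    """Pack the 0/1 list into bytes so compression measures bit-level structure."""
    out = bytearray()
    for i in range(0, len(bits) - len(bits) % 8, 8):
        byte = 0
        for b in bits[i:i + 8]:
            byte = (byte << 1) | b
        out.append(byte)
    return bytes(out)

def compresses_well(bits):
    """Crude stand-in for a Kolmogorov-complexity estimator: a random series
    should not compress appreciably."""
    raw = pack_bits(bits)
    return len(zlib.compress(raw, 9)) < 0.95 * len(raw)

def rejected(series, alpha=0.01):
    """Ville's principle: reject the series if it fails any test in the battery."""
    return (monobit_pvalue(series) < alpha
            or runs_pvalue(series) < alpha
            or compresses_well(series))

def rejection_rate(list_of_series):
    return sum(rejected(s) for s in list_of_series) / len(list_of_series)
```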
For the aims of the proposed study, evaluating randomness according to Ville’s principle has the crucial advantage that no assumption about the validity of Locality, Realism, or Ergodicity is made. On the other hand, the measured rejection rate depends on the chosen set of tests, which is arbitrary. For this reason, I claim that the consistent observation of a trend in the time variation of the rejection rate provides some evidence about the falsity of one of the hypotheses, not a proof.
In spite of this limitation, the result of the proposed experiment has an immediate practical impact. Pulsed sources are useful in QKD to reduce the noise (improving the signal-to-noise ratio) and to synchronize the clocks, which is a technical problem of main concern. If the rejection rate were shown to increase with time, then QKD using entangled states would be safer if pulses shorter than L/c were used to generate the key. If the rejection rate were shown to decrease instead, the final part of long pulses (duration > L/c) should be preferred. Finally, if the rejection rate were shown to be constant, then both the pulse duration and the part of the pulse used would be irrelevant. Similar advice would apply to the most efficient way (i.e., the one delivering the lowest number of non-random series) to operate a pulsed quantum RNG. Note that this practical advice would be valid even if the foundational issue remained not fully decided.
4.5. Conditions for an Attainable Experiment
Unfortunately, the experiment as described is unattainable nowadays. Due to the detectors’ efficiency, the loophole-free violation of Bell’s inequalities can be reached with photons only by using Eberhard states, which produce non-uniform series. Extractors of randomness are applied [35,36], but their use in this case may mask the very trend the experiment is intended to reveal. Setups exploiting entanglement swapping between photons and matter do use Bell states, but produce a rate of detections too low to be suitable.
A simple solution at hand is to accept the fair sampling assumption [37] as valid. This means that the set of recorded coincidences is an unbiased statistical sample of the whole set of detected and non-detected photons. Under this assumption, Bell states and existing photon detectors can be used.
Other problems are achieving fast and random setting changes between the pulses. In addition to the technical difficulty of fast switching, there is the logical problem (a sort of infinite regress) of performing truly random setting changes. Both problems can be circumvented by assuming that any hypothetical correlation between A and B vanishes when the source of entangled states is turned off. This assumption is supported by the following observation: in a pulsed Bell’s setup, the S_CHSH parameter is observed to decay following a certain curve as the time coincidence window is increased beyond the pulse duration. This curve fits the one predicted if the detections outside the pulse are assumed to be fully uncorrelated [31]. Assuming non-correlation implies the curve but, of course, observing the curve does not necessarily imply non-correlation. Nevertheless, if the latter implication (which is most reasonable) is accepted as true, then random setting changes become unnecessary. Only pulses well separated in time are required.
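For reference, the sketch below shows how S_CHSH can be estimated from the coincidence counts at the four setting pairs, and the simple dilution expected when a fraction of the coincidences inside the window are fully uncorrelated accidentals (each contributing zero correlation on average). The sign convention of the CHSH combination depends on the chosen settings; the form below is the standard one, not necessarily the one used in [31].

```python
def correlation(counts):
    """E(alpha, beta) from coincidence counts. `counts` maps the outcome pairs
    '++', '+-', '-+', '--' to the number of coincidences recorded at that
    pair of angle settings."""
    n_same = counts['++'] + counts['--']
    n_diff = counts['+-'] + counts['-+']
    return (n_same - n_diff) / (n_same + n_diff)

def s_chsh(counts_by_setting):
    """CHSH parameter from the four setting pairs; the keys are illustrative."""
    e = {pair: correlation(c) for pair, c in counts_by_setting.items()}
    return e["a,b"] - e["a,b'"] + e["a',b"] + e["a',b'"]

def diluted_s(s_true, accidental_fraction):
    """Expected S_CHSH when a fraction of the coincidences in the window are
    fully uncorrelated accidentals: they contribute zero correlation on
    average, so S is simply scaled down."""
    return (1.0 - accidental_fraction) * s_true
```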
Under these two assumptions (“fair sampling” and, say, “uncorrelated when the source is turned off”), the proposed experiment is within reach even with limited means. The results obtained under these conditions may not be considered definitive, but they may still give a clue about the answer to the main question. They may also help to decide whether or not it is worth the effort to perform the complete experiment. Also important, they would have an immediate practical impact (see the end of Section 4.4 above).
In order to keep the rate of accidental coincidences low, the source intensity or pumping power must be adjusted such that the probability p of detection per pulse is p << 1 [38]. The photon down-conversion process is usually so weak, and the collection efficiency of the radiation so limited, that this condition is easily reached in practice for the pulse duration of interest here. In other words, most pulses are “naturally empty”. Choosing p = 0.1 and a pulse repetition rate of 1 MHz, series 6 Mbits long are recorded at each station in a run lasting 300 s. It is not convenient to increase the repetition rate beyond that value, as ≈1 MHz is the typical saturation threshold of available single-photon detectors (silicon avalanche photodiodes). If the stations are separated by 20 m, then the pulse duration is ≈120 ns and the duty cycle is ≈12%. These numbers are easily achievable by pumping the nonlinear crystals that generate the entangled states with a pulsed diode laser, which typically has a bandwidth of 20 MHz. Samples of the laser pulses can be sent to each station and recorded with fast photodiodes to synchronize the clocks of the recording devices. Detectors’ dead time has been identified as a cause of non-uniform series in some quantum RNGs, but this is not important in the proposed experiment, because the average number of detections per pulse is, as said, small (p << 1) and the pulses are well separated.
5. Summary
By recording the coarse time evolution of the rate of non-random series obtained in a suitable pulsed Bell’s experiment, it is possible to obtain some evidence about which one of the three fundamental hypotheses (Locality, Realism, or Ergodicity) is false when the violation of Bell’s inequalities is observed. This is of obvious interest for the foundations of QM.
The proposed experiment requires some additional assumptions to be technically achievable nowadays. This may weaken its impact from the foundational point of view. Nevertheless, measuring the variation in the rate of non-random series within the pulses would have an immediate practical impact on the best use of quantum RNGs and of device-independent QKD. Depending on the experiment’s result, it may be advisable to use sources with pulses shorter than L/c, or instead to use the end of long pulses (duration > L/c), or it may turn out that the pulse duration and the selected section are irrelevant.