## Einstein's reality criterion

Sorry for the delay. This week I want to write a short post continuing to showcase results of the late Asher Peres which are, regrettably, not well known.

In the famous EPR paper, Einstein introduced a reality criterion:

"If, without in any way disturbing a system, we can predict with certainty ... the value of a physical quantity, then there exists an element of physical reality corresponding to this physical quantity"

Now it is generally accepted that the difference between classical and quantum mechanics is noncommutativity. While the mathematical community raises some subtle points about this assertion, from the physics point of view the distinction is rock solid and we can build upon it with confidence.

Now consider again the EPR-B experiment with its singlet state. Suppose the x and y components of the spins exist independently of measurement, and call the measured values $$m_{1x}, m_{1y}, m_{2x}, m_{2y}$$. From experimental results we know:

$$m_{1x} = - m_{2x}$$
and
$$m_{1y} = - m_{2y}$$

And now for the singlet state $$\psi$$ let's compute:

$$(\sigma_{1x} \sigma_{2y} + \sigma_{1y} \sigma_{2x})\psi$$

which turns out to be zero. The beauty of this is that $$\sigma_{1x} \sigma_{2y}$$ commutes with $$\sigma_{1y} \sigma_{2x}$$, and Einstein's reality criterion, extended to commuting operators, implies $$m_{1x} m_{2y} = - m_{1y} m_{2x}$$, which contradicts $$m_{1x} = - m_{2x}$$ and $$m_{1y} = - m_{2y}$$.

This contradiction is in the same vein as the GHZ result, but it is not well known. The catch is that a measurement of $$\sigma_{1x}\sigma_{2y}$$ cannot be performed at the same time as a measurement of $$\sigma_{1y}\sigma_{2x}$$, so we are reasoning counterfactually. However, counterfactual reasoning is allowed in a noncontextual setting (in classical mechanics, and in quantum mechanics for commuting operators), and the result stands.
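For a concrete check, here is a small self-contained JavaScript sketch (in the same style as the simulation code in these posts) that enumerates all sixteen assignments of ±1 values to the four presumed-preexisting spin components and counts how many satisfy the perfect anticorrelations together with the reality-criterion constraint:

```javascript
// Enumerate all 16 assignments of +/-1 to the four spin components
// and count how many satisfy all three constraints from the text.
var values = [1, -1];
var satisfying = 0;
values.forEach(function (m1x) {
  values.forEach(function (m1y) {
    values.forEach(function (m2x) {
      values.forEach(function (m2y) {
        var perfectAnticorrelation = (m1x === -m2x) && (m1y === -m2y);
        var realityCriterion = (m1x * m2y === -m1y * m2x);
        if (perfectAnticorrelation && realityCriterion) {
          satisfying++;
        }
      });
    });
  });
});
console.log(satisfying); // 0
```

No assignment survives: that is the contradiction in table form.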

## von Neumann  and Gleason vs. Bell

Returning to physics topics, today I want to talk about an important point of contention between von Neumann and Gleason on one hand, and Bell on the other. I had a series of posts about Bell in which I discussed his major achievement. However, I do not subscribe to his ontic point of view, and today I will attempt to explain why and perhaps persuade the reader with what I consider to be a solid argument.

Before Bell wrote his famous paper he had another one in which he criticized von Neumann, Jauch and Piron, and Gleason. The crux of the criticism was that additivity of orthogonal projection operators does not necessarily imply additivity of their expectation values:

$$\langle P_u + P_v \rangle = \langle P_{u}\rangle + \langle P_{v}\rangle$$

The actual technical requirements in von Neumann's and Gleason's cases were slightly different, but they can be reduced to the statement above. More importantly, this requirement is the nontrivial one in a particular proof of Gleason's theorem.

 Andrew Gleason

To Bell, additivity of expectation values is an unnatural requirement because he was able to construct hidden variable models violating it. This was the basis for his criticism of von Neumann and of his theorem on the impossibility of hidden variables. But is this additivity requirement really unnatural? What can happen when it is violated? I will show that violating additivity of expectation values can allow instantaneous communication at a distance.

The experimental setting is simple and involves spin 1 particles. The example which I will present is given in the late Asher Peres' book Quantum Theory: Concepts and Methods, on page 191. (This book is one of my main sources of inspiration for how we should understand and interpret quantum mechanics.)

The mathematical identity we need is:

$$J_{z}^{2} = {(J_{x}^{2} - J_{y}^{2})}^2$$

and the experiment is as follows: a beam of spin 1 particles is sent through a beam splitter which sends to the left particles of eigenvalue zero for $$J_{z}^{2}$$ and to the right particles of eigenvalue one for $$J_{z}^{2}$$.
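The identity above is easy to verify numerically. Below is a self-contained JavaScript sketch (complex entries represented as [re, im] pairs, standard spin-1 angular momentum matrices with $$\hbar = 1$$) that checks it:

```javascript
// Verify the spin-1 identity Jz^2 = (Jx^2 - Jy^2)^2 numerically.
// Complex numbers are represented as [re, im] pairs.
function cmul(a, b) { return [a[0]*b[0] - a[1]*b[1], a[0]*b[1] + a[1]*b[0]]; }
function cadd(a, b) { return [a[0]+b[0], a[1]+b[1]]; }

function mmul(A, B) {               // complex matrix product
  return A.map(function (row, i) {
    return B[0].map(function (_, j) {
      var s = [0, 0];
      for (var k = 0; k < B.length; k++) s = cadd(s, cmul(A[i][k], B[k][j]));
      return s;
    });
  });
}

function msub(A, B) {               // entrywise complex difference
  return A.map(function (row, i) {
    return row.map(function (x, j) { return [x[0]-B[i][j][0], x[1]-B[i][j][1]]; });
  });
}

var h = 1 / Math.sqrt(2);
// Standard spin-1 matrices (hbar = 1)
var Jx = [[[0,0],[h,0],[0,0]], [[h,0],[0,0],[h,0]], [[0,0],[h,0],[0,0]]];
var Jy = [[[0,0],[0,-h],[0,0]], [[0,h],[0,0],[0,-h]], [[0,0],[0,h],[0,0]]];
var Jz = [[[1,0],[0,0],[0,0]], [[0,0],[0,0],[0,0]], [[0,0],[0,0],[-1,0]]];

var Jz2 = mmul(Jz, Jz);
var D = msub(mmul(Jx, Jx), mmul(Jy, Jy));   // Jx^2 - Jy^2
var D2 = mmul(D, D);

// Maximum absolute entrywise difference between Jz^2 and (Jx^2 - Jy^2)^2
var maxDiff = 0;
for (var i = 0; i < 3; i++)
  for (var j = 0; j < 3; j++) {
    maxDiff = Math.max(maxDiff,
      Math.abs(Jz2[i][j][0] - D2[i][j][0]),
      Math.abs(Jz2[i][j][1] - D2[i][j][1]));
  }
console.log(maxDiff < 1e-12); // true
```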

Now a lab on the right decides to measure either $$J_z$$ or $$J_{x}^{2} - J_{y}^{2}$$.

For the laboratory on the right let's call the projectors in the first case $$P_u$$ and $$P_v$$ and in the second case $$P_x$$ and $$P_y$$

For the lab on the left let's call the projectors in the first case $$P_{w1}$$ and in the second case $$P_{w2}$$.

Because of the mathematical identity we have $$P_u + P_v = P_x +P_y$$, and the issue becomes: should additivity hold for the expectation values as well?

$$\langle P_{u}\rangle + \langle P_{v}\rangle = \langle P_{x}\rangle + \langle P_{y}\rangle$$

For the punch line we have the following identities:

$$\langle P_{w1}\rangle = 1 - \langle P_{u}\rangle - \langle P_{v}\rangle$$
and
$$\langle P_{w2}\rangle = 1 - \langle P_{x}\rangle - \langle P_{y}\rangle$$

and as such if the additivity requirement is violated we have:

$$\langle P_{w1}\rangle \neq \langle P_{w2}\rangle$$

Therefore, regardless of the actual spatial separation, the lab on the left can figure out which experiment the lab on the right decided to perform!

With this experimental setup, if additivity of expectation values were false, one could even violate causality!

Back to Bell: the fact that von Neumann and Gleason did not provide a justification for their requirement does not invalidate their arguments; the justification was found at a later time.

But what about the Bohmian interpretation of quantum mechanics? Although there are superluminal speeds in the theory, superluminal signaling is not possible in it. This is because the Bohmian interpretation respects the Born rule, which is a consequence of Gleason's theorem, and it respects the additivity of expectation values as well. The Bohmian interpretation suffers from other issues, however.

## A US Presidential Election Analysis

Once in a while, important events deserve to be discussed and they dislodge physics topics. I wrote in the past about Donald Trump, and today I want to revisit the topic and present some analysis on what is currently going on in US election politics. By now the election outcome is all but certain: Trump will lose, and Clinton will win, but what is the basis for this prediction?

If you have never heard of it, there is an amazing site by Nate Silver: http://projects.fivethirtyeight.com/2016-election-forecast/

Nate Silver has well deserved credibility as a forecaster, and he performs far more in-depth election analysis than what you find on the usual media outlets like CNN.

In the image below you see the daily graph of the winning chances for Trump (the red line) and Clinton (the blue line).

Mid July, Trump got a post Republican convention boost and he was on the rise until Clinton had the Democratic convention. The sharp Trump decline after that convention was due to his attack on the Khan family, whose son died for America. When that scandal faded in mid August, Trump's odds began improving, following the erosion of trust in Clinton due to the email server scandal and concerns about her health. Then came the first debate, in which Trump had a very good first half hour but was ill prepared for the long haul of the debate. That started a turn-off reaction among the independent voters, who only now got their first serious look at him.

Still, the slide was temporary, the fluctuations were comparable with those of the prior two weeks, and for two days he was climbing back in the polls. At this point the famous tape of him bragging about grabbing women by their genitals surfaced, and this started a chain reaction, mostly inside the Republican party. The tape reversed the trend, but what killed his election chances was his performance in the second debate. Trump made two strategic mistakes:

• he attacked Hillary (and Bill Clinton) instead of sincerely apologizing
• he dismissed the tape as locker room talk and claimed he did not do anything physical

Let's see why those were fatal mistakes for him. Going on the offensive when people expected genuine contrition made Trump appear like a rabid dog, and people were hugely disgusted by his behavior. The general consensus of the independents who watched the second debate was that they felt dirty and in need of a shower afterwards. The second debate reduced Trump's chances to low-teen numbers. If you look at the two prior cycles (June-August and August-October) you can see Trump's bounce-back rate, and there is not enough time for him to close the gap before election day.

Now even if the election were postponed a few months, Trump would never recover, due to his second strategic mistake. For all his playboy behavior, it is impossible that he never did anything real like what he was bragging about on the tape. By claiming it was all "only talk" (as opposed to Bill Clinton's actions), he encouraged women to come forward to tell their stories. Once this starts, it cannot be stopped. Just ask Bill Cosby how it happened in his case: the same pattern will repeat here.

When the tape was released, Republicans running for reelection started deserting Trump out of fear that he would negatively affect their chances of reelection due to the backlash in the women's vote. By now it is clear that Trump's chances of election are virtually zero, and this has the potential to split the Republican party.

After the election loss, the finger-pointing will begin. Reince Priebus has no real vision or power and will most likely lose his job. The power vacuum will start a chaotic period for the Republican party, which will end either in a victory for the anti-Trump forces or in a party split. My bet is that the party will remain intact, since politicians tend to act as a pack: there is strength in numbers and it is hard to survive alone.

## Local Causality in a Friedmann-Robertson-Walker Spacetime

A few days ago I learned about a controversy regarding Joy Christian's paper:
Local Causality in a Friedmann-Robertson-Walker Spacetime which got published in Annals of Physics and was recently withdrawn: http://retractionwatch.com/2016/09/30/physicist-threatens-legal-action-after-journal-mysteriously-removed-study/

The paper repeats the same mathematically incorrect arguments Joy Christian has made against Bell's theorem before, and has nothing to do with Friedmann-Robertson-Walker spacetime. The FRW spacetime was only used as a trick to land the paper with referees who are not experts on Bell's theorem. In particular, the argument is the same as the one in Joy's incorrect one-pager preprint.

The mistake happens in two steps:
• a unification of two algebras into the same equation
• a subtle transition from a variable to an index in a computation mixing apples with oranges

I will run the explanation in parallel between the one-pager and the withdrawn paper because it is easier to see the mistake in the one-pager.

Step 1: One-pager Eq. 3 is the same as FRW paper Eq. 49:

$$\beta_j \beta_k = -\delta_{jk} - \epsilon_{jkl} \beta_l$$
$$L(a, \lambda) L(b, \lambda) = - a\cdot b - L(a\times b, \lambda)$$

In the FRW paper $$L(a, \lambda) = \lambda I\cdot a$$, while in the one-pager $$\beta_j (\lambda) = \lambda \beta_j$$, where $$\lambda$$ is a choice of orientation. This may look like an innocuous unification, but in fact it describes two distinct algebras with distinct representations.

This means that Eqs. 3/49 describe two multiplication rules (and let's call them A for apples and O for oranges). Unpacked, the multiplication rules are:

$$A_j A_k = -\delta_{jk} + \epsilon_{jkl} A_l$$
$$O_j O_k = -\delta_{jk} - \epsilon_{jkl} O_l$$

The matrix representations are:

$$A_1 = \left( \begin{array}{cc} i & 0 \\ 0 & -i \end{array}\right) = i\sigma_3$$
$$A_2 = \left( \begin{array}{cc} 0 & -1 \\ 1 & 0 \end{array}\right) = -i \sigma_2$$
$$A_3 = \left( \begin{array}{cc} 0 & -i \\ -i & 0 \end{array}\right)= -i \sigma_1$$

and $$O_i = - A_i = {A_i}^{\dagger}$$

Try multiplying the above matrices to convince yourself that they are indeed a valid representation of the multiplication rule.
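If you prefer to let a machine do the multiplication, here is a self-contained JavaScript sketch (complex entries as [re, im] pairs, helper names of my choosing) checking a representative case of each rule:

```javascript
// Check the two multiplication rules using the 2x2 representations above.
// Complex entries are [re, im] pairs.
function cmul(a, b) { return [a[0]*b[0] - a[1]*b[1], a[0]*b[1] + a[1]*b[0]]; }

function mmul2(A, B) {              // 2x2 complex matrix product
  var C = [[[0,0],[0,0]], [[0,0],[0,0]]];
  for (var i = 0; i < 2; i++)
    for (var j = 0; j < 2; j++)
      for (var k = 0; k < 2; k++) {
        var p = cmul(A[i][k], B[k][j]);
        C[i][j][0] += p[0];
        C[i][j][1] += p[1];
      }
  return C;
}

function equal(A, B) { return JSON.stringify(A) === JSON.stringify(B); }

function negate(A) {
  return A.map(function (row) { return row.map(function (x) { return [-x[0], -x[1]]; }); });
}

var A1 = [[[0,1],[0,0]], [[0,0],[0,-1]]];   // i sigma_3
var A2 = [[[0,0],[-1,0]], [[1,0],[0,0]]];   // -i sigma_2
var A3 = [[[0,0],[0,-1]], [[0,-1],[0,0]]];  // -i sigma_1
var O1 = negate(A1), O2 = negate(A2), O3 = negate(A3);
var minusI = [[[-1,0],[0,0]], [[0,0],[-1,0]]];

// Apples: A_j A_k = -delta_jk + eps_jkl A_l, e.g. A1 A2 = +A3 and A1 A1 = -I
var applesRule = equal(mmul2(A1, A2), A3) && equal(mmul2(A1, A1), minusI);
// Oranges: O_j O_k = -delta_jk - eps_jkl O_l, e.g. O1 O2 = -O3
var orangesRule = equal(mmul2(O1, O2), negate(O3));
console.log(applesRule, orangesRule); // true true
```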

There is even a ket and bra (or column and row vector) representation of the two distinct algebras, but I won't go into details since it requires a math detour which would take the focus away from Joy's mistake.

Step 2: summing apples with oranges (or column vectors with row vectors)

The summation is done in Eqs. 5-7 and 67-75. The problem is that the sum from 1 to n contains two kinds of objects, apples and oranges, and should in fact be broken up into two sums. If it is to be combined into a single sum, then we need to convert apples and oranges into orientation-independent objects. Since $$L(a, \lambda) = \lambda I\cdot a$$ and $$\beta_j (\lambda) = \lambda \beta_j$$, with $$I \cdot a$$ and $$\beta_j$$ orientation-independent objects, converting the two kinds of objects into a single unified kind leaves out a factor of $$\lambda$$.

Since $$O_j=\beta_j (\lambda^k) = \lambda^k \beta_j$$ with $$\lambda^k = +1$$, and $$A_j=-\beta_j (\lambda^k) = \lambda^k \beta_j$$ with $$\lambda^k = -1$$, where $$\lambda^k$$ is the orientation of the k-th pair of particles, the unified sum in the transition from 6 to 7 and from 72 to 73 is missing a $$\lambda^k$$ factor.

Again, either break up the sum into apples and oranges (where the index k tells you which kind of object you are dealing with), or unify the sum and adjust it by converting to orientation-free objects, which is done by multiplication by $$\lambda^k$$. If we separate the sums, they do not cancel each other out because there is a $$-1$$ conversion factor from apples to oranges ($$O = - A$$), and if we unify the sum as Joy does in Eq. 74, the sum is not over $$\lambda^k$$ but over $${(\lambda^k)}^2$$, which does not vanish.

As it happens, Joy's research program is plagued by this $$-1$$ (or missing lambda) mistake in his attempts to vanquish a cross product term. But even if his proposal were mathematically valid, it would not represent a genuine challenge to Bell's theorem. Inspired by Joy's program, James Weatherall found a mathematically valid example very similar to Joy's proposal, but one which does not use quaternions/Clifford algebras.

The lesson of Weatherall is that correlations must be computed using actual experimental results and the computation (like the one Joy is doing at steps 67-75) must not be made in a hypothetical space of "beables".

Now back to the paper withdrawal: the journal did not act properly, since it should have notified Joy before taking action. However, Joy did not act in good faith either, disguising the paper behind its title to sneak it past imperfect peer review, and his attempt to play the victim in the comments section has no merit. In the end the paper is mathematically incorrect, has nothing to do with FRW spacetime, and (as shown by Weatherall) Joy's program is fatally flawed and cannot get off the ground even if there were no mathematical mistakes in it.

## The whole is greater than the sum of its parts

The title of today's post is a quote from Aristotle, but I want to illustrate it in the quantum formalism. Here I will refer to a famous Hardy paper: Quantum Theory From Five Reasonable Axioms. There one finds the following definitions:

• The number of degrees of freedom, K, is defined as the minimum number of probability measurements needed to determine the state, or, more roughly, as the number of real parameters required to specify the state.
• The dimension, N, is defined as the maximum number of states that can be reliably distinguished from one another in a single shot measurement.

Quantum mechanics obeys $$K=N^2$$ while classical physics obeys $$K=N$$.

Now suppose nature is realistic and the electron spin does exist independent of measurement. From Stern-Gerlach experiments we know what happens when we pass a beam of electrons through two such devices rotated by an angle $$\alpha$$ relative to each other: if we keep only the spin up electrons from the first device, on the second device the electrons are deflected up a fraction $$\cos^2 (\alpha /2)$$ of the time and deflected down a fraction $$\sin^2 (\alpha /2)$$ of the time. This is an experimental fact.

Now suppose we have a source of electron pairs prepared in a singlet state, meaning that the total spin of the system is zero. There is no reason to distinguish a particular direction in the universe, and with the assumption that spin exists independent of measurement, we can very naturally assume that our singlet state electron source produces an isotropic distribution of particles with opposite spins. Now we ask: in an EPR-B experiment, what kind of correlation would Alice and Bob get under the above assumptions?

We can go about finding the answer in three ways. First, we can cheat and look the answer up in a 1957 paper by Bohm and Aharonov, who first made the computation. This paper (and the answer) is cited by Bell in his famous "On the Einstein-Podolsky-Rosen paradox". But we can do better than that. Second, we can play with the simulation software from last time. Here is what you need to do:

-replace the generating functions with:

function GenerateAliceOutputFromSharedRandomness(direction, sharedRandomness3DVector) {
    var cosAngle = Dot(direction, sharedRandomness3DVector);
    var cosHalfAngleSquared = (1 + cosAngle) / 2;
    if (Math.random() < cosHalfAngleSquared)
        return +1;
    else
        return -1;
}

function GenerateBobOutputFromSharedRandomness(direction, sharedRandomness3DVector) {
    var cosAngle = Dot(direction, sharedRandomness3DVector);
    var cosHalfAngleSquared = (1 + cosAngle) / 2;
    if (Math.random() < cosHalfAngleSquared)
        return -1;
    else
        return +1;
}

-replace the -cosine curve drawing with a -0.3333333 cosine curve:

boardCorrelations.create('functiongraph', [function(t){ return -0.3333333*Math.cos(t); }, -Math.PI*10, Math.PI*10],{strokeColor:  "#66ff66", strokeWidth:2,highlightStrokeColor: "#66ff66", highlightStrokeWidth:2});

-replace the fit test for the cosine curve with one for a 0.3333333 cosine curve:

var diffCosine = epsilon + 0.3333333*Math.cos(angle);

and the result of the program (for 1000 directions and 1000 experiments) is:

So how does the program work? The sharedRandomness3DVector is the direction along which the spins are randomly generated. The dot product computes the cosine of the angle between the measurement direction and the spin, and from it we can compute the cosine of the half angle. The square of the cosine of the half angle is used to determine the random outcome. The resulting curve is 1/3 of the experimental correlation curve. Notice that the outputs for Alice and Bob are generated completely independently (locality).
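The 1/3 factor can also be checked without the plotting machinery. Below is a self-contained Monte Carlo sketch with its own dot product and random-direction helpers (names of my choosing, independent of the simulation page), using the same outcome rules as the generating functions above:

```javascript
// Monte Carlo check of the -cos(alpha)/3 correlation, using the same
// cos^2(half angle) outcome rules as the generating functions.
function randomUnitVector() {
  // Uniform direction on the sphere: z uniform in [-1,1], phi uniform
  var z = 2 * Math.random() - 1;
  var phi = 2 * Math.PI * Math.random();
  var r = Math.sqrt(1 - z * z);
  return [r * Math.cos(phi), r * Math.sin(phi), z];
}

function dot(u, v) { return u[0]*v[0] + u[1]*v[1] + u[2]*v[2]; }

function outcome(direction, spin, sign) {
  var cosHalfAngleSquared = (1 + dot(direction, spin)) / 2;
  return Math.random() < cosHalfAngleSquared ? sign : -sign;
}

var alpha = Math.PI / 3;                  // angle between the two settings
var a = [0, 0, 1];
var b = [0, Math.sin(alpha), Math.cos(alpha)];

var n = 200000, sum = 0;
for (var i = 0; i < n; i++) {
  var x = randomUnitVector();             // shared randomness: the "real" spin axis
  sum += outcome(a, x, +1) * outcome(b, x, -1);  // Alice: +1 branch, Bob: flipped
}
var correlation = sum / n;
// The sample correlation should sit near -cos(alpha)/3, not -cos(alpha)
console.log(Math.abs(correlation + Math.cos(alpha) / 3) < 0.02);
```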

But the actual analytical computation is not that hard to do either. We proceed in two steps.

Step 1: Let $$\beta$$ be the angle between one spin $$x$$ and a measurement device direction $$a$$. We have: $$\cos (\beta) = a\cdot x$$ and:

$${(\cos \frac{\beta}{2})}^2 = \frac{1+\cos\beta}{2} = \frac{1+a\cdot x}{2}$$

Keeping the direction $$x$$ constant, the measurement outcomes for Alice and Bob measuring on the directions $$a$$ and $$b$$ respectively are:

++ $$\frac{1+a\cdot x}{2} \frac{1+b\cdot (-x)}{2}$$ of the time
-- $$\frac{1-a\cdot x}{2} \frac{1-b\cdot (-x)}{2}$$ of the time
+- $$\frac{1+a\cdot x}{2} \frac{1-b\cdot (-x)}{2}$$ of the time
-+ $$\frac{1-a\cdot x}{2} \frac{1+b\cdot (-x)}{2}$$ of the time

which yields the correlation: $$-(a\cdot x) (b \cdot x)$$
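To see this, write the correlation as the sum of outcome products weighted by the probabilities above:

$$E(a,b|x) = P_{++} + P_{--} - P_{+-} - P_{-+} = \frac{2 - 2(a\cdot x)(b\cdot x)}{4} - \frac{2 + 2(a\cdot x)(b\cdot x)}{4} = -(a\cdot x)(b\cdot x)$$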

Step 2: integrate $$-(a\cdot x) (b \cdot x)$$ over all directions $$x$$. To this aim, align $$a$$ along the z axis and put $$b$$ in the y-z plane:

$$a=(0,0,1)$$
$$b=(0, b_y , b_z)$$

then go to spherical coordinates integrating using:

$$\frac{1}{4\pi}\int_{0}^{2\pi} d\theta \int_{0}^{\pi} \sin\phi d\phi$$

$$a\cdot x = \cos\phi$$
$$b\cdot x = (0, \sin\alpha, \cos\alpha)\cdot(\sin\phi \cos\theta, \sin\phi\sin\theta, \cos\phi) = \sin\alpha \sin\phi \sin\theta + \cos\alpha \cos\phi$$

where $$\alpha$$ is the angle between $$a$$ and $$b$$.

Plugging all back in and doing the trivial integration yields: $$-\frac{\cos\alpha}{3}$$
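For the reader who wants to fill in the step: the term proportional to $$\sin\theta$$ vanishes in the $$\theta$$ integration, leaving

$$-\frac{\cos\alpha}{2}\int_{0}^{\pi} \cos^2\phi \, \sin\phi \, d\phi = -\frac{\cos\alpha}{2}\cdot\frac{2}{3} = -\frac{\cos\alpha}{3}$$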

So now for the moral of the story: the quantum mechanics prediction, and the experimentally observed correlation, is $$-\cos\alpha$$ and not $$-\frac{1}{3} \cos\alpha$$.

The incorrect 1/3 correlation factor comes from demanding (1) the experimentally proven behavior of two consecutive S-G measurements, (2) the hypothesis that the electron spins exist before measurement, and (3) an isotropic distribution of spins originating from a total spin zero state.

(1) and (3) cannot be discarded because (1) is an experimental behavior, and (3) is a very natural demand of isotropy. It is (2) which is the faulty assumption.

If (2) were true then, circling back to Hardy's result, we would be in the classical physics case $$K=N$$, which means that the whole is the sum of its parts.

Bell considered both the 1/3 result and the one from his inequality, and decided to showcase his inequality for experimental reasons: "It is probably less easy, experimentally, to distinguish (10) from (3), then (11) from (3).". Both hidden variable models:

if (Dot(direction, sharedRandomness3DVector) < 0)
    return +1;
else
    return -1;

and

var cosAngle = Dot(direction, sharedRandomness3DVector);
var cosHalfAngleSquared = (1 + cosAngle) / 2;
if (Math.random() < cosHalfAngleSquared)
    return -1;
else
    return +1;

are at odds with quantum mechanics and experimental results. The difference between them is in the correlation behavior at 0 and 180 degrees. If we allow information transfer between Alice's and Bob's generating functions (nonlocality), then it is easy to generate whatever correlation curve we want under both scenarios (play with the computer model to see how it can be done).
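The difference at 0 degrees is easy to exhibit numerically. The self-contained sketch below (helper names are mine) runs both models with identical settings for Alice and Bob: the sign model gives perfect anticorrelation, while the cos^2(half angle) model only reaches -1/3:

```javascript
// Compare the two hidden variable models at alpha = 0 (same setting on both sides).
function randomUnitVector() {
  var z = 2 * Math.random() - 1;
  var phi = 2 * Math.PI * Math.random();
  var r = Math.sqrt(1 - z * z);
  return [r * Math.cos(phi), r * Math.sin(phi), z];
}
function dot(u, v) { return u[0]*v[0] + u[1]*v[1] + u[2]*v[2]; }

function signModel(direction, x, sign) {        // Bell's sign model
  return dot(direction, x) < 0 ? sign : -sign;
}
function halfAngleModel(direction, x, sign) {   // the cos^2(half angle) model
  return Math.random() < (1 + dot(direction, x)) / 2 ? sign : -sign;
}

var a = [0, 0, 1];                              // alpha = 0: Bob uses the same direction
var n = 100000, sum1 = 0, sum2 = 0;
for (var i = 0; i < n; i++) {
  var x = randomUnitVector();
  sum1 += signModel(a, x, +1) * signModel(a, x, -1);
  sum2 += halfAngleModel(a, x, +1) * halfAngleModel(a, x, -1);
}
console.log(sum1 / n);                          // -1: perfect anticorrelation
console.log(Math.abs(sum2 / n + 1/3) < 0.02);   // true: only -1/3
```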

So from a realism point of view, which hidden variable model is better? Should we insist on perfect anti-correlations at 0 degrees, or should we demand the two consecutive S-G results along with realism? It does not matter, since both are wrong. In the end, local realism is dead.