It’s new paper time once again here at CS! Today’s topic is an entropic form of the uncertainty principle, a joint effort by Mario Berta, Matthias Christandl, Roger Colbeck, Renato Renner, and yours truly, detailed here. [The paper is new to publication, but not to the arxiv, whose slightly older version (due to Nature Physics rules) is here.]
Putting a bit of spin on our result (i.e. do not quote this part), what we show is that the usual uncertainty principle in quantum mechanics is wrong and how to fix it!
Now that I’ve got your attention, I should tell you the actual result: With the assistance of quantum memory, the uncertainty principle as normally formulated does not apply in all circumstances. Instead it has to be modified to take into account the “quantumness” of the memory, and in particular the possibility of entanglement between the memory and the thing you’re trying to measure.
Because the “uncertainty principle” is really a whole host of related issues, we specify a concrete setting in which the original entropic form of the uncertainty principle is no longer valid. It’s a game involving two players, named, as usual, Alice and Bob. In the game Alice and Bob agree on two possible measurements for Alice to make on a physical system she’s going to get from Bob. To be more specific, let’s say the two measurements are angular momentum along two different spatial directions and the physical system is an electron. The angular momentum of an electron along any axis can only take on two possible values, $+\hbar/2$ or $-\hbar/2$. Bob goes first in the game and prepares the electron in any way he likes, giving it to Alice when he’s done. Now it’s her turn, and she performs one of the two measurements and tells Bob which one. Finally, Bob attempts to guess her result. If he does, he wins the game. If not, he loses. The point of thinking in terms of this game is not that it’s fun to play, but rather to very clearly fix our attention on the issue of “knowing” (being able to predict) the outcome of either of two possible measurements, only one of which is going to be performed. So we’re not immediately interested in questions like how much measurement disturbs the state, or whether both measurements could be performed at once (though they are related).
In the world of classical physics, Bob can always win the game. That’s because physical properties are always in principle well-defined; “classical electrons” (think of a spinning ball) have definite values of angular momentum along every given axis, including the two chosen at the beginning of the game. So Bob simply prepares the electron so that he knows the two values and gives it to Alice.
But in the world of quantum physics, physical properties are not always well-defined. In particular the angular momenta of an electron along two different axes are not well-defined; if one is known then the other must be at least partially uncertain. Thus, it initially appears that the uncertainty principle implies that Bob cannot always win the game. However, there’s a catch: He’s allowed to use any means at his disposal, and since he prepared the state in the first place, he could have entangled it with a “quantum memory”, i.e. some other quantum system. This possibility is not considered in usual formulations of the uncertainty principle.
In fact, given the possibility of entanglement it’s easy to see that he can always win by preparing the electron and quantum memory, which for our purposes here is just another electron, in a state of total angular momentum zero. This is precisely the state used in Bohm’s version of the EPR paradox, and it has the peculiar property that, even though angular momenta along different axes are not simultaneously well-defined, if the two electrons are each measured along the same axis individually, opposite outcomes are nevertheless always obtained. Therefore, Bob can always win the game in quantum mechanics by waiting for Alice to tell him which measurement she made, and then just making the same measurement on his system.
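The perfect anticorrelation of the singlet state along any shared axis is easy to check numerically. Here’s a small numpy sketch (my own illustration, not code from the paper) that computes the joint outcome probabilities when both electrons are measured in the same basis, for two different choices of axis:

```python
import numpy as np

# Bohm's EPR state: the spin singlet |psi> = (|01> - |10>)/sqrt(2).
singlet = np.array([0, 1, -1, 0]) / np.sqrt(2)

# Two measurement bases for a single qubit: Z (computational) and X.
z_basis = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
x_basis = [np.array([1.0, 1.0]) / np.sqrt(2), np.array([1.0, -1.0]) / np.sqrt(2)]

def joint_probs(basis):
    """P(a, b) when both qubits are measured in the same basis."""
    probs = np.zeros((2, 2))
    for a, va in enumerate(basis):
        for b, vb in enumerate(basis):
            amp = np.kron(va, vb) @ singlet
            probs[a, b] = abs(amp) ** 2
    return probs

for name, basis in [("Z", z_basis), ("X", x_basis)]:
    p = joint_probs(basis)
    # Same-outcome probability vanishes: the results are always opposite.
    print(name, p[0, 0] + p[1, 1])  # -> 0.0 for both bases
```

So whichever axis Alice announces, Bob measures his electron along it and reports the opposite outcome, winning every round.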
The above analysis is straightforward and fairly elementary in the subject of quantum information theory. So what do we add to this in the paper? Answer: a version of the uncertainty principle that works in the case Bob has a quantum memory, and a few applications to boot. Read on for more; warning, it’s technical.
In the setting of the game, it’s somewhat natural to quantify the uncertainty Bob has about either outcome using the entropy $H$. If we call the measurement outcomes $R$ and $S$ and treat them as random variables (since there’s a probability distribution for them once we specify the state Alice has), then an entropic version of the uncertainty principle due to Maassen and Uffink states that the sum of the entropies, $H(R)+H(S)$, is always bigger than some constant $\log_2 \frac{1}{c}$ which depends on the nature of the observables in question (here $c$ is the largest squared overlap between eigenstates of the two observables). For incompatible measurements, $\log_2 \frac{1}{c}$ is always positive, so in that case there’s always some uncertainty about one or other of the observables. Thus, the entropic version is morally equivalent to the usual, variance-based version. Somewhat more technically, we really have the statement $H(R)_\rho + H(S)_\rho \geq \log_2 \frac{1}{c}$, where the subscript $\rho$ means we take the quantum state $\rho$ which describes the electron, compute the probability distribution $p_r$ (or $p_s$, respectively), and then compute the entropy using the formula $H = -\sum_r p_r \log_2 p_r$.
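As a sanity check on the Maassen–Uffink bound, here’s a numpy sketch (my illustration, not from the paper) for the two measurements from the game, angular momentum along complementary axes, modeled as the $Z$ and $X$ bases of a qubit, where $c = 1/2$ and the bound is one bit:

```python
import numpy as np

def shannon(p):
    """Shannon entropy in bits of a probability vector."""
    p = np.asarray(p, dtype=float)
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log2(p)))

# Complementary qubit measurements: Z (computational) and X bases.
z_basis = np.eye(2)
x_basis = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)

# Overlap constant c = max_{j,k} |<z_j|x_k>|^2 = 1/2, so log2(1/c) = 1.
c = max(abs(np.vdot(z, x)) ** 2 for z in z_basis for x in x_basis)
bound = np.log2(1 / c)

rng = np.random.default_rng(0)
for _ in range(1000):
    # Random pure qubit state.
    psi = rng.normal(size=2) + 1j * rng.normal(size=2)
    psi /= np.linalg.norm(psi)
    pr = [abs(np.vdot(z, psi)) ** 2 for z in z_basis]
    ps = [abs(np.vdot(x, psi)) ** 2 for x in x_basis]
    assert shannon(pr) + shannon(ps) >= bound - 1e-9
print("H(R) + H(S) >= log2(1/c) =", bound, "for all sampled states")
```

The bound is saturated by the basis states themselves: for $|0\rangle$, the $Z$ outcome is certain ($H(R)=0$) while the $X$ outcome is a fair coin flip ($H(S)=1$).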
This result can be extended fairly easily to the case that Bob possesses a classical memory, for instance for the purpose of remembering how he prepared the quantum state! If his classical information is denoted by $C$, then what this means is that the classical information (at best) specifies the quantum state of the electron. (It could also be useless, like tomorrow’s weather forecast.) Then one imagines applying the above uncertainty principle in each case and taking the average over the information $C$. All this leads one to use the conditional entropy, the entropy of the outcome given the classical information, and obtain $H(R|C) + H(S|C) \geq \log_2 \frac{1}{c}$. The magic of Maassen and Uffink’s result is that $\log_2 \frac{1}{c}$ doesn’t depend on the state $\rho$, so this doesn’t interfere with our averaging procedure.
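The averaging step can be made concrete with a toy example (my own, hypothetical, not from the paper): suppose $C$ records which of two preparations Bob used, $|0\rangle$ or $|{+}\rangle$, each with probability one half, and the two measurements are again the $Z$ and $X$ bases:

```python
import numpy as np

def shannon(p):
    """Shannon entropy in bits of a probability vector."""
    p = np.asarray(p, dtype=float)
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log2(p)))

# Bob's classical memory C records which state he prepared, each with prob 1/2:
# c = 0 -> |0>, c = 1 -> |+>.
states = [np.array([1.0, 0.0]), np.array([1.0, 1.0]) / np.sqrt(2)]

z_basis = np.eye(2)
x_basis = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)

def cond_entropy(basis):
    """H(outcome|C): the outcome entropy averaged over the value of C."""
    return np.mean([shannon([abs(v @ psi) ** 2 for v in basis])
                    for psi in states])

total = cond_entropy(z_basis) + cond_entropy(x_basis)
print(round(total, 9))  # -> 1.0, matching the bound log2(1/c) = 1
```

Each preparation makes one measurement certain and the other a coin flip, so the average sum of conditional entropies is exactly one bit, right at the bound.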
Now, to really tell you what’s in the paper. If we naively replace classical information with quantum information (in the form of another electron, say), then we’d get $H(R|B) + H(S|B) \geq \log_2 \frac{1}{c}$. Ok, really we’d get $H(R|B)_\rho + H(S|B)_\rho \geq \log_2 \frac{1}{c}$, where now the quantum state $\rho_{AB}$ describes both the system $A$ sent to Alice, and the quantum memory, $B$. This equation is wrong, as the above analysis shows. So what to do? How can we include the effects of the quantum memory?
It turns out that in the presence of quantum memory, the entropic uncertainty principle becomes $H(R|B) + H(S|B) \geq \log_2 \frac{1}{c} + H(A|B)$. Doesn’t look like much of an improvement, adding terms to the lower bound when we’re trying to move the inequality in the other direction. But, because quantum mechanics is weird, the conditional entropy $H(A|B)$ can be negative. (This entropy is defined in analogy with the classical version using the von Neumann entropy instead of the usual Shannon entropy.) If you’re wondering what it means for the entropy—the uncertainty—to be negative, you’re not alone. One way to interpret it is that it means you know less than nothing! Another way to look at it is that it means the state is entangled; the conditional entropy can be used as a measure of entanglement. So entanglement between the system going to Alice, the electron, and the quantum memory reduces the uncertainty burden quantitatively in this way.
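To see a negative conditional entropy concretely, here’s a short numpy sketch (my illustration) computing $H(A|B) = H(AB) - H(B)$ for the singlet state used in the game:

```python
import numpy as np

def vn_entropy(rho):
    """von Neumann entropy in bits: -Tr[rho log2 rho]."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log2(evals)))

# Singlet state of the electron A and the quantum memory B.
singlet = np.array([0, 1, -1, 0]) / np.sqrt(2)
rho_ab = np.outer(singlet, singlet)

# Partial trace over A gives Bob's reduced state rho_B = I/2.
rho_b = np.trace(rho_ab.reshape(2, 2, 2, 2), axis1=0, axis2=2)

# H(A|B) = H(AB) - H(B): zero (pure global state) minus one bit.
h_a_given_b = vn_entropy(rho_ab) - vn_entropy(rho_b)
print(round(h_a_given_b, 9))  # -> -1.0
```

For complementary qubit measurements $\log_2 \frac{1}{c} = 1$, so with $H(A|B) = -1$ the lower bound drops to zero, exactly what’s needed for Bob to win the game with certainty.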
It was already clear from our example that entanglement was somehow related to this issue, so it’s satisfying to see it show up in the quantitative form as well. Moreover, one of the applications promised above and mentioned in the paper is that we can turn the connection between winning the game and entanglement on its head and use the probability to win the game to determine the amount of entanglement between the system and the memory. In this sense, being able to win the game beyond a certain threshold is essentially a witness to the presence of entanglement.