Entropic Uncertainty Principle

It’s new paper time once again here at CS! Today’s topic is an entropic form of the uncertainty principle, a joint effort by Mario Berta, Matthias Christandl, Roger Colbeck, Renato Renner, and yours truly, detailed here. [The paper is new to publication, but not to the arXiv; a slightly older version (due to Nature Physics rules) is here.]

Putting a bit of spin on our result (i.e. do not quote this part), what we show is that the usual uncertainty principle in quantum mechanics is wrong and how to fix it!

Now that I’ve got your attention, I should tell you the actual result: With the assistance of quantum memory, the uncertainty principle as normally formulated does not apply in all circumstances. Instead it has to be modified to take into account the “quantumness” of the memory, and in particular the possibility of entanglement between the memory and the thing you’re trying to measure.

Because the “uncertainty principle” is really a whole host of related issues, we specify a concrete setting in which the original entropic form of the uncertainty principle is no longer valid. It’s a game involving two players, named, as usual, Alice and Bob. In the game Alice and Bob agree on two possible measurements for Alice to make on a physical system she’s going to get from Bob. To be more specific, let’s say the two measurements are angular momentum along two different spatial directions and the physical system is an electron. The angular momentum of an electron along any given axis can only take on two possible values, +\hbar/2 or -\hbar/2. Bob goes first in the game and prepares the electron in any way he likes, giving it to Alice when he’s done. Now it’s her turn, and she performs one of the two measurements and tells Bob which one. Finally, Bob attempts to guess her result. If he guesses correctly, he wins the game. If not, he loses. The point of thinking in terms of this game is not that it’s fun to play, but rather to very clearly fix our attention on the issue of “knowing” (being able to predict) the outcome of either of two possible measurements, only one of which is going to be performed. So we’re not immediately interested in questions like how much measurement disturbs the state or whether both measurements could be performed at once (though these questions are related).
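Here’s a quick numerical sanity check (my own, not from the paper) of how well Bob can do in this game with no memory at all, just a cleverly prepared electron. The brute-force search and the assumption that Alice picks the z- or x-axis uniformly at random are mine:

```python
# Sketch: Bob's best win probability WITHOUT any memory, assuming Alice
# measures spin along z or x with equal probability. Bob prepares a lone
# qubit and, told the axis, guesses the likelier outcome for that axis.
import numpy as np

def win_probability(theta, phi):
    """Average win probability for a pure qubit state on the Bloch sphere."""
    z = np.cos(theta)                    # Bloch component along z
    x = np.sin(theta) * np.cos(phi)      # Bloch component along x
    # Guessing the likelier outcome along an axis with Bloch component r
    # succeeds with probability (1 + |r|)/2.
    return ((1 + abs(z)) / 2 + (1 + abs(x)) / 2) / 2

thetas = np.linspace(0, np.pi, 400)
phis = np.linspace(0, 2 * np.pi, 400)
best = max(win_probability(t, p) for t in thetas for p in phis)
print(best)                    # ~0.8536
print(np.cos(np.pi / 8) ** 2)  # cos^2(pi/8) = (1 + 1/sqrt(2))/2, the same
```

The ceiling of \cos^2(\pi/8)\approx 0.85, reached by pointing the spin along the diagonal between the two axes, is exactly the uncertainty principle at work.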

In the world of classical physics, Bob can always win the game. That’s because physical properties are always in principle well-defined; “classical electrons” (think of a spinning ball) have definite values of angular momentum along every given axis, including the two chosen at the beginning of the game. So Bob simply prepares the electron so that he knows the two values and gives it to Alice.

But in the world of quantum physics, physical properties are not always well-defined. In particular, the angular momenta of an electron along two different axes are not simultaneously well-defined; if one is known then the other must be at least partially uncertain. Thus, it initially appears that the uncertainty principle implies that Bob cannot always win the game. However, there’s a catch: he’s allowed to use any means at his disposal, and since he prepared the state in the first place, he could have entangled it with a “quantum memory”, i.e. some other quantum system. This possibility is not considered in usual formulations of the uncertainty principle.

In fact, given the possibility of entanglement it’s easy to see that he can always win by preparing the electron and quantum memory, which for our purposes here is just another electron, in a state of total angular momentum zero. This is precisely the state used in Bohm’s version of the EPR paradox, and it has the peculiar property that, even though angular momenta along different axes are not simultaneously well-defined, if the two electrons are each measured along the same axis, opposite outcomes are nevertheless always obtained. Therefore, Bob can always win the game in quantum mechanics by waiting for Alice to tell him which measurement she made, making the same measurement on his own electron, and reporting the opposite of his outcome.
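If you’d like to see this in the linear algebra, here’s a small numpy check (again my own illustration, not code from the paper) that the singlet gives perfectly anti-correlated outcomes along any shared axis:

```python
# Sketch: for the two-electron singlet (total angular momentum zero),
# measuring both spins along the SAME axis always gives opposite outcomes,
# whatever the axis.
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Singlet state (|01> - |10>)/sqrt(2)
psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

rng = np.random.default_rng(0)
for _ in range(5):
    n = rng.normal(size=3)
    n /= np.linalg.norm(n)               # random axis on the sphere
    S = n[0] * X + n[1] * Y + n[2] * Z   # spin along n, in units of hbar/2
    _, evecs = np.linalg.eigh(S)
    P = [np.outer(evecs[:, k], evecs[:, k].conj()) for k in range(2)]
    # Probability that Alice and Bob obtain the SAME outcome
    p_same = sum(psi.conj() @ np.kron(P[k], P[k]) @ psi for k in range(2)).real
    print(f"axis {np.round(n, 2)}: P(same outcome) = {p_same:.2e}")
# Every line prints ~0: Bob measures his electron along Alice's axis and
# reports the opposite sign, winning every round.
```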

The above analysis is straightforward and fairly elementary in the subject of quantum information theory. So what do we add to this in the paper? Answer: a version of the uncertainty principle that works in the case Bob has a quantum memory, and a few applications to boot. Read on for more; warning, it’s technical.

In the setting of the game, it’s somewhat natural to quantify the uncertainty Bob has about either outcome using the entropy H. If we call the measurement outcomes M_1 and M_2 and treat them as random variables (since there’s a probability distribution for them once we specify the state Alice has), then an entropic version of the uncertainty principle due to Maassen and Uffink states that the sum of the entropies, H(M_1)+H(M_2), is always at least as big as some constant K which depends on the nature of the observables in question. For incompatible measurements, K is always positive, so in that case there’s always some uncertainty about one or the other of the observables. Thus, the entropic version is morally equivalent to the usual, variance-based version. Somewhat more technically, we really have the statement H(M_1)_\psi+H(M_2)_\psi\geq K, where the subscript \psi means we take the quantum state \psi which describes the electron, compute the probability distribution p_{M_1} (or p_{M_2}, respectively), and then compute the entropy using the formula H(M_1)=-\sum_{m_1} p_{m_1}\log p_{m_1}.
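For the record (this is Maassen and Uffink’s result, not something new here), the constant is K=\log_2(1/c^2), where c=\max_{j,k}|\langle\phi_j|\psi_k\rangle| is the largest overlap between the eigenbases \{|\phi_j\rangle\} and \{|\psi_k\rangle\} of the two observables. For angular momentum along two orthogonal axes of a spin-1/2 particle, c=1/\sqrt{2}, so K=1 bit.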

This result can be extended fairly easily to the case that Bob possesses a classical memory, for instance for the purpose of remembering how he prepared the quantum state! If his classical information is denoted by C, then what this means is that the classical information (at best) specifies the quantum state of the electron. (It could also be useless, like tomorrow’s weather forecast.) Then one imagines applying the above uncertainty principle in each case and taking the average over the information C. All this leads one to use the conditional entropy, the entropy of the outcome given the classical information, and obtain H(M_1|C)+H(M_2|C)\geq K. The magic of Maassen and Uffink’s result is that K doesn’t depend on the state \psi, so this doesn’t interfere with our averaging procedure.
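Spelled out (in my notation): if the memory value C=c singles out the state \psi_c with probability p_c, the averaging reads H(M_1|C)+H(M_2|C)=\sum_c p_c\left[H(M_1)_{\psi_c}+H(M_2)_{\psi_c}\right]\geq\sum_c p_c\,K=K, with the state-independence of K doing the work in the last step.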

Now, to really tell you what’s in the paper. If we naively replace classical information C with quantum information Q (in the form of another electron, say), then we’d get H(M_1|Q)+H(M_2|Q)\geq K. Ok, really we’d get H(M_1|Q)_{\psi^{AQ}}+H(M_2|Q)_{\psi^{AQ}}\geq K, where now the quantum state \psi describes both the system sent to Alice, A and the quantum memory, Q. This equation is wrong, as the above analysis shows. So what to do? How can we include the effects of the quantum memory?

It turns out that in the presence of quantum memory, the entropic uncertainty principle becomes H(M_1|Q)_{\psi^{AQ}}+H(M_2|Q)_{\psi^{AQ}}\geq K+H(A|Q)_{\psi^{AQ}}. Doesn’t look like much of an improvement, adding terms to the lower bound when we’re trying to move the inequality in the other direction. But, because quantum mechanics is weird, the conditional entropy H(A|Q) can be negative. (This entropy is defined in analogy with the classical version using the von Neumann entropy instead of the usual Shannon entropy.) If you’re wondering what it means for the entropy—the uncertainty—to be negative, you’re not alone. One way to interpret it is that it means you know less than nothing! Another way to look at it is that it means the state \psi^{AQ} is entangled; the conditional entropy can be used as a measure of entanglement. So entanglement between the system going to Alice (the electron) and the quantum memory quantitatively reduces the uncertainty in just this way.
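To see the numbers work out in our game (my own check, not from the paper): for the singlet H(A|Q)=S(AQ)-S(Q)=0-1=-1 bit, while K=1 bit for two orthogonal axes, so the right-hand side is zero and there’s no conflict with Bob always winning. In numpy:

```python
# Sketch: the conditional von Neumann entropy H(A|Q) = S(AQ) - S(Q) is
# -1 bit for a maximally entangled pair, so the new bound reads
# K + H(A|Q) = 1 - 1 = 0: no residual uncertainty is required.
import numpy as np

def vn_entropy(rho):
    """von Neumann entropy in bits."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log2(evals)))

# Singlet state of A (Alice's electron) and Q (the quantum memory)
psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)
rho_AQ = np.outer(psi, psi.conj())

# Partial trace over A leaves Q's reduced state
rho_Q = rho_AQ.reshape(2, 2, 2, 2).trace(axis1=0, axis2=2)

print(vn_entropy(rho_AQ))                       # 0.0 (pure state)
print(vn_entropy(rho_Q))                        # 1.0 (maximally mixed)
print(vn_entropy(rho_AQ) - vn_entropy(rho_Q))   # -1.0 = H(A|Q)
```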

It was already clear from our example that entanglement was somehow related to this issue, so it’s satisfying to see it show up in the quantitative form as well. Moreover, one of the applications promised above and mentioned in the paper is that we can turn the connection between winning the game and entanglement on its head and use the probability to win the game to determine the amount of entanglement between the system and the memory. In this sense, being able to win the game beyond a certain threshold is essentially a witness to the presence of entanglement.

Windicated!

One of the first posts here at Complementary Slackness was on the apparent phenomenon of a machine that can go downwind faster than the wind (DWFTTW). Now it’s been convincingly demonstrated by Rick Cavallaro and co at fasterthanthewind.org and verified by the North American Land Sailing Association that DWFTTW is possible; their cart traveled directly downwind at 2.8 times the wind speed.

Does anyone know of scientific articles on this phenomenon?

[Edit: after further thought, my earlier arguments on how it might work don't seem to have been "windicated". So I updated the post DWFTTW accordingly.]

(via)

Martin Wolf and Monopoly

I enjoy Martin Wolf’s columns. Thursday’s would seem to be calling for a land value tax, which, strangely enough, I was just reading about on Wikipedia. Did you know it inspired the Monopoly board game? And how come I’ve never heard of Henry George before (note: answer is not “benefits of rural education”)?

Quantum Computation: A Phase of Matter?

When I was in elementary school we learned that there are three phases of matter: solid, liquid, and gas. That’s it, no more. Which is a shame, because there are lots and lots of phases and their study is quite beautiful. But we did do some fun experiments with supersaturated liquids! (Though no one thought to call it a false vacuum…)

“Quantum computation” was definitely not on the list of phases. And you might be wondering what this would even mean. If you are, I’ve got just the paper for you: “Quantum computational renormalization in the Haldane phase”, a collaborative effort by Stephen D. Bartlett, Gavin K. Brennen, Akimasa Miyake, and yours truly.

Basically what we show is that something called “measurement-based quantum computation” (MQC) is robust in a certain part of the Haldane phase of a 1D chain of spin-1 particles, using some renormalization tricks to do this. Let me first give some background; experts might want to skip ahead. Quantum computation refers to using the rules of quantum mechanics to perform normal computations, like calculating your income tax or figuring out the fastest route from LA to Miami that doesn’t involve driving through, say, Arizona. (Assuming you don’t have an Arizona driver’s license.) Using the rules of quantum mechanics you can calculate the answers to some questions faster than if you use a normal computer, which operates by the rules of classical mechanics. So we’re keen to build one.

Due to the peculiarities of quantum mechanics, it’s possible to perform the computation by preparing a many-particle system in a certain way, and then measuring the particles one by one. If done properly, the measurement outcomes give you the output of the computation. This shows that all the quantum weirdness is due to the state of the many-particle system you start with, which is a very interesting conceptual point, since it doesn’t work this way for “classical” computers. But it’s potentially also an important practical point, since it might be easier to prepare this state and measure it appropriately than it is to do other styles of quantum computation. For instance, it might be possible to find a system which ends up in a state we could use for MQC when cooled below a threshold temperature, sort of like how a ferromagnet cooled below its Curie temperature ends up in a magnetic state (though this particular example would be useless for MQC). That would make life easier.

However, in general we’d need to control the parameters of the system very precisely in order to get just the right state out at the end. The hope of finding a phase, then, is to find a system for which it isn’t necessary to control these parameters so precisely: a system whose computational ability is a robust property of the phase. That’s in fact what we find to be the case for the Haldane phase of a 1-dimensional chain of spin-1 particles.
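To make the measurement-based idea concrete, here’s the basic one-step primitive on two qubits in numpy (standard cluster-state material, not the spin-1 scheme of the paper): entangle the input with a fresh |+\rangle via a controlled-Z, measure the first qubit in a rotated basis, and the second qubit ends up carrying the input with a rotation applied.

```python
# Sketch of the elementary MQC step: qubit 1 carries |psi>, qubit 2 starts
# in |+>, a CZ entangles them, and measuring qubit 1 in the basis
# {(|0> ± e^{i*theta}|1>)/sqrt(2)} leaves qubit 2 in H Rz(theta)|psi>
# (up to a known, outcome-dependent Pauli correction).
import numpy as np

rng = np.random.default_rng(1)
a, b = rng.normal(size=2) + 1j * rng.normal(size=2)
psi = np.array([a, b]); psi /= np.linalg.norm(psi)    # random input qubit

plus = np.array([1, 1]) / np.sqrt(2)
CZ = np.diag([1, 1, 1, -1]).astype(complex)
state = CZ @ np.kron(psi, plus)                       # entangle with CZ

theta = 0.7
bra = np.array([1, np.exp(1j * theta)]).conj() / np.sqrt(2)  # <+theta|
out = np.kron(bra, np.eye(2)) @ state                 # outcome "+" branch
out /= np.linalg.norm(out)

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
Rz = np.diag([1, np.exp(-1j * theta)])
expected = H @ Rz @ psi
print(abs(np.vdot(expected, out)))                    # ~1.0, up to global phase
```

Chaining such steps, with the outcome-dependent Pauli corrections folded into later measurement choices, is all the single-qubit part of MQC needs.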

Now, some more details for experts. Within the Haldane phase is the so-called AKLT (Affleck-Kennedy-Lieb-Tasaki) point, a Hamiltonian whose ground state can be exactly described by means of a matrix-product state. It’s a gapped system in the thermodynamic limit (which is one of the properties of the Haldane phase), so there’s hope that the ground state could be created just by sufficiently lowering the temperature of the chain. There’s also a nice MQC model where an AKLT chain can encode one logical qubit, such that measurements on the chain realize single-qubit operations. That’s not enough for a full quantum computation, so one can take many chains and couple them appropriately (which is cheating a little since the coupling is not a single-site measurement, but never mind that for now).
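For concreteness (standard results, not quoted from the paper): the AKLT Hamiltonian is, up to a shift and scale, a sum of projectors onto total spin 2 of neighbouring pairs, H_{\mathrm{AKLT}}=\sum_i\left[\vec S_i\cdot\vec S_{i+1}+\tfrac{1}{3}\left(\vec S_i\cdot\vec S_{i+1}\right)^2\right], and in one common convention its ground-state matrix-product tensors are A^{+1}=\sqrt{2/3}\,\sigma^{+}, A^{0}=-\sqrt{1/3}\,\sigma^{z}, A^{-1}=-\sqrt{2/3}\,\sigma^{-}, with bond dimension 2.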

But what happens away from the AKLT point? We investigated what happens to the quality of single-qubit and coupling operations when the actual state you’ve got comes from a different, deformed Hamiltonian and found that the quality (as measured by the fidelity) does in fact decrease. In other words, the computation becomes noisy. However, the deformed Hamiltonians also reside in the Haldane phase, and there is a real-space renormalization technique which takes such Hamiltonians to the AKLT point. This works because these ground states all have a common set of long-ranged degrees of freedom—renormalized spins that come from mapping a block of, say, L physical spins to a single new spin—and at this level the Hamiltonian looks a lot more like the AKLT Hamiltonian. So to perform a less-noisy quantum computation, we’d like to make our measurements on the renormalized spins, not the physical spins.
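Here’s a generic numpy sketch of the blocking idea (the coarse-graining step only; this is not the specific renormalization scheme of the paper, and the SVD truncation rule is just the simplest choice): contract L=2 neighbouring MPS tensors into one and compress the blocked physical index back down with an isometry.

```python
# Generic real-space coarse-graining of an MPS: block two tensors, then
# keep only the d_new most significant directions of the blocked physical
# index. Illustration of the blocking idea, not the paper's scheme.
import numpy as np

def coarse_grain(A, d_new):
    """A has shape (d, D, D): physical index first, then left/right bonds."""
    d, D, _ = A.shape
    # B[s1, s2, i, j] = sum_k A[s1, i, k] A[s2, k, j]
    B = np.einsum('aik,bkj->abij', A, A)
    M = B.reshape(d * d, D * D)
    U, svals, _ = np.linalg.svd(M, full_matrices=False)
    V = U[:, :d_new]                     # isometry: blocked index -> d_new
    A_new = np.einsum('pt,pij->tij', V.conj(), B.reshape(d * d, D, D))
    return A_new, svals

# AKLT tensors in the convention quoted above
sp = np.array([[0, 1], [0, 0]], dtype=complex)   # sigma^+
sz = np.array([[1, 0], [0, -1]], dtype=complex)
A = np.stack([np.sqrt(2/3) * sp, -np.sqrt(1/3) * sz, -np.sqrt(2/3) * sp.T])

A2, svals = coarse_grain(A, d_new=3)
print(A2.shape)              # (3, 2, 2): a renormalized spin-1 tensor
print(np.round(svals, 3))    # how much weight the truncation throws away
```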

This violates the spirit of MQC, of course, since we’d now be measuring L physical spins simultaneously, which is considerably more difficult than single-site physical spin measurements. However, we were lucky to find a way around this! It is possible to implement a sequence of single-site measurements and use postselection to effectively measure the renormalized spin, a scheme we call buffering. Importantly, this procedure doesn’t depend on how the Hamiltonian is deformed; it’s the same all over the phase, except that you’ll want to choose a larger L the farther your system is from the AKLT point. Actually, we came up with buffering first, from a different analysis, and only later realized that the connection to renormalization explains why it works. Those details I’ll leave to the paper, but the upshot is that it’s not in principle necessary to have high-precision control of the Hamiltonian to create a ground state which is useful for quantum computation—the computational ability under a fixed set of measurements is a robust property of the phase.

For a little more complementarity

It’s new paper season here at Complementary Slackness! Today’s offering is all about two conditions on when a maximally entangled state can (approximately) be recovered from a given bipartite state, using only local operations at one end. Why is this interesting, you ask? Because this operation is essentially the final step in an entanglement distillation or quantum communication procedure, so knowing when it can be done is the first step in designing such protocols. It’s best to begin with the end in mind, as they say.

The two conditions are based on the idea of breaking quantum information down into two classical pieces, such that these pieces can be successfully reassembled into the quantum whole. Recall the original approach to quantum error correction in which quantum errors are digitized, in particular into amplitude and phase errors. Each of these errors is effectively classical, and correcting both restores the original quantum information. It turns out, though, that this is a suboptimal way to handle quantum errors. So here we shift the focus away from making the quantum errors classical to dealing with classical information inherent in the bipartite quantum state itself.
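(The digitization in symbols, a standard fact rather than anything from this paper: any single-qubit error operator decomposes as E=\alpha\mathbb{1}+\beta X+\gamma Z+\delta XZ, so correcting the bit-flip (X) and phase-flip (Z) parts, and hence their product, restores the full quantum state.)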

The states we’re after, maximally entangled states, have the property that, given one system, we can predict the outcome of either of two complementary measurements made on the other. This is clear just by writing down the maximally entangled state, for two qubits A and B say: |\Phi\rangle=\tfrac{1}{\sqrt{2}}(|0,0\rangle+|1,1\rangle)=\tfrac{1}{\sqrt{2}}(|+,+\rangle+|-,-\rangle). A and B are perfectly correlated in both the \{|0\rangle,|1\rangle\} and \{|+\rangle=\tfrac{1}{\sqrt{2}}(|0\rangle+|1\rangle),|-\rangle=\tfrac{1}{\sqrt{2}}(|0\rangle-|1\rangle)\} bases, so measurement in either of these bases always produces correlated outcomes. The bases are complementary because given a state prepared in one basis, measuring it in the other produces a completely random outcome. Thus, the entangled state has the advertised property: to predict the outcome of measuring A in a given basis, just perform the same measurement on B, and this will work for either complementary basis. Furthermore, this property actually defines the state |\Phi\rangle.
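A two-line numerical check of this property (mine, not from the paper):

```python
# Verify that |Phi> = (|00>+|11>)/sqrt(2) gives perfectly correlated
# outcomes in BOTH the {|0>,|1>} and {|+>,|->} bases.
import numpy as np

phi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)

z_basis = [np.array([1, 0]), np.array([0, 1])]
x_basis = [np.array([1, 1]) / np.sqrt(2), np.array([1, -1]) / np.sqrt(2)]

for name, basis in [("Z", z_basis), ("X", x_basis)]:
    p_same = sum(abs(np.kron(v, v).conj() @ phi) ** 2 for v in basis)
    print(f"{name} basis: P(correlated outcomes) = {p_same:.3f}")  # 1.000
```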

The first condition just generalizes this, in several ways. First, we don’t demand that the prediction be perfect, and in exchange we’re willing to accept a good approximation to the state |\Phi\rangle. This is important for designing quantum communication protocols, as the amount of approximate entanglement a channel can produce is generally higher than the amount of perfect entanglement it can produce. Second, we don’t require the prediction measurements on B to have the same form as the measurement on A; any measurements which can predict the outcomes for either basis will do. This means the state spaces of A and B can be quite different. Lastly, and most importantly, we only require that one of the measurements, say the \{|0\rangle,|1\rangle\} basis, be predictable from B; the other measurement \{|+\rangle,|-\rangle\} need only be predictable using B plus a sort of copy of A in the \{|0\rangle,|1\rangle\} basis. This might sound like cheating, but it stems from the fact that the entanglement recovery operation just performs the two prediction measurements in sequence. Since the first one is good at predicting the \{|0\rangle,|1\rangle\} measurement, this information is available to predict the outcome of measuring in the other basis. And it’s useful if there are correlations between the two.

This condition was implicitly used here to construct protocols for entanglement distillation which operate at the optimal rate (though you wouldn’t have guessed this from the title), so this approach successfully avoids the pitfalls encountered by error digitization. And it can be used to perform quantum communication over noisy channels, at least if there’s an extra classical channel available: Just use the channel to distribute bipartite states, distill entanglement, and then teleport the quantum information you actually want to send using the classical channel and the entanglement. The nice thing about this approach is that the necessary recovery operation (or decoder) is actually constructed, rather than just shown to exist, which might be an advantage in exploring more efficient schemes. Ok, you might be thinking that this isn’t really constructive, since you have to supply (construct) the two prediction measurements. But these are involved in a classical task and it is presumably easier to divide the full quantum problem into two classical pieces.

The second condition is sort of the inverse of the first — the two outcomes of the two measurements on A should be completely unpredictable to someone with access to the purification of A and B, i.e. a system R such that ABR is a pure state. Actually, a caveat similar to the last point above applies here, so that one of the measurements should be unpredictable even if given R and knowledge of the other measurement outcome. This is a sort of decomposition of the often-used decoupling approach into two classical pieces. I plan to say a bit more about that in a future post, but the point is that entanglement is recoverable from A and B if A is uncorrelated with R, i.e. the joint quantum state just factors into independent states for each system. Here we’re saying essentially the same thing using classical information instead. Ultimately this condition comes from the first by using the duality explored in a previous paper, discussed here. This approach isn’t constructive, but it’s aesthetically pleasing to see that it holds as well.

Speaking of unintentionally humorous…

From Wikipedia:

The Times once published an unintentionally humorous description of a Peter Ustinov documentary, noting that “highlights of his global tour include encounters with Nelson Mandela, an 800-year-old demigod and a dildo collector”. This is ambiguous as it stands, as Mandela could then be mistaken for a demigod.

You’d think being mistaken for the latter is the worse ambiguity.

Actually, that’s what I read the first time around. But the quote really reads

The Times once published an unintentionally humorous description of a Peter Ustinov documentary, noting that “highlights of his global tour include encounters with Nelson Mandela, an 800-year-old demigod and a dildo collector”. This is ambiguous as it stands, and would still be ambiguous if a serial comma were added, as Mandela could then be mistaken for a demigod.

A fistful of complementarity

New paper time! Today’s entry, arXiv:1003.0703, is a fistful of complementarity, even though it’s billed as showing that two fundamental quantum information processing tasks, privacy amplification and data compression with side information, are equivalent in the sense that being able to do one means that you can do the other, too.

First let me remind you what these two tasks are all about. In privacy amplification we start with a classical random variable X which is correlated in some way with a quantum system E, and the goal is to transform X into a uniform random variable U (i.e. extract randomness) while also decorrelating U from E. Thinking of E as held by an eavesdropper, this process distills from X a secret random string, and if X is shared by two parties Alice and Bob, then they end up with a secret key. Following the maxim that “all you need is love, er, random hashing”, U can be generated by taking the output of a random hash function applied to X. In data compression with side information we start in the same place, though now we call the classical variable Z and the quantum system B (held by Bob). The goal here is to generate a compressed version of Z, call it C, such that Bob can recover Z from B and C together. The quantum system B functions as side information about Z, and reduces the minimum size of C. As you might suspect, it turns out that C can be generated from Z again by random hashing.
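To make “random hashing” concrete, here’s one standard two-universal family, random linear hashing over GF(2); the sketch is my own illustration, not the construction from the paper. Both tasks emit f(X)=MX \bmod 2 for a uniformly random binary matrix M; only the output length differs.

```python
# Sketch: random linear hashing over GF(2), a standard two-universal
# family. Privacy amplification keeps a short output (the key U);
# compression keeps a longer one (the tag C Bob decodes with).
import numpy as np

rng = np.random.default_rng(42)

def random_linear_hash(n_in, n_out, rng):
    """Return f(x) = M x mod 2 for a uniformly random binary matrix M."""
    M = rng.integers(0, 2, size=(n_out, n_in))
    return lambda x: (M @ x) % 2

n = 16
x = rng.integers(0, 2, size=n)             # the raw classical string X (or Z)

extract = random_linear_hash(n, 4, rng)    # privacy amplification: key U
compress = random_linear_hash(n, 10, rng)  # compression: tag C

print("key U:", extract(x))
print("tag C:", compress(x))
# Two-universality: for x != y, Pr_M[Mx = My] = 2^(-n_out), since
# M(x XOR y) is uniformly distributed over the possible outputs.
```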

These tasks show up all over the place in classical and quantum information theory as building blocks for other protocols. And in the quantum setting they are essentially the same thing! For this to work, we have to somehow get the two tasks into the same scenario (i.e. derive their settings from a single quantum state), which we can do as follows. First, we let X be generated from quantum system A, held by Alice, by measuring in some basis. Then Z is generated by measuring in a complementary basis, complementary in the sense that the squared overlap |\langle x|z\rangle|^2 between elements of the two bases is 1/d, for d the dimension of Alice’s system. Second, we let the global quantum state of ABE be a pure state, so that B and E are complementary systems, at least from Alice’s point of view: each one purifies the other together with A. Now we get to the punchline. If Alice can perform privacy amplification of X against E, then by modifying the hash function appropriately, she can instead perform data compression of Z with B as side information, and vice versa! So if you go to all the trouble of constructing a protocol to do one of these tasks, you can make a few simple modifications and get a protocol for the other task, involving the complementary measurement and the complementary system.
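(For the record, such complementary bases always exist: take the computational basis and its Fourier transform, |\tilde{z}\rangle=\tfrac{1}{\sqrt{d}}\sum_{x=0}^{d-1}e^{2\pi i xz/d}|x\rangle, for which |\langle x|\tilde{z}\rangle|^2=1/d; for a qubit this is just the \{|0\rangle,|1\rangle\} and \{|+\rangle,|-\rangle\} pair. The example is standard mutually-unbiased-bases material, not taken from the paper.)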

There are, of course, a few caveats to worry about, all of which are covered in the paper. But I should point out one nice consequence of this result: a much simplified proof of the Holevo-Schumacher-Westmoreland theorem on the rate at which classical information can be transmitted over quantum channels. Without going into all the details, that task is the sort of “dynamic” analog to the “static” data compression with side information task — Alice now gets to choose the classical inputs to the channel, but the overall input-output state is described by the pair (Z,B). Therefore, instead of trying to construct this protocol directly, we could instead just construct a privacy amplification protocol for the appropriate complementary measurement and complementary system. This is considerably easier, since all we need to find is the right type of hash function. Most of the difficulty in the HSW theorem is coming up with a decent decoding operation Bob can use to get Z from B and C. But by the magic of something called Uhlmann’s theorem, given that the privacy amplification protocol works, such a decoder must exist. This spares us the trouble of actually having to get our hands dirty and construct it.
