One reason to expect quantum computers to be more powerful than classical computers is that the probability of a given output is the absolute square of the sum of the amplitudes associated with all the “computational paths” (sequences of computational states, which are themselves strings of ones and zeros), and these amplitudes are complex numbers. This is essentially Feynman’s path-integral description of quantum mechanics applied to computers, a point touched on by Scott Aaronson in lecture 10 of his Quantum Computing Since Democritus course. The number of paths is huge, and to get the probability of ending in any particular bit string we have to add up the complex amplitudes of all paths ending in the target state and then take the square of the absolute value. The trouble is that knowing just a few of the path amplitudes for a fixed output tells us essentially nothing: the others can add to what we have, cancel it out, or anything in between.

In contrast, if the probability of a given output were just the sum of the probabilities of all the paths that end with that output, as would be the case for a probabilistic classical computer, then knowing a few path probabilities gives us at least a lower bound on the output probability. Since useful probabilistic algorithms have to land on the correct output with probability greater than one half, we can sample the computational paths randomly and we’ll come up with the likely output with high probability. It’s a little perverse to talk about simulating a probabilistic algorithm since the way to sample computational paths is to run the algorithm, but the point is this won’t work for a quantum computer.
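For concreteness, here is a minimal sketch of the classical case (my own toy example — the transition matrix and step count are made up for illustration): because path probabilities are nonnegative, summing over paths and sampling paths (i.e. just running the machine) estimate the same output probability.

```python
import itertools
import random

# Toy "probabilistic computer": a 1-bit machine whose state evolves through
# T steps, each step applying the same 2x2 stochastic transition matrix.
STEP = [[0.9, 0.1],   # P(next=0 | cur=0), P(next=1 | cur=0)
        [0.2, 0.8]]   # P(next=0 | cur=1), P(next=1 | cur=1)
T = 4

def exact_output_prob(start, target):
    """Sum, over all 2^T computational paths, the product of step probabilities."""
    total = 0.0
    for path in itertools.product([0, 1], repeat=T):
        p, cur = 1.0, start
        for nxt in path:
            p *= STEP[cur][nxt]
            cur = nxt
        if cur == target:
            total += p
    return total

def sampled_output_prob(start, target, n_samples=100_000):
    """Estimate the same probability by running the machine, i.e. sampling paths."""
    hits = 0
    for _ in range(n_samples):
        cur = start
        for _ in range(T):
            cur = 0 if random.random() < STEP[cur][0] else 1
        hits += (cur == target)
    return hits / n_samples

random.seed(0)
exact = exact_output_prob(0, 0)
est = sampled_output_prob(0, 0)
print(exact, est)  # the estimate converges to the exact path sum
```

Since every sampled path contributes nonnegatively, even a handful of samples gives a valid lower bound on the output probability — exactly what fails when the path weights are complex amplitudes.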

The connection with the path integral made me realize something I hadn’t appreciated before, namely that, for a single particle at least, the absolute value of the amplitude is the same for every path. That is, the important part of the amplitude is just its complex phase, and so nature makes it as hard as it can to predict where the particle is going to go. One might have thought that the amplitude for going to nearby points would have a larger magnitude than for distant points, a sort of diffusion, but no. Ok, I do remember this from Feynman’s book on QED, his picture of particles carrying little clocks as they travel on the various paths, but in the quantum computation context it doesn’t hold. That’s because in an ordinary quantum computation a lot of the qubits are “idle” at any given time, and aren’t subjected to any quantum gate (aren’t moving in the particle language).

This got me thinking about the standard universal gate set and whether or not there exist universal sets with the property that the amplitudes of outgoing computational basis states are also “pure phase”. In terms of the matrix entries of the gate (in the computational basis), this just means that all entries have the same magnitude. The usual gate set consists of the Hadamard (H), phase (T), and CNOT gates:

$$H = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}, \quad T = \begin{pmatrix} 1 & 0 \\ 0 & e^{i\pi/4} \end{pmatrix}, \quad \mathrm{CNOT} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{pmatrix}.$$

Of course there are! Just multiply/conjugate T and CNOT respectively by H (H on qubit 1 for CNOT). The result is

$$HT = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 & e^{i\pi/4} \\ 1 & -e^{i\pi/4} \end{pmatrix} \quad \text{and} \quad (H\otimes I)\,\mathrm{CNOT}\,(H\otimes I) = \frac{1}{2}\begin{pmatrix} 1 & 1 & 1 & -1 \\ 1 & 1 & -1 & 1 \\ 1 & -1 & 1 & 1 \\ -1 & 1 & 1 & 1 \end{pmatrix}.$$

So quantum algorithms are, in the path integral sense, purposefully trying to be difficult to simulate classically. At each step, the possible new computational basis states all have the same magnitude, frustrating efforts to calculate the ultimate output probabilities. In the same vein, it’s nice to see the quantum Fourier transform show up in almost all of the known quantum algorithms since it has the same constant magnitude property, but now on n qubits instead of just one or two.
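As a quick sanity check (a NumPy sketch of my own, not part of the original argument), one can verify numerically that the H-modified gates are “pure phase” — every matrix entry has the same magnitude — while T and CNOT are not, since they contain zero entries:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
T = np.diag([1, np.exp(1j * np.pi / 4)])
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

HT = H @ T                      # T multiplied by H
HI = np.kron(H, np.eye(2))      # H acting on qubit 1 (the control)
HCNOTH = HI @ CNOT @ HI         # CNOT conjugated by H on qubit 1

def pure_phase(U):
    """True if every entry of U has the same absolute value."""
    mags = np.abs(U)
    return np.allclose(mags, mags[0, 0])

print(pure_phase(H), pure_phase(HT), pure_phase(HCNOTH))  # True True True
print(pure_phase(T), pure_phase(CNOT))                    # False False
```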

This is a nice observation. If each gate is close to the identity — or, more generally, to a permutation of computational basis states — then it’s not “trying” very hard to be difficult. Inasmuch as a given gate transforms computational basis states to big nasty superpositions, it makes life difficult for the simulator. The right measure of nastiness is probably the computational-basis entropy of U|n> for a typical computational basis state |n>.

Two further observations. First, I note that there’s not just one QFT — there are a lot of n-qubit Hadamard matrices, including at least d+1 different ones (where d = 2^n is the dimension) that transform the computational basis into a basis that’s unbiased with respect to it. Second (and this is really just an extension), if you pick a random unitary, then in high dimensions it will be darn close to maximally difficult to simulate: the computational-basis entropy of U|n> will be near-maximal.
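One can see the near-maximal entropy numerically (a sketch of mine, using the standard QR construction of Haar-random unitaries; the qubit count is just illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n_qubits = 8
d = 2 ** n_qubits

# Haar-random unitary via QR decomposition of a complex Gaussian matrix,
# with the usual phase fix on the diagonal of R so the result is Haar.
Z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
Q, R = np.linalg.qr(Z)
U = Q * (np.diag(R) / np.abs(np.diag(R)))

# Computational-basis entropy of U|n> for the basis state |0>:
probs = np.abs(U[:, 0]) ** 2
entropy = -np.sum(probs * np.log2(probs))

print(entropy, np.log2(d))
# The entropy lands close to the maximum log2(d) = n_qubits bits (for a
# Haar-random column it falls short by roughly 0.61 bits on average).
```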

On the other hand, a measure-1 set of unitaries are ridiculously hard to implement. But I have a suspicion that shortish random circuits are just as hard to simulate. In fact, I think the Hayden-Preskill work on scrambling black holes is based on that…

Right you are about not just one QFT; I should have phrased this differently since it’s only been a few weeks since I was trying to point out to the students in my quantum information lecture that Simon’s algorithm uses the Fourier transform on (Z_2)^n while Shor’s uses the transform on Z_(2^n)! [I hope I got that right.]

I agree that the computational-basis entropy tells us something useful, and I think it’s directly related to what Guifre Vidal and others have shown about simulating quantum algorithms which don’t produce much entanglement. Intuitively, if the possible computational paths don’t spread out too much, then it should be possible to efficiently simulate the algorithm. More concretely, if at any time the paths only traverse polynomially-many computational basis states (and so the state has a small entropy in this basis), then there are only polynomially-many paths, and we could just work out all the amplitudes and compute the sum.
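The sparse-paths picture can be sketched in a few lines (illustrative code of my own — the dictionary representation and the choice of gates are my assumptions, not Vidal’s actual entanglement-based method). The cost per gate scales with the number of basis states the paths currently occupy:

```python
import numpy as np

# State as {basis string: amplitude}; the simulation cost tracks the
# number of computational basis states the paths currently traverse.
def apply_h(state, q):
    """Hadamard on qubit q: each occupied basis state branches into two paths."""
    new = {}
    for bits, amp in state.items():
        b = list(bits)
        for v in (0, 1):
            b[q] = str(v)
            key = "".join(b)
            sign = -1 if bits[q] == "1" and v == 1 else 1  # <v|H|u> = (-1)^(uv)/sqrt(2)
            new[key] = new.get(key, 0) + sign * amp / np.sqrt(2)
    # Paths that cancel exactly drop out of the support.
    return {k: a for k, a in new.items() if abs(a) > 1e-12}

def apply_cnot(state, ctrl, tgt):
    """CNOT permutes basis states, so the support size never grows."""
    new = {}
    for bits, amp in state.items():
        b = list(bits)
        if b[ctrl] == "1":
            b[tgt] = "1" if b[tgt] == "0" else "0"
        new["".join(b)] = new.get("".join(b), 0) + amp
    return new

# Prepare a Bell pair on 2 qubits: the support stays tiny throughout.
state = {"00": 1.0}
state = apply_h(state, 0)        # 2 occupied basis states
state = apply_cnot(state, 0, 1)  # still 2: (|00> + |11>)/sqrt(2)
print(sorted(state))             # ['00', '11']
```

As long as the support stays polynomial in size, every step here is efficient; a long run of Hadamards on fresh qubits, by contrast, doubles the support each time and the dictionary blows up exponentially.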

Actually there is a way to make this even more path-integral-like (the equal-amplitude property is important, and I too overlooked it for many years!). One can actually define a discrete action for quantum algorithms. To do this you use the universal gate set of Hadamards plus controlled-controlled-Z gates. With this gate set you can write down an action for a computation, and the amplitude for the computation is an equally weighted sum, over histories through the circuit, of the exponential of this action. Check out the paper on Wim van Dam’s page “Analyzing Algebraic Quantum Circuits Using Exponential Sums”: http://www.cs.ucsb.edu/~vandam/publications.html
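A toy version of this idea (my own sketch, not the construction from the paper — the function name and circuit encoding are made up): with only H and CCZ gates every phase is ±1, so each amplitude becomes an equally weighted sum of (−1)^S over one binary path variable per Hadamard, where the action S collects the H-entry phases and the CCZ triple products.

```python
import itertools

def amplitude(n, circuit, x_in, y_out):
    """<y|C|x> as an equally weighted sum of (-1)^S over Hadamard path variables.

    circuit is a list of ("H", q) and ("CCZ", a, b, c) gates on n qubits.
    """
    h_count = sum(1 for g in circuit if g[0] == "H")
    total = 0
    for assign in itertools.product([0, 1], repeat=h_count):
        cur = list(x_in)   # classical value of each wire along this history
        s = 0              # discrete action S for this history
        k = 0
        for g in circuit:
            if g[0] == "H":
                q = g[1]
                s += cur[q] * assign[k]   # H matrix entry: (-1)^(old*new)/sqrt(2)
                cur[q] = assign[k]        # each H introduces a fresh path variable
                k += 1
            else:                          # ("CCZ", a, b, c): phase (-1)^(abc)
                _, a, b, c = g
                s += cur[a] * cur[b] * cur[c]
        if tuple(cur) == tuple(y_out):     # only histories ending at y contribute
            total += (-1) ** s
    return total / 2 ** (h_count / 2)

# Single Hadamard: <1|H|0> = 1/sqrt(2); two Hadamards cancel: <1|HH|0> = 0.
print(amplitude(1, [("H", 0)], (0,), (1,)))
print(amplitude(1, [("H", 0), ("H", 0)], (0,), (1,)))
```

Note how the cancellation in HH = I comes entirely from the (−1)^S signs — every history carries the same magnitude, just as in the pure-phase discussion above.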

The nice thing about this is that it might be possible (or so we hope!) to define suitable classical limits by taking this picture from qubits to qudits for large d.

Interesting paper! Am I right that you avoid the problem of computational paths with zero amplitude through your choice of diagonal gates plus Hadamard, only defining intermediate computational basis states (to be summed over later) at points just following/preceding a Hadamard gate? (I guess this means different qubits have intermediate states at different “times” in the computation.) Very clever! (And no mention of it in the paper?) It seems like the same approach would work here, but the action may not have any of the algebraic properties you’re looking for.