There have been a couple of very nice results recently on the (non)additivity of several important quantities in quantum information theory. I thought I’d try to explain a little of what’s going on, both for myself and for anyone reading.

At issue are various capacities of a quantum channel (any means of transmitting quantum states) $\mathcal{N}$, which quantify the usefulness of the channel for performing some sort of task. For instance, the classical capacity $C(\mathcal{N})$ tells you how much classical information can be transmitted by using the channel. Similarly, the quantum capacity $Q(\mathcal{N})$ characterizes how much quantum information can be sent, or equivalently, how much entanglement can be created between the sender and receiver. And the private capacity $P(\mathcal{N})$ gives the amount of classical information which can be secretly sent using the channel. Additivity is the question of whether combining two different channels gives a (classical/private/quantum) capacity equal to the sum of the individual capacities.

Before diving into the quantum case, let’s look back at the classical case to see where this is all coming from. A classical channel $N$ is essentially just a conditional probability distribution $p(y|x)$ from the input $X$ to output $Y$. Claude Shannon proved that the capacity of $N$ to send (classical) information is given by the expression $C(N)=\sup_{p_x} I(X:Y)$, where $I(X:Y)=H(X)+H(Y)-H(XY)$ is the mutual information between the input and output and $H$ is the Shannon entropy. This number tells you the ratio of how many bits of noiseless information you could send relative to the number of uses of the channel, in the limit of infinitely many uses of the channel. His proof method was slick, too, not just the result — by picking an error-correcting code at random you can (almost always) achieve the capacity!
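To make Shannon's formula concrete, here's a small numerical sketch (assuming numpy; all function names are mine) that computes $I(X:Y)=H(X)+H(Y)-H(XY)$ for a binary symmetric channel and scans over input distributions to find the capacity, which should land on the textbook value $1-H_2(f)$:

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits of a probability vector."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def mutual_information(px, pyx):
    """I(X:Y) = H(X) + H(Y) - H(XY), given input dist px and channel p(y|x)."""
    pxy = px[:, None] * pyx          # joint distribution p(x, y)
    py = pxy.sum(axis=0)             # output marginal p(y)
    return entropy(px) + entropy(py) - entropy(pxy.ravel())

# Binary symmetric channel that flips the bit with probability f.
f = 0.1
pyx = np.array([[1 - f, f],
                [f, 1 - f]])

# Scan input distributions; the maximum over p_x is the capacity C(N).
cap = max(mutual_information(np.array([q, 1 - q]), pyx)
          for q in np.linspace(0.01, 0.99, 99))
print(round(cap, 4))  # approx 1 - H2(0.1), i.e. about 0.531
```

As expected, the maximum occurs at the uniform input distribution.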

The various capacities of the quantum channel $\mathcal{N}$ are defined in the same way, as the ratio of whatever you’re trying to produce to the number of channel uses, in the limit of infinitely many channel uses. Unfortunately, the expressions for these capacities are a good deal more complicated than the classical capacity of a classical channel, and involve what’s called regularization. The classical capacity, for instance, is given by $C(\mathcal{N})=\lim_{n\rightarrow\infty}\frac{1}{n}\chi(\mathcal{N}^{\otimes n})$. Here $\chi(\mathcal{N})$, usually called the Holevo quantity after its discoverer, is the capacity when using random codes. To get the true capacity, we have to regularize by taking the limit of the Holevo quantity of $n$ parallel uses of the channel. That is, we consider making arbitrary, possibly entangled inputs to $n$ channels, close our eyes and pretend that whole thing is a single use of a super-channel, and then use random coding on that.
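For a fixed ensemble of channel outputs $\{p_i,\rho_i\}$, the Holevo quantity is $\chi=S(\sum_i p_i\rho_i)-\sum_i p_i S(\rho_i)$, where $S$ is the von Neumann entropy; $\chi(\mathcal{N})$ then maximizes this over input ensembles. A minimal sketch of the fixed-ensemble computation, assuming numpy (function names are mine):

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr[rho log2 rho], computed from the eigenvalues."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return -np.sum(evals * np.log2(evals))

def holevo(probs, states):
    """chi = S(average state) - average of S(rho_i) for an output ensemble."""
    avg = sum(p * rho for p, rho in zip(probs, states))
    return von_neumann_entropy(avg) - sum(
        p * von_neumann_entropy(rho) for p, rho in zip(probs, states))

# Example ensemble: |0> and |+> sent with equal probability
# (through a noiseless qubit channel, so outputs equal inputs).
ket0 = np.array([1.0, 0.0])
ketp = np.array([1.0, 1.0]) / np.sqrt(2)
rho0 = np.outer(ket0, ket0)
rhop = np.outer(ketp, ketp)

chi = holevo([0.5, 0.5], [rho0, rhop])
print(round(chi, 4))  # less than 1 bit: the nonorthogonal states overlap
```

Since the ensemble states are pure, only the entropy of the average state contributes, and the overlap between $|0\rangle$ and $|+\rangle$ pushes $\chi$ below one bit.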

None of this would be necessary if, like the classical mutual information, the Holevo quantity were additive, so that $\chi(\mathcal{N}\otimes\mathcal{N})=2\chi(\mathcal{N})$. The regularization would be redundant—we wouldn’t have to bother with super-channels and so on—and we could say that the classical capacity is single-letterizable, since the expression for it only involves a single use of the channel. However, per Hastings [arxiv], the Holevo quantity is not additive, so the classical capacity is not single-letterizable.
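It's worth noting which direction of additivity is automatic: feeding product ensembles into the two channels never hurts, so the Holevo quantity is always superadditive,

```latex
\chi(\mathcal{N}\otimes\mathcal{M}) \;\geq\; \chi(\mathcal{N}) + \chi(\mathcal{M}),
```

and Hastings' contribution was to exhibit channels for which this inequality is strict, i.e. entangled inputs strictly help.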

Nor are the private and quantum capacities!
They are similarly defined: $P(\mathcal{N})=\lim_{n\rightarrow\infty}\frac{1}{n}P^1(\mathcal{N}^{\otimes n})$, where $P^1(\mathcal{N})$ is the private capacity using random codes. Substitute $Q$ for $P$ and you’ve got the quantum capacity. The latter was shown not to be additive by DiVincenzo, Shor, and Smolin [arxiv, arxiv] and the former by Smith, yours truly, and Smolin [arxiv].
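For the quantum capacity, the single-letter quantity $Q^1(\mathcal{N})$ is the coherent information maximized over input states, $\max_\rho [S(\mathcal{N}(\rho)) - S(E)]$, where $S(E)$ is the entropy leaked to the environment. A rough numerical sketch for the amplitude damping channel, assuming numpy (all names are mine, and I'm using the standard fact that the environment state has matrix elements $\mathrm{Tr}(K_i\rho K_j^\dagger)$ in terms of the Kraus operators; restricting to diagonal inputs is a simplification that happens to suffice for this channel):

```python
import numpy as np

def S(rho):
    """von Neumann entropy in bits."""
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return -np.sum(ev * np.log2(ev))

def coherent_information(rho, kraus):
    """I_c(rho, N) = S(channel output) - S(environment)."""
    out = sum(K @ rho @ K.conj().T for K in kraus)
    env = np.array([[np.trace(Ki @ rho @ Kj.conj().T) for Kj in kraus]
                    for Ki in kraus])
    return S(out) - S(env)

# Amplitude damping channel with damping probability g.
g = 0.2
K0 = np.array([[1.0, 0.0], [0.0, np.sqrt(1 - g)]])
K1 = np.array([[0.0, np.sqrt(g)], [0.0, 0.0]])

# Scan diagonal input states diag(1-p, p) and take the best.
best = max(coherent_information(np.diag([1 - p, p]), [K0, K1])
           for p in np.linspace(0.01, 0.99, 99))
print(round(best, 4))
```

The resulting maximum is $Q^1$ for this channel; the true capacity $Q$ would in principle require the regularized limit over many parallel uses.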

So we’re stuck with regularization, at least if we approach the question of capacity from the random coding angle, and none of these single-letter quantities is additive. But this isn’t the only thing meant by nonadditivity: the capacities themselves may also fail to be additive! Since this post is already too long, I’ll consider that question in the next one.