Chapter 5

Quantum Information Theory

Quantum information theory is a rich subject that could easily have occupied us all term. But because we are short of time (I'm anxious to move on to quantum computation), I won't be able to cover this subject in as much depth as I would have liked. We will settle for a brisk introduction to some of the main ideas and results. The lectures will perhaps be sketchier than in the first term, with more hand waving and more details to be filled in through homework exercises. Perhaps this chapter should have been called "quantum information theory for the impatient."

Quantum information theory deals with four main topics:

(1) Transmission of classical information over quantum channels (which we will discuss).

(2) The tradeoff between acquisition of information about a quantum state and disturbance of the state (briefly discussed in Chapter 4 in connection with quantum cryptography, but given short shrift here).

(3) Quantifying quantum entanglement (which we will touch on briefly).

(4) Transmission of quantum information over quantum channels. (We will discuss the case of a noiseless channel, but we will postpone discussion of the noisy channel until later, when we come to quantum error-correcting codes.)

These topics are united by a common recurring theme: the interpretation and applications of the Von Neumann entropy.

5.1 Shannon for Dummies

Before we can understand Von Neumann entropy and its relevance to quantum information, we must discuss Shannon entropy and its relevance to classical information.

Claude Shannon established the two core results of classical information theory in his landmark 1948 paper. The two central problems that he solved were:

(1) How much can a message be compressed; i.e., how redundant is the information? (The "noiseless coding theorem.")

(2) At what rate can we communicate reliably over a noisy channel; i.e., how much redundancy must be incorporated into a message to protect against errors? (The "noisy channel coding theorem.")

Both questions concern redundancy – how unexpected is the next letter of the message, on the average. One of Shannon's key insights was that entropy provides a suitable way to quantify redundancy.

I call this section "Shannon for Dummies" because I will try to explain Shannon's ideas quickly, with a minimum of ε's and δ's. That way, I can compress classical information theory to about 11 pages.

5.1.1 Shannon entropy and data compression

A message is a string of letters chosen from an alphabet of k letters

    {a_1, a_2, ..., a_k}.                                          (5.1)

Let us suppose that the letters in the message are statistically independent, and that each letter a_x occurs with an a priori probability p(a_x), where ∑_{x=1}^{k} p(a_x) = 1. For example, the simplest case is a binary alphabet, where 0 occurs with probability 1 − p and 1 with probability p (where 0 ≤ p ≤ 1).
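For a source of statistically independent letters like this, the quantity that governs compressibility is the Shannon entropy H = −∑_x p(a_x) log2 p(a_x), measured in bits per letter. As a quick numerical sketch (the function names here are my own, not from the notes, and `log2` is the base-2 logarithm from Python's standard library):

```python
from math import log2

def shannon_entropy(probs):
    """Shannon entropy H = -sum_x p_x log2(p_x) of a distribution, in bits.

    Terms with p_x = 0 contribute nothing, consistent with the
    convention 0 log 0 = 0.
    """
    assert abs(sum(probs) - 1.0) < 1e-9, "probabilities must sum to 1"
    return -sum(p * log2(p) for p in probs if p > 0)

def binary_entropy(p):
    """Entropy of the binary alphabet: 0 with probability 1 - p, 1 with p."""
    return shannon_entropy([1 - p, p])

# A fair coin is maximally unpredictable: 1 bit per letter, incompressible.
print(binary_entropy(0.5))   # 1.0
# A biased coin carries less information per letter, so a long message
# can be compressed to about n * H(p) bits.
print(binary_entropy(0.1))   # ≈ 0.469
```

The biased case illustrates the point of the noiseless coding theorem: a message of n letters with p = 0.1 can be compressed to roughly 0.47 n bits, rather than the n bits a naive encoding would use.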
This note was uploaded on 01/24/2012 for the course PHYS 219 taught by Professor John Preskill during the Fall '11 term at Caltech.