PCPs and Inapproximability: Expander Graphs and their Applications

My T. Thai
1 Introduction

Since the introduction of expander graphs in the 1970s, they have turned out to be a significant tool in both theory and practice. Last time, we saw how to use expander graphs to show the hardness of approximating MAX-3SAT($k$). Indeed, they have been used to solve many problems in communication, including topology design, error-correcting codes, cryptographic hash functions, and worm propagation schemes. And of course, expander graphs are also used for hardness of approximation and gap amplification. To begin, let us consider the following two problems and see how they connect to expander graphs.

1.1 Some Problems

Error Correcting Codes. Assume that Alice has a message of $k$ bits which she would like to deliver to Bob over some communication channel. However, a fraction $p$ of the bits may be flipped due to noise, so the message that Bob receives might differ from the one that Alice sent. How can Alice send Bob a message of $k$ bits so that Bob can recover it correctly? That is, what is the smallest number of bits Alice can send so that Bob is able to unambiguously recover the original $k$-bit message?

During the 1940s, Claude Elwood Shannon developed the theory of communication (now called information theory) and presented an answer to this problem. He suggested building a dictionary (or code) $C \subseteq \{0,1\}^n$ of size $|C| = 2^k$ and using a bijective mapping (an "encoding") $\varphi : \{0,1\}^k \to C$. To send a message $x \in \{0,1\}^k$, Alice transmits the $n$-bit encoded message $\varphi(x) \in C$. Assume that Bob receives a string $y \in \{0,1\}^n$ that is a corrupted version of $\varphi(x)$. Bob finds the codeword $z \in C$ that is closest to $y$ in Hamming distance and outputs the $k$ bits associated with it. If the minimal distance between any two words of $C$ is greater than $2pn$, then $\varphi(x)$ is the unique codeword within Hamming distance $pn$ of $y$, so the $k$ bits that Bob finds are exactly the bits Alice encoded. Therefore the problem of communicating over a noisy channel reduces to the problem of finding a good dictionary (code).
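The decoding rule above is easy to state in code. The following is a minimal Python sketch, not part of the notes: the helper names `hamming_distance` and `decode` and the toy 3-fold repetition code are our own illustrative choices.

```python
from itertools import product

def hamming_distance(a: str, b: str) -> int:
    """Number of positions where two equal-length bit strings differ."""
    return sum(x != y for x, y in zip(a, b))

def decode(y: str, code: dict[str, str]) -> str:
    """Nearest-codeword decoding: return the message whose codeword
    is closest to the received string y in Hamming distance."""
    return min(code, key=lambda msg: hamming_distance(code[msg], y))

# Toy dictionary: the 3-fold repetition code with k = 2, n = 6.
# Its minimum distance is 3 > 2*p*n whenever p < 1/4, so it
# unambiguously corrects a single flipped bit.
code = {"".join(bits): "".join(bits) * 3 for bits in product("01", repeat=2)}

sent = code["10"]        # "101010"
received = "100010"      # the channel flipped one bit
assert decode(received, code) == "10"
```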
Problem 1. (Communication Problem.) Is it possible to design a family of dictionaries $\{C_k\}$ with $|C_k| = 2^k$ such that the distance of each dictionary is greater than some constant $\delta_0 > 0$ and the rate of each code is greater than some constant $R_0 > 0$? Here the rate of a dictionary is defined as
$$R = \frac{\log |C|}{n},$$
and the distance of a code is defined as
$$\delta = \min_{c_1 \neq c_2 \in C} \frac{d_H(c_1, c_2)}{n},$$
where $d_H$ is the Hamming distance. (A small sketch computing both quantities for a toy code appears below.)

Now, let us consider another problem which seems to be unrelated.

De-randomizing Algorithms. Checking primality (Rabin, 1980): given an integer $x$ of $k$ bits and a string $r$ of $k$ random bits, the algorithm computes a function $f(x, r)$ such that if $x$ is prime, then $f(x, r) = 1$; otherwise, $f(x, r) = 1$ with probability smaller than $1/4$. Applying this algorithm over and over again with fresh random bits makes the error arbitrarily small: after $t$ independent runs, a composite number passes every run with probability smaller than $(1/4)^t$. Clearly, this process involves the use of more and more random bits, namely $kt$ of them.
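Returning to Problem 1, both quantities are straightforward to compute for any explicit dictionary. A minimal Python sketch (the helper names and the toy code are our own, not from the notes):

```python
from itertools import combinations
from math import log2

def rate(code: set[str]) -> float:
    """Rate R = log|C| / n, where n is the block length."""
    n = len(next(iter(code)))
    return log2(len(code)) / n

def distance(code: set[str]) -> float:
    """Normalized minimum distance: min over codeword pairs of d_H / n."""
    n = len(next(iter(code)))
    return min(sum(x != y for x, y in zip(c1, c2))
               for c1, c2 in combinations(code, 2)) / n

repetition = {"000000", "010101", "101010", "111111"}
print(rate(repetition))      # 1/3
print(distance(repetition))  # 1/2
```

Note that the family of 3-fold repetition codes $\{C_k\}$ (block length $n = 3k$) keeps the rate fixed at $1/3$, but its minimum distance stays $3$, so its normalized distance $3/(3k)$ tends to $0$ as $k$ grows. This is exactly why Problem 1, which asks for both quantities to stay bounded away from zero simultaneously, is nontrivial.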
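To make the amplification step concrete, here is a minimal Python sketch. The preview does not include Rabin's actual test, so `fermat_test` below is a hypothetical stand-in playing the role of $f(x, r)$; the point of the sketch is the repetition in `amplify` and its randomness cost, not the primality test itself.

```python
import random

def fermat_test(x: int, r: int) -> bool:
    """A stand-in one-sided test in the role of f(x, r) -- NOT Rabin's
    actual algorithm.  It derives a base from the random bits r and
    checks Fermat's little theorem: a prime x always passes, while a
    composite x passes only for unlucky choices of the base."""
    a = 2 + r % (x - 3)          # base in [2, x - 2], derived from r
    return pow(a, x - 1, x) == 1

def amplify(x: int, t: int, k: int) -> bool:
    """Repeat a one-sided test t times with fresh random bits.
    A prime always passes; a composite passes all t rounds with
    probability at most (per-round error)**t.  The cost: t * k
    random bits instead of k."""
    return all(fermat_test(x, random.getrandbits(k)) for _ in range(t))

print(amplify(101, t=20, k=32))  # True: 101 is prime
print(amplify(221, t=20, k=32))  # almost surely False: 221 = 13 * 17
```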