PCPs and Inapproximability: Expander Graphs and their Applications

My T. Thai

1 Introduction

Since their introduction in the 1970s, expander graphs have turned out to be a significant tool in both theory and practice. Last time, we saw how to use expander graphs to show hardness of approximation for MAX-3SAT(k). They have also been used to solve many problems in communication, including topology design, error-correcting codes, cryptographic hash functions, and worm propagation schemes. And of course, expander graphs are also used for hardness of approximation and gap amplification. To begin, let us consider the following two problems and see how they connect to expander graphs.

1.1 Some Problems

Error-Correcting Codes. Assume that Alice has a message of k bits which she would like to deliver to Bob over some communication channel. However, a fraction p of the bits may be flipped due to noise, so the message that Bob receives might differ from the one that Alice sent. How can Alice send Bob a k-bit message so that Bob can receive it correctly? That is, what is the smallest number of bits that Alice can send so that Bob is able to unambiguously recover the original k-bit message?

During the 1940s, Claude Elwood Shannon developed the theory of communication (which is called information theory), and he presented an answer to this problem. He suggested building a dictionary (or code) C ⊆ {0,1}^n of size |C| = 2^k and using a bijective mapping (an encoding) E : {0,1}^k → C. To send a message x ∈ {0,1}^k, Alice transmits the n-bit encoded message E(x) ∈ C. Assume that Bob receives a string y ∈ {0,1}^n that is a corrupted version of E(x). Bob finds the codeword z ∈ C that is closest to y in terms of Hamming distance and outputs the k bits associated with it.
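The decoding procedure above can be sketched in a few lines of Python. The toy code below (a 3-fold repetition code for 2-bit messages, so n = 6) is an illustration chosen for this sketch, not one of the codes these notes construct:

```python
from itertools import product

def hamming_distance(a, b):
    """Number of positions where two equal-length bit strings differ."""
    return sum(x != y for x, y in zip(a, b))

def decode(y, code):
    """Nearest-codeword decoding: return the codeword closest to the
    received string y in Hamming distance."""
    return min(code, key=lambda c: hamming_distance(c, y))

# Toy dictionary: encode each 2-bit message x as E(x) = xxx (3-fold repetition).
messages = ["".join(bits) for bits in product("01", repeat=2)]
code = {m * 3: m for m in messages}   # codeword -> original message

# The quantities from the notes, for this toy code:
min_dist = min(hamming_distance(c1, c2)
               for c1 in code for c2 in code if c1 != c2)   # minimal distance = 3
rate = 2 / 6                                                # R = k/n = 1/3

# One corrupted bit in E("01") = "010101" is still decoded correctly,
# since the minimal distance 3 exceeds 2pn = 2 here.
received = "110101"
recovered = code[decode(received, code)]
```

Here `p = 1/6` (one flipped bit out of six), and since the minimal distance 3 is greater than 2pn = 2, decoding recovers the original message, matching the condition stated next.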
If the minimal distance between two codewords of C is greater than 2pn, the k bits that Bob finds are guaranteed to be exactly the bits Alice encoded. Therefore the problem of communicating over a noisy channel reduces to the problem of finding a good dictionary (code).

Problem 1. (Communication Problem.) Is it possible to design a family of dictionaries {C_k} with |C_k| = 2^k such that the distance of each dictionary is greater than some constant δ > 0 and the rate of each code is greater than some constant R > 0? Here the rate of a dictionary is defined as

    R = log|C| / n = k/n

and the distance of a code is defined as

    δ = min_{c_1 ≠ c_2 ∈ C} d_H(c_1, c_2) / n,

where d_H is the Hamming distance.

Now, let us consider another problem which seems to be unrelated:

De-randomizing Algorithms. Checking primality (Rabin, 1980): given an integer x of k bits and a set of k random bits r, the algorithm computes a function f(x, r) such that if x is prime, then f(x, r) = 1; otherwise, f(x, r) = 1 with probability smaller than 1/4. Applying this algorithm over and over again can reduce the error to be arbitrarily small. Clearly, this process involves the use of more and more random bits.
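The repetition-based error reduction just described can be sketched concretely with the Miller-Rabin test (the standard form of Rabin's primality test). This is a minimal illustrative implementation, not code from the notes; repeating the one-sided test t times and accepting only if every round accepts drives the error below (1/4)^t, at the cost of t independent batches of random bits:

```python
import random

def miller_rabin_round(n, a):
    """One round of the Miller-Rabin test with base a, for odd n > 3.
    Returns True ("probably prime") or False ("definitely composite")."""
    d, s = n - 1, 0
    while d % 2 == 0:          # write n - 1 = d * 2**s with d odd
        d //= 2
        s += 1
    x = pow(a, d, n)           # modular exponentiation
    if x in (1, n - 1):
        return True
    for _ in range(s - 1):
        x = pow(x, 2, n)
        if x == n - 1:
            return True
    return False

def is_probably_prime(n, rounds=20):
    """Amplified test: if n is prime, always returns True; if n is
    composite, returns True with probability at most 4**(-rounds).
    Each round consumes a fresh batch of random bits."""
    if n < 2:
        return False
    if n in (2, 3):
        return True
    if n % 2 == 0:
        return False
    return all(miller_rabin_round(n, random.randrange(2, n - 1))
               for _ in range(rounds))
```

The point relevant to de-randomization is visible in `is_probably_prime`: the error shrinks exponentially in `rounds`, but the number of random bits consumed grows linearly with it, which is exactly the cost that expander-based techniques aim to reduce.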
This note was uploaded on 05/20/2011 for the course CIS 6930 taught by Professor Staff during the Spring '08 term at University of Florida.