

pumpingstrings1

Course: CS 121, Fall 2011
School: Harvard


Computer Science E-207
A Proof by Contradiction

Let's prove that L = {a^k : k = q² for some q ≥ 0} is not regular.

First, some intuition. Note that L = {ε, a, aaaa, aaaaaaaaa, aaaaaaaaaaaaaaaa, . . .}. Your instincts should tell you that a DFA couldn't possibly accept this language, since the gaps between consecutive string lengths keep growing rather than repeating with any fixed period. Not convinced? Try sketching an NFA that accepts {ε}; then augment it to accept {ε, a}; then augment it to accept {ε, a, aaaa}; then augment it to accept {ε, a, aaaa, aaaaaaaaa}. I don't imagine your efforts at each step are identical to any previous efforts. In fact, continue in this fashion, and you'll never converge on a single NFA that handles all of L.

Now, our proof. Suppose that L is regular. Then there must exist some pumping length, p, such that any string, s ∈ L, must be pumpable, provided |s| ≥ p. Well, let's just see. Let's choose s = a^(p²), which clearly is in L since s can be written as a^k where k = q² for q = p. Let's now consider how we might choose x, y, z such that s = xyz. Clearly, xy must contain at least one a and no more than p, in light of the requirements that |xy| ≤ p and y ≠ ε. Since we've supposed that L is regular, s must be pumpable, so it must be that xy²z ∈ L. But, wait: what's the length of xy²z? Well, since |xyz| = p², it must be that |xy²z| = p² + |y| ≤ p² + p, since, if |xy| ≤ p, it must be that |y| ≤ p. Now wait a minute. The length of s = a^(p²) was p². Shouldn't the length of the next string in L's quadratic sequence, a^((p+1)²), be (p + 1)² = p² + 2p + 1? Indeed! But note that p² < p² + |y| ≤ p² + p < p² + 2p + 1, the implication of which is that the length of xy²z is strictly between two consecutive squares! Hence, it can't be that xy²z = a^k, where k = q² for some q ≥ 0, in which case xy²z isn't in L, which contradicts the pumping lemma. Since the pumping lemma is provably true, the flaw in our argument must be our assumption that L is regular. Ergo, L is not regular. QED.
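The arithmetic core of the argument is that every pumped length p² + |y| (with 1 ≤ |y| ≤ p) lands strictly between the consecutive squares p² and (p + 1)². That's easy to spot-check mechanically; here's a small Python sketch (not from the original notes — the function names are ours) that enumerates every legal |y| for a range of candidate pumping lengths and confirms none of the resulting lengths is a perfect square:

```python
import math

def is_perfect_square(n):
    """Return True iff n = q^2 for some integer q >= 0."""
    r = math.isqrt(n)
    return r * r == n

def pumped_lengths(p):
    """Possible lengths of x y^2 z when s = a^(p^2) is split with
    |xy| <= p and y nonempty: p^2 + |y| for 1 <= |y| <= p."""
    return [p * p + y_len for y_len in range(1, p + 1)]

# For each candidate pumping length p, every pumped length falls
# strictly between p^2 and (p+1)^2, so it cannot be a square.
for p in range(1, 200):
    for n in pumped_lengths(p):
        assert p * p < n < (p + 1) ** 2
        assert not is_perfect_square(n)
```

Of course, a finite check is no substitute for the proof above; it just illustrates why the choice s = a^(p²) traps xy²z between consecutive squares no matter how the adversary splits s.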