AMCS 260/CS 260 Design and Analysis of Algorithms
6. Randomized Algorithms
Mikhail Moshkov
Division of Mathematical and Computer Sciences and Engineering
King Abdullah University of Science and Technology
Spring 2010

Randomized Algorithms

We will consider examples of randomized algorithms, which can make random decisions during their work. Randomized algorithms are often conceptually much simpler than deterministic ones. We will consider algorithms that are always correct and run efficiently in expectation, as well as an algorithm that gives a correct answer with high probability. This section also contains examples of average-case analysis of algorithms.

Probability

Let S be a sample space, which is a set whose elements are called elementary events. An event is a subset of S; S itself is called the certain event, and ∅ is called the null event. Two events A and B are disjoint (mutually exclusive) if A ∩ B = ∅.

Probability

A probability distribution Pr{·} on a sample space S is a mapping from events to real numbers such that the following probability axioms are satisfied:
1. Pr{A} ≥ 0 for any event A.
2. Pr{S} = 1.
3. Pr{A ∪ B} = Pr{A} + Pr{B} for any two disjoint events A and B. More generally, for any (finite or countably infinite) sequence of events A1, A2, ... that are pairwise disjoint,
Pr{∪_i A_i} = Σ_i Pr{A_i}.

Probability

We have Pr{∅} = 0 and Pr{Ā} = 1 − Pr{A}, where Ā = S \ A. If A ⊆ B, then Pr{A} ≤ Pr{B}. For any two events A and B,
Pr{A ∪ B} = Pr{A} + Pr{B} − Pr{A ∩ B}.

Probability

A probability distribution is discrete if S is a finite or countably infinite sample space. In this case, for any event A,
Pr{A} = Σ_{s ∈ A} Pr{s}.
If S is finite and Pr{s} = 1/|S| for every s ∈ S, then we have the uniform probability distribution on S.
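The axioms and their consequences above can be checked mechanically on a small finite sample space. The sketch below (not from the slides; `uniform_pr` is a hypothetical helper) uses one roll of a fair die under the uniform distribution, with exact rational arithmetic:

```python
from fractions import Fraction

def uniform_pr(event, sample_space):
    """Pr{A} under the uniform distribution: |A| / |S|."""
    return Fraction(len(event & sample_space), len(sample_space))

S = frozenset(range(1, 7))   # one roll of a fair die
A = {2, 4, 6}                # event "the roll is even"
B = {1, 3}                   # disjoint from A

# Axioms: Pr{A} >= 0, Pr{S} = 1, additivity for disjoint events.
assert uniform_pr(A, S) >= 0
assert uniform_pr(S, S) == 1
assert A & B == set()
assert uniform_pr(A | B, S) == uniform_pr(A, S) + uniform_pr(B, S)

# Consequences: complement rule and inclusion-exclusion.
assert uniform_pr(S - A, S) == 1 - uniform_pr(A, S)
C = {4, 5, 6}
assert uniform_pr(A | C, S) == uniform_pr(A, S) + uniform_pr(C, S) - uniform_pr(A & C, S)
```

Using `Fraction` rather than floats keeps every probability exact, so the identities hold as equalities rather than up to rounding error.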
Probability

The conditional probability of an event A, given that another event B occurs, is defined to be
Pr{A | B} = Pr{A ∩ B} / Pr{B}
whenever Pr{B} ≠ 0. We read Pr{A | B} as "the probability of A given B".

Probability

Two events A and B are independent if Pr{A ∩ B} = Pr{A} Pr{B}. A collection of events A1, ..., An is independent if, for every set of indices I ⊆ {1, ..., n}, we have
Pr{∩_{i ∈ I} A_i} = Π_{i ∈ I} Pr{A_i}.
In the general case,
Pr{A1 ∩ ... ∩ An} = Pr{A1} · Pr{A2 | A1} · ... · Pr{A_{j+1} | A1 ∩ ... ∩ A_j} · ... · Pr{An | A1 ∩ ... ∩ A_{n−1}}.

Probability

A (discrete) random variable X is a function from a finite or countably infinite sample space S to the real numbers. For a random variable X and a real number x, we define the event X = x to be {s ∈ S : X(s) = x}. Thus,
Pr{X = x} = Σ_{s ∈ S, X(s) = x} Pr{s}.

Probability

The function f(x) = Pr{X = x} is the probability density function of the random variable X. From the probability axioms, f(x) ≥ 0 and Σ_x f(x) = 1.
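Conditional probability, independence, and the density of a random variable can likewise be verified on a concrete sample space. A minimal sketch (the helpers `pr` and `cond_pr` are illustrative, not part of the course material), using two independent rolls of a fair die:

```python
from fractions import Fraction
from itertools import product

# Sample space: ordered pairs of two fair die rolls, uniform distribution.
S = list(product(range(1, 7), repeat=2))

def pr(event):
    """Pr{A} for an event given as a predicate on elementary events."""
    return Fraction(sum(1 for s in S if event(s)), len(S))

def cond_pr(a, b):
    """Pr{A | B} = Pr{A ∩ B} / Pr{B}, defined whenever Pr{B} != 0."""
    pb = pr(b)
    assert pb != 0
    return pr(lambda s: a(s) and b(s)) / pb

first_even = lambda s: s[0] % 2 == 0
second_six = lambda s: s[1] == 6

# Independence of the two rolls: Pr{A ∩ B} = Pr{A} Pr{B},
# equivalently Pr{A | B} = Pr{A}.
assert pr(lambda s: first_even(s) and second_six(s)) == pr(first_even) * pr(second_six)
assert cond_pr(first_even, second_six) == pr(first_even)

# Random variable X(s) = sum of the two rolls; its density f(x) = Pr{X = x}
# is nonnegative and sums to 1.
X = lambda s: s[0] + s[1]
f = {x: pr(lambda s, x=x: X(s) == x) for x in range(2, 13)}
assert all(v >= 0 for v in f.values())
assert sum(f.values()) == 1
```

Representing events as predicates keeps the intersection A ∩ B a one-liner (`a(s) and b(s)`), matching the set-theoretic definitions above.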