Markov vs. Kalman Filter Localization

• Markov localization can localize the robot starting from any unknown position and recovers from ambiguous situations. However, updating the probability of all positions within the whole state space at every time step requires a discrete representation of the space (a grid). The required memory and computation can therefore become very large if a fine grid is used.
• Kalman filter localization tracks the robot and is inherently very precise and efficient. However, if the robot's uncertainty becomes too large (e.g. after a collision with an object), the Kalman filter will fail and the position is definitively lost.

Markov Localization (1)

• Markov localization uses an explicit, discrete representation of the probability over all positions in the state space.
• This is usually done by representing the environment as a grid or a topological graph with a finite number of possible states (positions).
• During each update, the probability of every state (element) of the entire space is updated.

Markov Localization (2): Applying probability theory to robot localization

• P(A): probability that A is true.
  e.g. p(r_t = l): probability that the robot r is at position l at time t (prior).
• We wish to compute the probability of each individual robot position given the robot's actions and sensor measurements.
• P(A | B): conditional probability of A given that we know B.
  e.g. p(r_t = l | i_t): probability that the robot is at position l given the sensor input i_t.
• Product rule: p(A ∧ B) = p(A | B) p(B) = p(B | A) p(A)
• Bayes rule: p(A | B) = p(B | A) p(A) / p(B)

Bayes rule example

• Suppose a robot obtains a measurement z.
• What is P(open | z)?
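The grid-based update described above can be sketched in a few lines: each time step convolves the belief with a motion model (prediction) and then multiplies it by the sensor likelihood and renormalizes (correction). This is a minimal illustration, not the lecture's code; the 5-cell corridor, the motion probabilities, and the sensor likelihoods are all assumed for the example.

```python
# Minimal sketch of 1-D grid-based Markov localization.
# The world, motion model, and sensor likelihoods are assumed for illustration.

def predict(belief, motion_model):
    """Prediction step: convolve the belief with the motion model."""
    n = len(belief)
    new_belief = [0.0] * n
    for i in range(n):
        for offset, p in motion_model.items():
            new_belief[(i + offset) % n] += belief[i] * p  # circular world
    return new_belief

def correct(belief, likelihood):
    """Correction step: multiply by sensor likelihood, then normalize."""
    posterior = [b * l for b, l in zip(belief, likelihood)]
    total = sum(posterior)
    return [p / total for p in posterior]

# Example: 5-cell circular corridor, robot commanded one cell forward.
belief = [0.2] * 5                      # uniform prior: position unknown
motion = {1: 0.8, 0: 0.1, 2: 0.1}       # assumed noisy "move forward" model
belief = predict(belief, motion)
likelihood = [0.1, 0.1, 0.9, 0.1, 0.1]  # assumed sensor: feature seen at cell 2
belief = correct(belief, likelihood)    # belief now peaks at cell 2
```

Note that every cell of the grid is touched on every update, which is exactly the memory/computation cost the slide warns about for fine grids.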
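The P(open | z) question can be worked through numerically with Bayes rule. The prior and the two conditional likelihoods below are assumed illustrative values, not numbers from the slide.

```python
# Worked Bayes-rule example for P(open | z).
# Assumed numbers: P(open) = 0.5, P(z | open) = 0.6, P(z | not open) = 0.3.
p_open = 0.5
p_z_given_open = 0.6
p_z_given_closed = 0.3

# Total probability: P(z) = P(z|open)P(open) + P(z|not open)P(not open)
p_z = p_z_given_open * p_open + p_z_given_closed * (1 - p_open)

# Bayes rule: P(open|z) = P(z|open)P(open) / P(z)
p_open_given_z = p_z_given_open * p_open / p_z  # = 2/3 with these numbers
```

With these assumed values the measurement raises the belief that the door is open from 0.5 to 2/3.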
This note was uploaded on 04/07/2010 for the course CS 685 taught by Professor S. Luke during the Fall '08 term at George Mason.