
# Introduction to SLAM Part II

Paul Robertson
## Review

- Localization: tracking, global localization, the kidnapping problem.
- Kalman filter: quadratic cost; assumes linear models (unless using the EKF).
- SLAM: loop closing; scaling by partitioning space into overlapping regions and using a rerouting algorithm.

Not talked about: features, exploration.
## Outline

- Topological maps
- HMMs
- SIFT
- Vision-based localization

## Topological Maps

Idea: build a qualitative map in which nodes correspond to similar sensor signatures and transitions between nodes are control actions.
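The node/transition structure above can be sketched as a small directed graph. This is a minimal illustration, not from the lecture: the class and method names (`TopologicalMap`, `add_transition`, `follow`) and the example places are invented for the sketch.

```python
# Hypothetical sketch of a topological map: nodes are sensor signatures,
# directed edges are the control actions that move the robot between them.

class TopologicalMap:
    def __init__(self):
        # edges[node] maps an action label to the node it leads to
        self.edges = {}

    def add_transition(self, node, action, next_node):
        self.edges.setdefault(node, {})[action] = next_node

    def follow(self, node, actions):
        """Replay a sequence of control actions from a starting node."""
        for action in actions:
            node = self.edges[node][action]
        return node

# A toy environment: three places linked by motion commands.
m = TopologicalMap()
m.add_transition("doorway", "forward", "corridor")
m.add_transition("corridor", "forward", "lab")
m.add_transition("lab", "turn_around", "corridor")

print(m.follow("doorway", ["forward", "forward"]))  # lab
```

With noisy sensors the current node is not directly observable, which is exactly what motivates the HMM representation discussed below.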
## Advantages of Topological Maps

- Can solve the global localization problem.
- Can solve the kidnapping problem.
- Human-like maps.
- Supports metric localization.
- Can be represented as a Hidden Markov Model (HMM).

## Hidden Markov Models (HMMs)

Scenario: your domain is represented as a set of state variables. The states define which states are reachable from any given state. State transitions involve actions. Actions are observable; states are not. You want to be able to make sense of a sequence of actions.

Examples: part-of-speech tagging, natural language parsing, speech recognition, scene analysis, location/path estimation.
## Overview of HMMs

- What a Hidden Markov Model is.
- An algorithm for finding the most likely state sequence.
- An algorithm for finding the probability of an action sequence (summing over all allowable state paths).
- An algorithm for training an HMM.

HMMs only apply to problems whose state structure can be characterized as a finite state machine in which a single action at a time is used to transition between states. They are very popular because the algorithms are linear in the length of the action sequence.
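The first two algorithms listed above can be sketched for the arc-emission formulation used in this lecture, where each transition emits one action. The encoding is an assumption made for the sketch: `arcs[s]` is a list of `(action, next_state, probability)` triples, and the function names are illustrative. Note that both loops make a single pass over the action sequence, which is where the linear running time comes from.

```python
def viterbi(arcs, start, actions):
    """Most likely state sequence given an observed action sequence."""
    paths = {start: (1.0, [start])}          # state -> (prob, best path so far)
    for a in actions:
        nxt = {}
        for s, (p, path) in paths.items():
            for action, s2, q in arcs.get(s, []):
                if action != a:
                    continue
                cand = p * q
                if s2 not in nxt or cand > nxt[s2][0]:
                    nxt[s2] = (cand, path + [s2])
        paths = nxt
    prob, path = max(paths.values(), key=lambda pp: pp[0])
    return path, prob

def sequence_probability(arcs, start, actions):
    """Forward algorithm: probability of the action sequence,
    summed over all allowable state paths."""
    alpha = {start: 1.0}                     # state -> total prob of reaching it
    for a in actions:
        nxt = {}
        for s, p in alpha.items():
            for action, s2, q in arcs.get(s, []):
                if action == a:
                    nxt[s2] = nxt.get(s2, 0.0) + p * q
        alpha = nxt
    return sum(alpha.values())

# Tiny two-state example with a noisy "go" action.
arcs = {
    "A": [("go", "B", 0.6), ("go", "A", 0.2), ("stop", "A", 0.2)],
    "B": [("go", "A", 1.0)],
}
path, p = viterbi(arcs, "A", ["go", "go"])
print(path)                                           # ['A', 'B', 'A']
print(sequence_probability(arcs, "A", ["go", "go"]))  # about 0.76
```

The only difference between the two routines is `max` versus sum when paths merge at the same state, which is why both share the same linear-time structure.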

## Hidden Markov Models

[Figure: a finite state machine with probabilities on the arcs. States s1 through s8 are linked by transitions labeled with words ("Mary", "Had", "A", "Little", "Lamb", "Roger", "Ordered", "Cooked", "John", "Big", "Hot", "Dog", "Curry", "And", "."), each annotated with a transition probability between 0.1 and 0.5.]
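A machine like the one in the figure can be encoded directly as arc data and used to score a word sequence. The vocabulary and chain structure below follow the figure, but the probability values and the exact state wiring are illustrative, since the figure's arc labels are only partially legible in this copy.

```python
# Illustrative encoding of the figure's FSM: arcs[s] lists
# (word, next_state, probability) triples. Probabilities are made up.
fsm = {
    "s1": [("Mary", "s2", 0.5), ("Roger", "s2", 0.3), ("John", "s2", 0.2)],
    "s2": [("Had", "s3", 0.4), ("Ordered", "s3", 0.3), ("Cooked", "s3", 0.3)],
    "s3": [("A", "s4", 1.0)],
    "s4": [("Little", "s5", 0.5), ("Big", "s5", 0.3), ("Hot", "s5", 0.2)],
    "s5": [("Lamb", "s6", 0.4), ("Dog", "s6", 0.3), ("Curry", "s6", 0.3)],
    "s6": [(".", "s7", 0.5), ("And", "s4", 0.5)],   # "And" loops back for conjunctions
}

def path_probability(fsm, state, words):
    """Probability of a word sequence along its (here unique) state path."""
    p = 1.0
    for w in words:
        arc = next((a for a in fsm[state] if a[0] == w), None)
        if arc is None:
            return 0.0          # sequence not accepted by the machine
        _, state, q = arc
        p *= q
    return p

print(path_probability(fsm, "s1", ["Mary", "Had", "A", "Little", "Lamb", "."]))
# about 0.02 with these illustrative numbers
```

Because each word here determines the next state, a simple product suffices; when several arcs share a label, the forward/Viterbi routines from the previous sketch are needed instead.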
Fall '05, Brian Williams