Introduction to SLAM Part II
Paul Robertson
Review
• Localization: Tracking, Global Localization, Kidnapping Problem.
• Kalman Filter
  – Quadratic
  – Linear (unless EKF)
• SLAM
  – Loop closing
  – Scaling: partition space into overlapping regions, use a rerouting algorithm.
• Not Talked About
  – Features
  – Exploration
Outline
• Topological Maps
• HMM
• SIFT
• Vision Based Localization
Topological Maps
Idea: Build a qualitative map in which the nodes are similar sensor signatures and the transitions between nodes are control actions.
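The idea above can be sketched as a small graph: nodes keyed by sensor signature, edges labeled by the control action that moves the robot between them. The node and action names here are made-up illustrations, not from the lecture.

```python
# Minimal topological-map sketch: each node (a sensor signature) maps
# control actions to successor nodes. All names are hypothetical.
topo_map = {
    "corridor_A": {"turn_left": "doorway_1", "forward": "corridor_B"},
    "doorway_1":  {"forward": "room_1"},
    "corridor_B": {"turn_right": "doorway_2"},
    "doorway_2":  {"forward": "room_2"},
}

def execute(start, actions):
    """Follow a sequence of control actions through the map."""
    node = start
    for a in actions:
        node = topo_map[node][a]
    return node

print(execute("corridor_A", ["forward", "turn_right", "forward"]))  # room_2
```

Because the map is qualitative (a graph, not a metric grid), localization reduces to identifying which node best matches the current sensor signature.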
Advantages of Topological Maps
• Can solve the Global Localization Problem.
• Can solve the Kidnapping Problem.
• Human-like maps.
• Supports metric localization.
• Can be represented as a Hidden Markov Model (HMM).
Hidden Markov Models (HMM)
Scenario:
– Your domain is represented as a set of states.
– The states define which states are reachable from any given state.
– State transitions involve actions.
– Actions are observable; states are not.
– You want to be able to make sense of a sequence of actions.
Examples: part-of-speech tagging, natural language parsing, speech recognition, scene analysis, location/path estimation.
Overview of HMM
• What a Hidden Markov Model is.
• An algorithm for finding the most likely state sequence (Viterbi).
• An algorithm for finding the probability of an action sequence, summing over all allowable state paths (the forward algorithm).
• An algorithm for training an HMM (Baum-Welch).
• Only works for problems whose state structure can be characterized as a finite state machine in which a single action at a time is used to transition between states.
• Very popular because the algorithms are linear in the length of the action sequence.
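The "most likely state sequence" algorithm the slide refers to can be sketched on a toy HMM. All the state names, observation indices, and probabilities below are made-up illustrations, not values from the lecture.

```python
# Viterbi sketch on a hypothetical two-state robot HMM.
# States are hidden places; observations are coarse sensor readings
# (index 0 = "wall", index 1 = "open"). All numbers are invented.
states = ["hall", "room"]
pi = [0.6, 0.4]            # prior over initial state
A = [[0.7, 0.3],           # A[i][j] = P(next state j | current state i)
     [0.4, 0.6]]
B = [[0.9, 0.1],           # B[i][k] = P(observation k | state i)
     [0.2, 0.8]]

def viterbi(obs):
    """Most likely hidden state sequence for an observation sequence.
    Runs in time linear in len(obs), as the slide notes."""
    n = len(states)
    delta = [pi[i] * B[i][obs[0]] for i in range(n)]  # best prefix prob per state
    back = []                                         # backpointers per step
    for o in obs[1:]:
        scores = [[delta[i] * A[i][j] for i in range(n)] for j in range(n)]
        back.append([max(range(n), key=lambda i: scores[j][i]) for j in range(n)])
        delta = [max(scores[j]) * B[j][o] for j in range(n)]
    # Backtrack from the best final state.
    path = [max(range(n), key=lambda j: delta[j])]
    for ptrs in reversed(back):
        path.append(ptrs[path[-1]])
    return [states[j] for j in reversed(path)]

print(viterbi([0, 0, 1]))  # ['hall', 'hall', 'room']
```

Replacing `max` with `sum` over incoming scores (and dropping the backpointers) turns this into the forward algorithm for the probability of the observation sequence.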
Hidden Markov Models
[Figure: a finite state machine with states s1–s8; each arc is labeled with a word ("Mary", "Had", "A", "Little", "Lamb", "John", "Ordered", "Curry", …) and a transition probability.]
A finite state machine with probabilities on the arcs.
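A machine like this can be encoded by giving each state a list of (word, probability, destination) arcs; the probability of a word sequence is then a sum over all state paths that emit it, as described on the previous slide. The states, words, and probabilities below are made-up stand-ins, not the figure's (partly garbled) values.

```python
# Hypothetical word FSM with probabilities on the arcs, loosely in the
# spirit of the slide's "Mary had a little lamb" machine.
arcs = {
    "s1": [("Mary", 0.5, "s2"), ("John", 0.5, "s2")],
    "s2": [("Had", 0.6, "s3"), ("Ordered", 0.4, "s3")],
    "s3": [("A", 1.0, "s4")],
    "s4": [("Little", 0.5, "s5"), ("Big", 0.5, "s5")],
    "s5": [("Lamb", 0.5, "s6"), ("Dog", 0.5, "s6")],
    "s6": [(".", 1.0, "s7")],
}

def sequence_probability(words, start="s1"):
    """P(word sequence), summed over all state paths that emit it."""
    probs = {start: 1.0}              # P(reaching state while emitting prefix)
    for w in words:
        nxt = {}
        for state, p in probs.items():
            for word, q, dest in arcs.get(state, []):
                if word == w:
                    nxt[dest] = nxt.get(dest, 0.0) + p * q
        probs = nxt
    return sum(probs.values())

print(sequence_probability(["Mary", "Had", "A", "Little", "Lamb", "."]))  # 0.075
```

Here each arc probability plays the role of both transition and emission probability, since in this machine the action (the word) is what is observed.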