Signal and Image Processing with Belief Propagation
Erik B. Sudderth and William T. Freeman
Accepted to appear in the IEEE Signal Processing Magazine, DSP Applications column

Many practical signal processing applications involve large, complex collections of hidden variables and uncertain parameters. For example, modern communication systems typically couple sophisticated error-correcting codes with schemes for adaptive channel equalization. Additionally, many computer vision algorithms use prior knowledge about the statistics of typical surfaces to infer the three-dimensional (3D) shape of a scene from ambiguous, local image measurements. Probabilistic graphical models provide a powerful, general framework for designing systems like these. In this approach, graphs are used to decompose joint distributions into a set of local constraints and dependencies. Such modular structure provides an intuitive language for expressing domain-specific knowledge, and facilitates the transfer of modeling advances to new applications. Once a problem has been formulated using a graphical model, a wide range of efficient algorithms for statistical learning and inference can then be directly applied.

In this column, we review a particularly effective inference algorithm known as belief propagation (BP). After describing its message-passing structure, we demonstrate the interplay of statistical modeling and inference in two challenging applications: denoising discrete signals transmitted over noisy channels, and dense 3D reconstruction from stereo images.

1 Graphical Models and Factor Graphs

Several different formalisms have been proposed that use graphs to represent probability distributions [1, 2]. For example, directed graphical models, or Bayesian networks, are widely used in artificial intelligence to describe causal, generative processes.
Special cases of interest in control and signal processing include hidden Markov models (HMMs) and continuous state space models. Alternatively, undirected graphical models, or Markov random fields (MRFs), provide popular models for the symmetric dependencies arising with spatial or image data.

In what follows, we focus on models defined by factor graphs [3, 4], which are often used in communications and information theory. These bipartite graphs have two sets of nodes or vertices. Each variable node s ∈ V is associated with a random variable x_s, which for now we assume takes values in some finite, discrete set X_s. These hidden variables could represent signals, parameters, or other processes that influence the modeled system, but are not directly observed. Each factor node f ∈ F is then uniquely indexed by a subset f ⊂ V of the hidden variables that directly interact. In particular, the joint distribution of x = {x_s | s ∈ V} is defined ...

Figure 1: Factor graph representations of three probabilistic models. Circular nodes are random variables, which interact via square factor nodes (shaded). (a) A hidden Markov model (HMM). ...
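To make the factorization concrete, the sketch below (not code from the article; the variable names, state spaces, and factor tables are invented for illustration) represents a tiny factor graph explicitly: each factor is a table over its subset of variables, and the joint distribution is the normalized product of all factor values.

```python
import itertools

# Illustrative factor graph over three binary variables.
# Each factor: (tuple of variable names it touches, table mapping
# an assignment of those variables to a nonnegative weight).
states = {"a": [0, 1], "b": [0, 1], "c": [0, 1]}  # the sets X_s
factors = [
    (("a", "b"), {(0, 0): 2.0, (0, 1): 1.0, (1, 0): 1.0, (1, 1): 3.0}),
    (("b", "c"), {(0, 0): 1.0, (0, 1): 2.0, (1, 0): 2.0, (1, 1): 1.0}),
]

def joint_distribution(states, factors):
    """Return p(x) proportional to the product of all local factors."""
    names = sorted(states)
    unnorm = {}
    for values in itertools.product(*(states[n] for n in names)):
        assign = dict(zip(names, values))
        w = 1.0
        for scope, table in factors:
            w *= table[tuple(assign[v] for v in scope)]
        unnorm[values] = w
    Z = sum(unnorm.values())  # partition function (normalization constant)
    return {k: v / Z for k, v in unnorm.items()}

p = joint_distribution(states, factors)
print(round(sum(p.values()), 6))  # -> 1.0
```

Enumerating every joint assignment is exponential in the number of variables, which is exactly why the graph structure matters: inference algorithms such as BP exploit the locality of the factors instead of touching the full joint table.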
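The message-passing structure of BP mentioned above can be sketched on the simplest nontrivial case, a chain of discrete variables (the structure underlying an HMM). This is a hypothetical minimal example, not the article's implementation: each node receives a message summarizing evidence from its left and from its right, and multiplying the two incoming messages with the local potential yields the marginal. On chain (and more generally tree-structured) graphs, BP is exact, which the code checks against brute-force enumeration.

```python
import numpy as np

np.random.seed(0)
K, N = 2, 3  # states per variable, chain length
unary = [np.random.rand(K) + 0.1 for _ in range(N)]        # node potentials
pair = [np.random.rand(K, K) + 0.1 for _ in range(N - 1)]  # edge potentials

# Forward messages m_f[i]: evidence arriving at node i from the left.
m_f = [np.ones(K) for _ in range(N)]
for i in range(1, N):
    m_f[i] = pair[i - 1].T @ (unary[i - 1] * m_f[i - 1])

# Backward messages m_b[i]: evidence arriving at node i from the right.
m_b = [np.ones(K) for _ in range(N)]
for i in range(N - 2, -1, -1):
    m_b[i] = pair[i] @ (unary[i + 1] * m_b[i + 1])

# Beliefs: local potential times both incoming messages, normalized.
beliefs = []
for i in range(N):
    b = unary[i] * m_f[i] * m_b[i]
    beliefs.append(b / b.sum())

# Brute-force marginals for comparison.
joint = np.zeros((K,) * N)
for idx in np.ndindex(*joint.shape):
    w = 1.0
    for i in range(N):
        w *= unary[i][idx[i]]
    for i in range(N - 1):
        w *= pair[i][idx[i], idx[i + 1]]
    joint[idx] = w
joint /= joint.sum()
exact = [joint.sum(axis=tuple(j for j in range(N) if j != i)) for i in range(N)]
print(np.allclose(beliefs, exact))  # BP is exact on chains
```

Each message update costs O(K^2), so the whole sweep is linear in the chain length, versus the O(K^N) cost of the brute-force table; on graphs with cycles the same updates are iterated and yield only approximate ("loopy") beliefs.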
This note was uploaded on 09/27/2009 for the course CS 525 taught by Professor Rjyosy during the Winter '09 term at Central Mich..
