Distributed Estimation

Abstract

Consider a distributed decision problem in which a set of agents obtains observations, possibly at different random times, about the realization of a random vector. Each agent possesses the same prior probability distribution and objective function. Agents update their tentative decisions whenever they make a new observation or receive new information from another agent. Upon computing a new tentative decision, each agent transmits its latest estimate to a randomly chosen subset of all agents. Conditions for asymptotic convergence of the decision sequence made by each agent, and for asymptotic agreement among all agents, are derived. This report presents a brief summary of the results derived by Borkar and Varaiya [1] and by Tsitsiklis and Athans [5] on distributed decision problems. The theory is presented along with a new example.

1 Introduction

A typical distributed estimation network is one in which a set of geographically dispersed agents receives information about the state of nature X. Based on its observed signals and received messages, each agent produces an estimate of the state of nature. Each agent sends its current estimate of X to a random subset of agents. Upon reception of new information, agents update their respective estimates.

The aim of this report is to demonstrate a thorough understanding of the paper Asymptotic Agreement in Distributed Estimation by Borkar and Varaiya [1]. The report is neither a comprehensive summary nor a review of the paper. Rather, we choose to present alternative derivations for the two main results contained in the paper, and to substantiate the content with a new example. Specifically, we derive an alternative proof for the convergence of each agent's estimate by constructing a discrete-time system equivalent to the continuous-time system treated in the original paper. Once this is achieved, we rederive the results using discrete-time martingale theory.
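The martingale argument alluded to above can be sketched as follows (notation here is ours, chosen for illustration, and need not match the paper's). Let F^m_n denote the sigma-field generated by all observations and messages available to agent m after its n-th update, and define the estimate as the conditional expectation

    X^m_n = E[ X | F^m_n ].

Since the information available to agent m only grows, F^m_n is contained in F^m_{n+1}, and the tower property of conditional expectation gives

    E[ X^m_{n+1} | F^m_n ] = E[ E[ X | F^m_{n+1} ] | F^m_n ] = E[ X | F^m_n ] = X^m_n,

so (X^m_n) is a martingale with respect to the filtration (F^m_n). Because E|X| < infinity, the sequence is uniformly integrable, and Doob's martingale convergence theorem yields a limit X^m_infinity with X^m_n converging to X^m_infinity almost surely and in L^1. This is the convergence result that the discrete-time construction is designed to deliver.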
We also show alternative derivations for some of the lemmas leading to the second major achievement of the paper: conditions for asymptotic agreement among agents. Finally, we introduce an additional example. We note that some of the ideas contained herein were drawn from the companion paper Convergence and Asymptotic Agreement in Distributed Decision Problems by Tsitsiklis and Athans [5].

2 Problem Formulation

First, we study the continuous-time model used by Borkar and Varaiya [1]. Specifically, we review the distributed estimation problem at hand and the flow of information among agents, we introduce a convenient notation, and we state the basic assumptions of the mathematical framework.

2.1 Continuous-Time Model

Consider a probability space (Ω, F, P) and an F-measurable random variable X with E|X| < ∞. Let {1, 2, ..., M} represent the set of M agents. Agent m receives observation Z^m_j at random time r^m_j, j ∈ N. Similarly, agent m receives information Z^m_j at random time r^m_j, j ∈ N. Here, Z^m_j denotes the tentative decision sent to agent...
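The information flow described above can be illustrated with a small simulation. The sketch below is not the construction from [1]; it is a simplified discrete-time toy in which each agent starts from a noisy private estimate of a scalar state of nature X and, at each step, a randomly chosen agent transmits its current estimate to a random subset of the other agents, each of which averages the received value into its own estimate. All variable names are ours. Under this repeated random mixing, the estimates exhibit the asymptotic agreement the report is concerned with.

```python
import random

random.seed(0)

M = 5          # number of agents
X = 1.0        # realization of the state of nature

# Each agent's initial private estimate: X corrupted by Gaussian noise.
estimates = [X + random.gauss(0.0, 0.5) for _ in range(M)]

for step in range(2000):
    sender = random.randrange(M)
    # The sender transmits to a randomly chosen subset of the other agents.
    receivers = [a for a in range(M) if a != sender and random.random() < 0.5]
    for r in receivers:
        # Receiving agent updates by averaging its estimate with the message.
        estimates[r] = 0.5 * (estimates[r] + estimates[sender])

spread = max(estimates) - min(estimates)
print("final estimates:", [round(e, 4) for e in estimates])
print("disagreement:", spread)
```

After enough mixing steps the disagreement (the spread between the largest and smallest estimate) becomes negligible, even though the common limit generally differs from X itself: agreement, not correctness, is what the communication scheme alone guarantees.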
This note was uploaded on 03/30/2010 for the course ECEN 689 taught by Professor Enjeti during the Fall '07 term at Texas A&M.