
F. Zhao and L. Guibas (Eds.): IPSN 2003, LNCS 2634, pp. 529–544, 2003. © Springer-Verlag Berlin Heidelberg 2003

Detection, Classification, and Collaborative Tracking of Multiple Targets Using Video Sensors

P.V. Pahalawatta, D. Depalov, T.N. Pappas, and A.K. Katsaggelos
ECE Dept., Northwestern University, 2145 Sheridan Rd, Evanston, IL 60208
{pesh, depalov, pappas, aggk}@ece.northwestern.edu

Abstract. The study of collaborative, distributed, real-time sensor networks is an emerging research area. Such networks are expected to play an essential role in applications such as surveillance and tracking of vehicles on the battlefield of the future. This paper proposes an approach to detect and classify multiple targets, and to collaboratively track their position and velocity, using video cameras. Arbitrarily placed cameras collaboratively perform self-calibration and provide complete battlefield coverage. If some of the cameras are equipped with a GPS system, they can metrically reconstruct the scene and determine the absolute coordinates of the tracked targets. A background subtraction scheme combined with a Markov random field based approach is used to detect a target even when it becomes stationary. Targets are continuously tracked using a distributed Kalman filter approach. As the targets move, coverage is handed over to the "best" neighboring cluster of sensors. This paper demonstrates the potential for the development of distributed optical sensor networks and addresses problems and tradeoffs associated with this particular implementation.

1 Introduction

In the past few decades, we have seen many advances in wireless communication techniques and in microsensor technology. These advances, combined with growing interest in both the military and civilian domains in using sensor networks for remote monitoring applications, have led to the concept of a wireless sensor network.
A wireless sensor network can consist of a densely distributed set of sensors of various modalities (e.g., acoustic, seismic, infrared, imaging) that gather data from the physical environment and then process the data collaboratively to obtain a coherent, high-level description of the current state of the system. Due to their low production costs and low energy consumption, acoustic and seismic sensors are among the most commonly studied types of wireless microsensors for battlefield surveillance. However, these sensors have some weaknesses. Since acoustic sensors depend on the acoustic signature of the target, they cannot detect a vehicle once it becomes stationary with its engine off. They can also be distracted by acoustic changes caused by gearshifts, as well as by accelerations and decelerations of a vehicle. In addition, these sensors can be affected by acoustic noise caused by wind. Similar problems exist with seismic sensors.
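The abstract states that targets are continuously tracked with a distributed Kalman filter over position and velocity. As a rough, hypothetical sketch of the underlying idea (a single-sensor, constant-velocity filter over 2-D position measurements, not the authors' distributed implementation, and with all parameter values chosen for illustration only):

```python
import numpy as np

def make_cv_kalman(dt=1.0, q=0.01, r=0.5):
    """Build matrices for a constant-velocity Kalman filter.

    State is [x, y, vx, vy]; only the 2-D position is observed
    (e.g., a detected target centroid in image or world coordinates).
    q and r are illustrative process/measurement noise levels.
    """
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1,  0],
                  [0, 0, 0,  1]], dtype=float)  # motion model
    H = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0]], dtype=float)   # observe position only
    Q = q * np.eye(4)  # process noise (model uncertainty)
    R = r * np.eye(2)  # measurement noise (detector jitter)
    return F, H, Q, R

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle given a position measurement z."""
    # Predict
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update
    S = H @ P_pred @ H.T + R            # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S) # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(4) - K @ H) @ P_pred
    return x_new, P_new
```

In the paper's setting, each sensor cluster would run such a filter on its local detections and hand the current state estimate over to the "best" neighboring cluster as the target moves; the handover and fusion logic is beyond this sketch.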
We propose the use of multiple video sensors to enhance the capabilities of a
