
…by first constructing a bounding box around the head of the subject in each frame. This bounding box is used to keep track of the head over successive frames of each sequence. We fill this bounding box with one color and assign a different color to the rest of the background. Hence we segment the entire scene into two regions, namely, the head of the person (black) and the background (white). This was done using the COREL PHOTOHOUSE program. We compute the centroid of the head in each frame as the average of the positions of all the black pixels. Figure 1 shows the steps in the detection and segmentation of the head. In Figure 1(a) we have a grayscale image of the subject. In Figure 1(b), a bounding box is placed over the head, and in Figure 1(c), the head is segmented from the rest of the background by assigning it a different color.

Figure 1: (a) grayscale intensity image; (b) bounding box placed over the subject's head; (c) segmenting the head from the rest of the background.

Obviously we would like to incorporate an approach that can automatically detect the head and segment it from the rest of the scene. We are currently exploring the possibility of generalizing an algorithm based on Saad Ahmed Shiroey's thesis on human face segmentation and identification [13]. In this approach, pre-processing is done on edge-detected images of the scene to find the labeled edges that, when combined, are fitted to an ellipse in a least-squares sense. The head is modeled as the largest ellipse in the scene. However, this approach is geared towards human face identification. In the images on which the algorithm has been tested, the face occupies the largest portion of the scene. This is not true in our case, since our frames include the entire body of the person. Edge detection of our scenes produces far more labeled segments than the algorithm was originally intended for, making the ellipse fitting computationally very expensive.
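The centroid step above is straightforward to sketch in code. The following is a minimal illustration, not the paper's implementation: it assumes a frame that has already been segmented so head pixels are black and background pixels are white (as produced by the manual COREL PHOTOHOUSE step), and the `black_threshold` name and value are illustrative assumptions.

```python
import numpy as np

def head_centroid(frame, black_threshold=16):
    """Return the (row, col) centroid of the head region.

    `frame` is a segmented grayscale image in which the head is
    black and the background is white. The centroid is the average
    position of all black pixels, as described in the text.
    """
    rows, cols = np.nonzero(frame <= black_threshold)
    if rows.size == 0:
        return None  # no head pixels detected in this frame
    return rows.mean(), cols.mean()

# Example: a 5x5 white frame with a 2x2 black "head" block
frame = np.full((5, 5), 255, dtype=np.uint8)
frame[1:3, 2:4] = 0
centroid = head_centroid(frame)  # centroid of the 2x2 black block
```

Tracking the centroid across successive frames then yields the head trajectory used later as the feature vector.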
We are working to develop an algorithm that can robustly detect the head for our system as well.

4 System Implementation

A static CCD camera with a wide field of view, operating at 2 frames per second, was used to obtain sequences of monocular grayscale images of people performing the different actions. The frames were taken in the frontal view and the lateral view. In order to train the system, 38 sequences were taken of a person walking, standing, sitting, bending down, getting up, falling, squatting, rising, and bending sideways, in both the frontal and lateral views. People with diverse physical appearances were used to model the actions. Figure 2 describes the processing loop and the main functional units of our system. The system detects and tracks the subject in the scene and extracts a feature vector describing the motion and direction of the subject's head. The feature vector constitutes the input module, which is used for building a statistical model. Based on the input sequence, the model is then matched against stored models of different actions. Lastly, the action is classified as the one whose…
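The matching step can be illustrated with a small sketch. The excerpt does not specify the statistical model, so the code below substitutes a simple minimum mean-squared-distance match between the input feature sequence and stored reference sequences; the action names, sequence values, and the `classify_action` helper are all hypothetical placeholders for the paper's actual matching procedure.

```python
import numpy as np

def classify_action(feature_seq, stored_models):
    """Return the name of the stored action whose model best
    matches the input feature sequence.

    `stored_models` maps action names to reference feature
    sequences of the same length as `feature_seq`. Matching is a
    placeholder minimum mean-squared distance, standing in for the
    statistical matching described in the text.
    """
    seq = np.asarray(feature_seq, dtype=float)
    best_action, best_score = None, float("inf")
    for action, model in stored_models.items():
        score = np.mean((seq - np.asarray(model, dtype=float)) ** 2)
        if score < best_score:
            best_action, best_score = action, score
    return best_action

# Hypothetical head-height trajectories for two actions
models = {
    "walking": [0.0, 0.1, 0.2, 0.3],
    "falling": [0.0, -0.5, -1.2, -2.0],
}
print(classify_action([0.0, -0.4, -1.0, -1.9], models))  # prints "falling"
```

A rapidly dropping head trajectory matches the stored "falling" model far more closely than "walking", which is the intuition behind classifying actions from head motion alone.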