After computing the centroids of the head in each frame, the difference in the absolute coordinates in successive frames was found. dx_i and dy_i are the differences in the centroids of the head over successive frames:

    dx_i = x_{i+1} - x_i                                        (1)

    dy_i = y_{i+1} - y_i                                        (2)

The feature vectors in our case are the differences in the centroids of the head over successive frames:

    X = (dx_1, dx_2, ..., dx_n)                                 (3)

    Y = (dy_1, dy_2, ..., dy_n)                                 (4)

where X and Y are the feature vectors for the differences in the x and y coordinates of the head, respectively. Since there are n + 1 frames in each sequence, each feature vector is n elements long; thus each feature vector is an n-dimensional vector. Next, the mean and covariance matrix of the feature vector were found. This was repeated for all the monocular grayscale sequences.
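As a concrete sketch of this step, a small NumPy helper can build X and Y from the per-frame centroids. The paper gives no code; `head_motion_features` is an invented name and the array layout is an assumption:

```python
import numpy as np

def head_motion_features(centroids):
    """Compute the difference-of-centroid feature vectors X and Y.

    centroids: sequence of (x, y) head centroids, one per frame,
    shape (n + 1, 2).  Returns X and Y, each of length n, per
    equations (1)-(4).
    """
    c = np.asarray(centroids, dtype=float)
    X = np.diff(c[:, 0])   # dx_i = x_{i+1} - x_i
    Y = np.diff(c[:, 1])   # dy_i = y_{i+1} - y_i
    return X, Y
```

With n + 1 centroids the helper returns two n-element vectors, matching the dimensionality stated above.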
2.2 Computing probability density functions

We assume independence of the feature vectors X and Y and a multivariate normal distribution for all sequences. From the independence assumption we have:

    p(X, Y) = p(X) p(Y)                                         (5)

    p(X) = 1 / ((2π)^{n/2} |Σ_X|^{1/2}) · exp( -(1/2) (X - μ_X)^t Σ_X^{-1} (X - μ_X) )    (6)

    p(Y) = 1 / ((2π)^{n/2} |Σ_Y|^{1/2}) · exp( -(1/2) (Y - μ_Y)^t Σ_Y^{-1} (Y - μ_Y) )    (7)

where X is the n-component feature vector in the x direction, Y is the n-component feature vector in the y direction, μ_X and μ_Y are the mean vectors of the normal distributions, and Σ_X and Σ_Y are the n-by-n covariance matrices. Unbiased estimates for Σ_X and Σ_Y are supplied by the sample covariance matrices [8]:

    C_X = (1/(n-1)) · sum_{i=1}^{n} (X_i - μ_X)(X_i - μ_X)^t    (8)

    C_Y = (1/(n-1)) · sum_{i=1}^{n} (Y_i - μ_Y)(Y_i - μ_Y)^t    (9)
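The estimation of the class densities can be sketched as follows. This is an assumed NumPy implementation (the paper provides no code): `fit_class_density` and `log_density` are invented names, and the logarithm of the density is used in place of the raw exponential of equation (6) purely for numerical stability:

```python
import numpy as np

def fit_class_density(feature_vectors):
    """Estimate the mean vector and unbiased sample covariance matrix
    (equations (8)-(9)) from the n-dimensional feature vectors of one
    class's training sequences (one row per sequence)."""
    F = np.asarray(feature_vectors, dtype=float)
    mu = F.mean(axis=0)
    # Sum of outer products of deviations, divided by (count - 1).
    sigma = (F - mu).T @ (F - mu) / (len(F) - 1)
    return mu, sigma

def log_density(x, mu, sigma):
    """Log of the multivariate normal density of equation (6)."""
    n = len(mu)
    d = x - mu
    _, logdet = np.linalg.slogdet(sigma)
    return -0.5 * (n * np.log(2 * np.pi) + logdet
                   + d @ np.linalg.solve(sigma, d))
```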
2.3 Bayesian formulation of the approach

Using the feature vectors obtained from the test sequence, a posteriori probabilities are calculated using each of the training sequences. This is done using Bayes' rule, which is a fundamental formula in decision theory. In mathematical form it is given as

    P(ω_i | X, Y) = P(ω_i) p(X, Y | ω_i) / p(X, Y)              (10)

where X and Y are the extracted feature vectors and p(X, Y) = sum_{i=1}^{m} p(X, Y | ω_i) P(ω_i). P(ω_i | X, Y) is the a posteriori probability of observing the class ω_i given the feature vectors X and Y, P(ω_i) is the a priori probability of observing the class ω_i, p(X, Y | ω_i) is the conditional density, and m refers to the number of classes.

2.4 Recognition of input sequence

We assume in the recognition of our input sequence that each sequence is uniquely described by the value of its a posteriori probability. For our problem, we assume all a priori probabilities (the probability of any of the actions occurring) to be equal and thus find density functions for each of the classes, where each class is an action. Thus, twenty such densities were found, corresponding to the ten different actions in the two orientations. Having obtained these twenty values for each of the classes, the most likely action is the class with the highest value:

    P = max(P_1, P_2, P_3, ..., P_m)                            (11)

where P is the probability of the most likely class and P_1, P_2, P_3, ..., P_m are the probabilities of the m different actions.
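Under the equal-priors assumption, p(X, Y) is a common denominator for every class, so maximizing the a posteriori probability of equation (10) reduces to maximizing the class-conditional density p(X | ω_i) p(Y | ω_i). A minimal sketch of this decision rule follows (hypothetical names, not from the paper; log densities are summed rather than densities multiplied, again for numerical stability):

```python
import numpy as np

def log_gauss(x, mu, sigma):
    """Log multivariate normal density (equations (6)-(7))."""
    d = x - mu
    _, logdet = np.linalg.slogdet(sigma)
    return -0.5 * (len(mu) * np.log(2 * np.pi) + logdet
                   + d @ np.linalg.solve(sigma, d))

def most_likely_action(x_feat, y_feat, class_models):
    """Equations (10)-(11): with equal priors, pick the class whose
    conditional density p(X|w_i) p(Y|w_i) is largest.  Independence
    (equation (5)) lets the x and y log densities simply be added.

    class_models: one (mu_x, sigma_x, mu_y, sigma_y) tuple per class.
    Returns the index of the winning class.
    """
    scores = [log_gauss(x_feat, mux, sx) + log_gauss(y_feat, muy, sy)
              for mux, sx, muy, sy in class_models]
    return int(np.argmax(scores))
```

For the paper's setup, `class_models` would hold twenty entries: the ten actions in each of the two orientations.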
The frontal and lateral views...