Arthur Kunkle
ECE 5526
HW # 3
Problem 1
The following MATLAB commands were run to generate N = 10,000 samples of the random variable X1 for each of the three
covariance matrices in the problem description:
>> N = 10000;
>> mu = [730 1090];
>> sigma_A = [8000 0; 0 8000];
>> sigma_B = [8000 0; 0 18500];
>> sigma_C = [8000 8400; 8400 18500];
>> % chol() returns R with R'*R = sigma, so randn(N,2)*R has covariance sigma.
>> % An element-wise sqrt() would be incorrect for the non-diagonal sigma_C.
>> X1A = randn(N,2)*chol(sigma_A) + repmat(mu,N,1);
>> X1B = randn(N,2)*chol(sigma_B) + repmat(mu,N,1);
>> X1C = randn(N,2)*chol(sigma_C) + repmat(mu,N,1);
>> gausview(X1A,mu,sigma_A,'Sample X1_A');
>> gausview(X1B,mu,sigma_B,'Sample X1_B');
>> gausview(X1C,mu,sigma_C,'Sample X1_C');
Output 2-D PDFs from "gausview" (Figures 1-3):
The symmetry of the above PDFs is a result of the values in each random process' covariance matrix. The diagonal
entries of the 2x2 matrix are the variances of the independent X and Y components. When these are equal and the
off-diagonal entries are zero, sample values are equally likely in every direction, varying in X and Y equally. The
second PDF increases the Y variance, which produces a wider spread of data values in the Y direction; however, the
PDF contours are still symmetrical about both axes. Finally, the process with a fully populated, nonzero covariance
matrix exhibits variation that occurs along both directions jointly. When the two off-diagonal entries are equal,
the variation appears to lie along the XY diagonal, as shown in Figure 3.
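The sampling step above can be sketched in Python/NumPy (for illustration only; the assignment itself used MATLAB). A Cholesky factor of the covariance matrix maps unit-variance noise onto samples with the requested cross-covariance, and the empirical covariance of the result can be checked against sigma_C:

```python
import numpy as np

# Sketch of the correlated-sample generation, mirroring the MATLAB commands.
rng = np.random.default_rng(0)
N = 10000
mu = np.array([730.0, 1090.0])
sigma_C = np.array([[8000.0, 8400.0],
                    [8400.0, 18500.0]])

# L is lower-triangular with L @ L.T == sigma_C, so the rows of X1C
# carry the requested covariance (an element-wise sqrt would not).
L = np.linalg.cholesky(sigma_C)
X1C = rng.standard_normal((N, 2)) @ L.T + mu

# For large N the empirical covariance should be close to sigma_C.
print(np.cov(X1C, rowvar=False).round(0))
```

With N = 10000 the empirical covariance typically matches sigma_C to within a few percent, consistent with the estimates tabulated in Problem 2.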
Problem 2
The values of the third random process from Problem 1 (X1_C) were used to obtain the following estimates; the first N
values of the sample data were used in each estimate calculation.
The original parameters of the process were:
Mean:        [730 1090]
Covariance:  [8000 8400 ; 8400 18500]
Points (N)   Est. Mean         Est. Covariance               Mean Distance   Covar. Matrix Norm
10000        [729.1 1087.0]    [7724 8113 ; 8113 18259]      3.1336          545.8393
1000         [730.6 1085.7]    [7282 7450 ; 7450 17229]      4.3337          1983.7
100          [729.8 1077.6]    [7294 8236 ; 8236 17103]      12.4494         1434.2
10           [725.1 1051.9]    [8241 13384 ; 13384 26446]    38.4163         10392
The most obvious and important trend is that both distance measures tend to grow as N decreases. A greater amount of
sample data leads to estimated mean and covariance values that are much closer to the true parameters input to the
process.
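The estimates in the table can be sketched in Python/NumPy (again for illustration; the original work was in MATLAB, and the exact numbers differ because the random draws differ). The mean distance is the Euclidean distance between estimated and true means, and the covariance error is measured with the spectral norm, matching MATLAB's default norm():

```python
import numpy as np

# Sketch of the Problem 2 estimates from the first N samples of X1_C.
rng = np.random.default_rng(0)
mu = np.array([730.0, 1090.0])
sigma = np.array([[8000.0, 8400.0],
                  [8400.0, 18500.0]])
L = np.linalg.cholesky(sigma)
X = rng.standard_normal((10000, 2)) @ L.T + mu

for N in (10000, 1000, 100, 10):
    head = X[:N]                                  # first N samples only
    mu_hat = head.mean(axis=0)
    sigma_hat = np.cov(head, rowvar=False)
    mean_dist = np.linalg.norm(mu_hat - mu)       # Euclidean distance
    cov_norm = np.linalg.norm(sigma_hat - sigma, 2)  # spectral norm
    print(N, mu_hat.round(1), round(mean_dist, 2), round(cov_norm, 1))
```

Running this reproduces the qualitative trend above: the error measures are small at N = 10000 and grow sharply by N = 10.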
Problem 3
The following are the computed joint log-likelihoods (summed over the X_3 sample) under each of the four models:

Model                  N_1           N_2           N_3           N_4
Joint log-likelihood   1.2492e+005   1.2248e+005   1.1923e+005   8.5911e+005
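The joint log-likelihood computation can be sketched as follows (a Python/NumPy sketch; the mu and sigma used here are the X1_C parameters from Problem 1 as stand-ins, not the actual N_1-N_4 model parameters, which are not given in this excerpt):

```python
import numpy as np

def joint_loglik(X, mu, sigma):
    """Sum of log N(x_i; mu, sigma) over all rows x_i of X."""
    d = X.shape[1]
    diff = X - mu
    inv = np.linalg.inv(sigma)
    _, logdet = np.linalg.slogdet(sigma)
    # Per-sample squared Mahalanobis distance diff_i' * inv * diff_i.
    quad = np.einsum('ij,jk,ik->i', diff, inv, diff)
    return np.sum(-0.5 * (d * np.log(2 * np.pi) + logdet + quad))

# Illustrative data drawn from the Problem 1 parameters.
rng = np.random.default_rng(0)
mu = np.array([730.0, 1090.0])
sigma = np.array([[8000.0, 8400.0],
                  [8400.0, 18500.0]])
L = np.linalg.cholesky(sigma)
X = rng.standard_normal((1000, 2)) @ L.T + mu

print(joint_loglik(X, mu, sigma))
```

As a sanity check, the joint log-likelihood under the true parameters should exceed that under a badly mismatched mean, which is the same comparison the table above makes across the four candidate models.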
This note was uploaded on 02/11/2012 for the course ECE 5526 taught by Professor Staff during the Summer '09 term at FIT.