
# ECE 5526 HW 3


Arthur Kunkle, ECE 5526 HW 3

## Problem 1

The following commands were run to generate the 10,000 values of the random variable X1 with each of the three covariance matrices in the problem description. (`sqrtm` is the matrix square root, which is required for the transformed samples to have the intended covariance; an element-wise `sqrt` would only be correct for a diagonal matrix. The mean vector is the one given in the problem description.)

```matlab
>> N = 10000;
>> mu = [730 1090];
>> sigma_A = [8000 0; 0 8000];
>> sigma_B = [8000 0; 0 18500];
>> sigma_C = [8000 8400; 8400 18500];
>> X1A = randn(N,2)*sqrtm(sigma_A) + repmat(mu,N,1);
>> X1B = randn(N,2)*sqrtm(sigma_B) + repmat(mu,N,1);
>> X1C = randn(N,2)*sqrtm(sigma_C) + repmat(mu,N,1);
>> gausview(X1A,mu,sigma_A,'Sample X1_A');
>> gausview(X1B,mu,sigma_B,'Sample X1_B');
>> gausview(X1C,mu,sigma_C,'Sample X1_C');
```

(Figures 1-3: 2-D PDF contours produced by `gausview` for X1_A, X1_B, and X1_C.)

The symmetry of the above PDFs follows from the entries of each random process's covariance matrix. The diagonal entries of the 2x2 matrix are the variances of the independent X and Y components. When these are equal and the off-diagonal entries are zero, samples are equally likely in all directions, varying in X and Y equally. The second process increases the Y variance, which spreads the data more widely in the Y direction, but the PDF contours remain symmetric about both axes. Finally, the process with a fully populated, non-zero covariance matrix exhibits variation along both directions jointly; when the two off-diagonal entries are equal and positive, the variation lies along the X-Y diagonal, as shown in Figure 3.
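As a cross-check of the sampling transform (a NumPy sketch, not part of the original MATLAB session), multiplying standard-normal pairs by the symmetric square root of Σ should reproduce Σ in the sample covariance:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10000
mu = np.array([730.0, 1090.0])
sigma_C = np.array([[8000.0, 8400.0],
                    [8400.0, 18500.0]])

# Symmetric matrix square root via eigendecomposition: A = V diag(sqrt(w)) V'
w, V = np.linalg.eigh(sigma_C)
A = V @ np.diag(np.sqrt(w)) @ V.T

# X = Z A + mu: since A is symmetric with A A = sigma_C, cov(X) = sigma_C
Z = rng.standard_normal((N, 2))
X = Z @ A + mu

est_mean = X.mean(axis=0)
est_cov = np.cov(X, rowvar=False)
print(est_mean)
print(est_cov)
```

With 10,000 samples the empirical mean and covariance land close to the true parameters, which is the behavior examined quantitatively in Problem 2.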
## Problem 2

The values of the third random process from Problem 1 were used to obtain the following estimates; the first N values of the sample data were used in each estimate calculation. The true parameters of the process were:

Mean: [730 1090]
Covariance: [8000 8400 ; 8400 18500]

| Points (N) | Est. Mean | Est. Covariance | Mean Distance | Covar. matrix norm |
|---|---|---|---|---|
| 10000 | [729.1 1087.0] | [7724 8113 ; 8113 18259] | 3.1336 | 545.8393 |
| 1000 | [730.6 1085.7] | [7282 7450 ; 7450 17229] | 4.3337 | 1983.7 |
| 100 | [729.8 1077.6] | [7294 8236 ; 8236 17103] | 12.4494 | 1434.2 |
| 10 | [725.1 1051.9] | [8241 13384 ; 13384 26446] | 38.4163 | 10392 |

The most obvious and important trend is that as N decreases, both error measures tend to increase. A larger amount of sample data yields estimates of the mean and covariance that lie much closer to the true parameters of the process.
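The error measures in the table can be reproduced along these lines (a NumPy sketch; the exact definitions used above are not stated, so this assumes the Euclidean distance between mean vectors and the spectral norm of the covariance difference, and exact values will differ with the random draw):

```python
import numpy as np

mu = np.array([730.0, 1090.0])
sigma = np.array([[8000.0, 8400.0],
                  [8400.0, 18500.0]])

# Draw 10,000 samples with covariance sigma via its symmetric square root
w, V = np.linalg.eigh(sigma)
A = V @ np.diag(np.sqrt(w)) @ V.T
rng = np.random.default_rng(0)
X = rng.standard_normal((10000, 2)) @ A + mu

results = {}
for n in (10000, 1000, 100, 10):
    est_mu = X[:n].mean(axis=0)          # estimate from the first n samples
    est_cov = np.cov(X[:n], rowvar=False)
    mean_dist = np.linalg.norm(est_mu - mu)        # Euclidean distance
    cov_norm = np.linalg.norm(est_cov - sigma, 2)  # spectral (2-) norm
    results[n] = (mean_dist, cov_norm)
    print(n, mean_dist, cov_norm)
```

As in the table, the errors are small for N = 10000 and grow (noisily, since each row is a single random draw) as N shrinks.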

## Problem 3

The following are the computed joint log-likelihoods of X_3 under each of the N models:

| | N_1 | N_2 | N_3 | N_4 |
|---|---|---|---|---|
| Log-likelihood of X_3 | -1.2492e+005 | -1.2248e+005 | -1.1923e+005 | -8.5911e+005 |

(Figures: `gausview` outputs comparing the X_3 data against each of the N models.)

These results show that N_4 does the worst job of modeling X_3: its log-likelihood is far below the others, due to the drastic difference between its mean and that of X_3.
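The joint log-likelihood of a sample set under a 2-D Gaussian model can be computed as below (a NumPy sketch; the actual N_i model parameters are not reproduced here, so the check uses hypothetical values to illustrate that a model with a badly mismatched mean scores much lower, as N_4 does above):

```python
import numpy as np

def gaussian_loglik(X, mu, sigma):
    """Joint log-likelihood of the rows of X under N(mu, sigma)."""
    d = X.shape[1]
    diff = X - mu
    inv = np.linalg.inv(sigma)
    _, logdet = np.linalg.slogdet(sigma)
    # Quadratic form (x-mu)' inv(sigma) (x-mu) for every row at once
    quad = np.einsum('ij,jk,ik->i', diff, inv, diff)
    return -0.5 * np.sum(quad + logdet + d * np.log(2.0 * np.pi))

# Hypothetical check: data drawn from one model scores higher under that
# model than under an otherwise identical model with a shifted mean
rng = np.random.default_rng(1)
mu = np.array([730.0, 1090.0])
sigma = np.array([[8000.0, 0.0], [0.0, 8000.0]])
X = rng.multivariate_normal(mu, sigma, size=1000)

ll_matched = gaussian_loglik(X, mu, sigma)
ll_shifted = gaussian_loglik(X, mu + 500.0, sigma)
print(ll_matched, ll_shifted)
```

The large gap between the matched and mean-shifted scores mirrors why N_4's log-likelihood is so much lower than the other three.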
