Name: —————————————

CS573 / STAT598M Midterm: Spring 2009

This is a closed-book, closed-notes exam. Non-programmable calculators are allowed for probability calculations. There are 12 pages including the cover page. The total number of points for the exam is 60. Note the point value of each question and allocate your time accordingly. Read each question carefully and show your work.

Question | 1 | 2 | 3 | 4 | 5 | 6 | 7 | Total
Score    |   |   |   |   |   |   |   |
1 Data mining components (8 pts)

Read the excerpts from "Mining Citizen Science Data to Predict Abundance of Wild Bird Species" by R. Caruana, M. Elhawary, A. Munson, M. Riedewald, D. Sorokina, D. Fink, W. Hochachka, S. Kelling, KDD, 2006 on pages 11-12.

1. Describe the data mining task.
2. Describe the data representation.
3. Describe the knowledge representation.
4. Describe the learning algorithm (both search and evaluation).
2 Short Questions (12 pts, 2 pts each)

The following short questions should be answered with at most two sentences and/or a picture. For the true/false questions, answer true or false. If you answer true, provide a short justification; if false, explain why or provide a small counterexample.

1. True or False: Consider a continuous probability distribution with density f(·) that is nonzero everywhere. The probability of a value x is equal to f(x).

2. True or False: MAP estimates are less prone to overfitting than MLE.

3. Consider a classification problem with two classes and n binary attributes. How many parameters would you need to learn with a Naive Bayes classifier? How many parameters would you need to learn a model of the full joint distribution?

4. In n-fold cross-validation each data point belongs to exactly one test fold, so the test folds are independent. Given that the data in test folds i and j are independent, are e_i and e_j, the error estimates on test folds i and j, also independent?
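The counting in question 3 can be sanity-checked with a short script. The closed forms below (2n + 1 free parameters for Naive Bayes with a binary class and n binary attributes; 2^(n+1) − 1 for the full joint) are the standard ones, offered here as a study-aid sketch rather than as the worked answer the exam asks you to show.

```python
def naive_bayes_params(n):
    # 1 free parameter for P(class), plus P(x_i = 1 | class) for each
    # of the n binary attributes under each of the 2 classes.
    return 1 + 2 * n

def full_joint_params(n):
    # The joint over (class, x_1, ..., x_n) has 2^(n+1) outcomes;
    # probabilities sum to 1, leaving 2^(n+1) - 1 free parameters.
    return 2 ** (n + 1) - 1

for n in (2, 5, 10):
    print(n, naive_bayes_params(n), full_joint_params(n))
# For n = 10: Naive Bayes needs 21 parameters, the full joint 2047.
```

The exponential gap is the point of the question: Naive Bayes trades the conditional-independence assumption for a parameter count linear in n.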
5. Give one similarity and one difference between feature selection and PCA.

6. You are a reviewer for the International Conference on Automatically Mining Nuggets from Data, and you read papers with the following experimental setups. Would you accept or reject each paper? Provide a one-sentence justification. (This conference has short reviews.)

   accept/reject “My algorithm is better than yours. Look at the training error rates!”
   accept/reject “My algorithm is better than yours. Look at the test error rates! (Footnote: reported results for λ = 1.789489345672120002.)”
   accept/reject “My algorithm is better than yours. Look at the test error rates! (Footnote: reported results for the best value of λ.)”
   accept/reject “My algorithm is better than yours. Look at the test error rates! (Footnote: reported results for the best value of λ, chosen with 10-fold cross-validation.)”
3 Decision trees (12 pts)

The following data set will be used to learn a decision tree for predicting whether students are lazy (L) or diligent (D) based on their weight (Normal or Underweight), their eye color (Amber or
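The exam's data table is cut off in this excerpt, so the records below are invented for illustration. Splitting criteria like information gain, which standard decision-tree learners use to pick the root attribute, can be computed with a small helper:

```python
from collections import Counter
from math import log2

def entropy(labels):
    # Shannon entropy of a label list, in bits.
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def info_gain(rows, labels, attr):
    # Expected reduction in label entropy from splitting on `attr`.
    base = entropy(labels)
    n = len(labels)
    split = {}
    for row, lab in zip(rows, labels):
        split.setdefault(row[attr], []).append(lab)
    return base - sum(len(s) / n * entropy(s) for s in split.values())

# Made-up records (the real table is not in this preview):
rows = [{"weight": "N"}, {"weight": "N"}, {"weight": "U"}, {"weight": "U"}]
labels = ["L", "L", "D", "D"]
print(info_gain(rows, labels, "weight"))  # perfect split -> 1.0
```

Whichever attribute yields the highest gain becomes the root; the same computation recurses on each branch.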