
32 Pages

### chap7

Course: STATS 315a, Spring 2012
School: Stanford

Word Count: 2578


#### Document Excerpt

#### ESL Chapter 7: Model Selection (Trevor Hastie and Rob Tibshirani)

**Topics**

- Bias-variance trade-off
- Optimism of the training error
- Estimates of in-sample prediction error
- BIC
- VC dimension
- Cross-validation (chapter 3), bootstrap

**Definitions**

Loss functions:

$$L(Y, \hat f(X)) = (Y - \hat f(X))^2 \quad \text{(squared error)},$$
$$L(G, \hat G(X)) = I(G \ne \hat G(X)) \quad \text{(0-1 loss)},$$
$$L(G, \hat p(X)) = -2\sum_{k=1}^K I(G = k)\log \hat p_k(X) = -2\log \hat p_G(X) \quad \text{(log-likelihood)}.$$

Training error (over the training set $T$):
$$\overline{\mathrm{err}} = \frac{1}{N}\sum_{i=1}^N L(y_i, \hat f(x_i)).$$

Test error (generalization error):
$$\mathrm{Err}_T = E[L(Y, \hat f(X)) \mid T]$$
(similarly $E[L(G, \hat G(X)) \mid T]$ and $E[L(G, \hat p(X)) \mid T]$). This is the expected loss over random realizations of new test data, with the training set $T$ held fixed.

Expected test error (expected prediction error, expected generalization error):
$$\mathrm{Err} = E[L(Y, \hat f(X))] = E[\mathrm{Err}_T].$$

$\mathrm{Err}_T$ is the error we can expect if we use the function $\hat f$, trained on our particular training set $T$, to make our predictions; $\mathrm{Err}$ averages in addition over all training sets. Since $\mathrm{Err}_T$ depends on the particular nuances of our training set, we can make more general statements about $\mathrm{Err}$ than we can about $\mathrm{Err}_T$.

*Figure: behavior of test-sample and training-sample error as the model complexity (df) is varied. The light blue curves show the training error $\overline{\mathrm{err}}$, while the light red curves show the conditional test error $\mathrm{Err}_T$ for 100 training sets of size 50 each, as the model complexity is increased. The solid curves show the expected test error $\mathrm{Err}$ and the expected training error $E[\overline{\mathrm{err}}]$.*
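The gap between training and test error is easy to see in a tiny simulation (my own sketch, not from the slides; the 1-nearest-neighbor model, noise level, and sample sizes are arbitrary choices): a 1-NN regressor interpolates its training set, so $\overline{\mathrm{err}}$ is zero while $\mathrm{Err}_T$ is not.

```python
import random

random.seed(0)
f = lambda x: 2.0 * x            # true regression function
sigma = 0.5                      # noise standard deviation

def sample(n):
    return [(x, f(x) + random.gauss(0, sigma)) for x in (random.random() for _ in range(n))]

def one_nn(train, x):
    # 1-NN prediction: the response of the closest training point
    return min(train, key=lambda p: abs(p[0] - x))[1]

train, test = sample(50), sample(1000)
err_bar = sum((y - one_nn(train, x)) ** 2 for x, y in train) / len(train)
err_T = sum((y - one_nn(train, x)) ** 2 for x, y in test) / len(test)

print(err_bar)           # 0.0: each training point is its own nearest neighbor
print(err_T > err_bar)   # True: the training error is optimistic
```

The same optimism, quantified rather than merely observed, is what the op and $C_p$ machinery later in the chapter estimates.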
**Bias-variance decomposition**

Assume $Y = f(X) + \varepsilon$ with $E(\varepsilon) = 0$ and $\mathrm{Var}(\varepsilon) = \sigma_\varepsilon^2$. Then
$$\begin{aligned}
\mathrm{Err}(x_0) &= E\big[(Y - \hat f(x_0))^2 \mid X = x_0\big]\\
&= \sigma_\varepsilon^2 + \big[E\hat f(x_0) - f(x_0)\big]^2 + E\big[\hat f(x_0) - E\hat f(x_0)\big]^2\\
&= \sigma_\varepsilon^2 + \mathrm{Bias}^2\big(\hat f(x_0)\big) + \mathrm{Var}\big(\hat f(x_0)\big)\\
&= \text{Irreducible Error} + \text{Bias}^2 + \text{Variance}.
\end{aligned}$$

In the above, we need to decide whether the $x_i$ in the sample are random or assumed fixed. We assume fixed for what follows (recall homework 1).

For $k$-nearest neighbors:
$$\mathrm{Err}(x_0) = E\big[(Y - \hat f_k(x_0))^2 \mid X = x_0\big] = \sigma_\varepsilon^2 + \Big[f(x_0) - \frac{1}{k}\sum_{\ell=1}^k f(x_{(\ell)})\Big]^2 + \frac{\sigma_\varepsilon^2}{k}.$$

For linear regression, $\hat f_p(x_0) = x_0^T(\mathbf X^T\mathbf X)^{-1}\mathbf X^T\mathbf y = h(x_0)^T\mathbf y$, and
$$\mathrm{Err}(x_0) = E\big[(Y - \hat f_p(x_0))^2 \mid X = x_0\big] = \sigma_\varepsilon^2 + \big[f(x_0) - E\hat f_p(x_0)\big]^2 + \|h(x_0)\|^2\,\sigma_\varepsilon^2,$$
so that, averaged over the sample points,
$$\frac{1}{N}\sum_{i=1}^N \mathrm{Err}(x_i) = \sigma_\varepsilon^2 + \frac{1}{N}\sum_{i=1}^N \big[f(x_i) - E\hat f(x_i)\big]^2 + \frac{p}{N}\,\sigma_\varepsilon^2.$$

**Classification and 0-1 loss**

Bias and variance do not add as they do for squared error: variance tends to dominate, while bias is tolerable as long as you are on the correct side of the decision boundary. Hence biased methods often do well! Friedman (1996), "On Bias, Variance, 0/1-Loss...", shows
$$\Pr\big(\hat G(x_0) \ne G(x_0)\big) \approx \Phi\Bigg(\frac{\mathrm{sign}\big(1/2 - f(x_0)\big)\,\big(E\hat f(x_0) - 1/2\big)}{\sqrt{\mathrm{Var}(\hat f(x_0))}}\Bigg),$$
where $G(x_0) = I\big(f(x_0) > \tfrac12\big)$ is the Bayes classifier (Exercise 7.2). Hence on the wrong side of the decision boundary, increasing the variance can help.

**Bias-variance schematic**

*Figure: the model space is the set of all possible predictions from the model, with the "closest fit" labeled with a black dot. The model bias from the truth is shown, along with the variance, indicated by the large yellow circle centered at the black dot labeled "closest fit in population". A shrunken or regularized fit is also shown, having additional estimation bias but smaller prediction error due to its decreased variance.*
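The three-term decomposition can be checked with a quick Monte Carlo experiment (a sketch under my own assumptions: a deliberately biased estimator, the sample mean of the responses, with a fixed design and Gaussian noise):

```python
import random

random.seed(1)
f = lambda x: 3.0 * x
sigma = 1.0
xs = [i / 20 for i in range(20)]      # fixed design, as assumed above
x0, n_reps = 0.9, 20000

preds, errs = [], []
for _ in range(n_reps):
    ys = [f(x) + random.gauss(0, sigma) for x in xs]
    fhat = sum(ys) / len(ys)          # a deliberately biased estimator: the sample mean
    preds.append(fhat)
    y0 = f(x0) + random.gauss(0, sigma)   # a new observation at x0
    errs.append((y0 - fhat) ** 2)

err_x0 = sum(errs) / n_reps                               # direct estimate of Err(x0)
mean_pred = sum(preds) / n_reps
bias2 = (mean_pred - f(x0)) ** 2
var = sum((p - mean_pred) ** 2 for p in preds) / n_reps
decomposed = sigma ** 2 + bias2 + var                     # sigma^2 + Bias^2 + Variance
print(err_x0, decomposed)   # the two agree up to Monte Carlo error
```

Here the variance term is $\sigma_\varepsilon^2/20$ (a mean of 20 observations) while the bias term dominates, illustrating the trade-off.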
**Simulation study**

There are 50 observations and 20 predictors, uniformly distributed in the hypercube $[0, 1]^{20}$. The situations are as follows:

- Left panels: $Y$ is 1 if $X_1 > 1/2$ and 0 otherwise, and we use $k$-nearest neighbors.
- Right panels: $Y$ is 1 if $\frac{1}{10}\sum_{j=1}^{10} X_j > 1/2$ and 0 otherwise, and we use best subset linear regression indexed by subset size $p$.

There are 100 realizations of this simulation, and the same fixed test set of size 10,000 is used for each.

*Figure: prediction error, squared bias, and variance for the simulated example. The top row is regression with squared error loss; the bottom row is classification with 0-1 loss. The models are $k$-nearest neighbors (left, plotted against the number of neighbors $k$) and best subset regression of size $p$ (right, plotted against subset size $p$). The variance and bias curves are the same in regression and classification, but the prediction error curve is different.*

**Optimism of the training error**

Training error:
$$\overline{\mathrm{err}} = \frac{1}{N}\sum_{i=1}^N L(y_i, \hat f(x_i)).$$

In-sample error:
$$\mathrm{Err}_{\mathrm{in}} = \frac{1}{N}\sum_{i=1}^N E_{Y^0}\big[L(Y_i^0, \hat f(x_i)) \mid T\big],$$
where at each $x_i \in T$ we observe a new response $Y_i^0$.

Optimism:
$$\mathrm{op} \equiv \mathrm{Err}_{\mathrm{in}} - \overline{\mathrm{err}}.$$
If we can estimate op, then we can estimate $\mathrm{Err}_{\mathrm{in}} = \overline{\mathrm{err}} + \mathrm{op}$.
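The optimism op can be estimated directly by drawing fresh responses $Y_i^0$ at the same $x_i$. A sketch (my own toy setup: simple least squares with $d = 2$ coefficients, so the average optimism should approach $2d\sigma_\varepsilon^2/N$, a result stated just below):

```python
import random

random.seed(2)
N, sigma, reps = 30, 1.0, 4000
xs = [i / N for i in range(N)]              # fixed design
f = lambda x: 1.0 + 2.0 * x

def ols(ys):
    # closed-form simple least squares: intercept a, slope b (d = 2 coefficients)
    xbar, ybar = sum(xs) / N, sum(ys) / N
    b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
        sum((x - xbar) ** 2 for x in xs)
    return ybar - b * xbar, b

op_sum = 0.0
for _ in range(reps):
    ys = [f(x) + random.gauss(0, sigma) for x in xs]
    a, b = ols(ys)
    err_bar = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys)) / N
    ys0 = [f(x) + random.gauss(0, sigma) for x in xs]   # fresh Y_i^0 at the same x_i
    err_in = sum((y0 - (a + b * x)) ** 2 for x, y0 in zip(xs, ys0)) / N
    op_sum += err_in - err_bar

op_mean = op_sum / reps
print(op_mean)   # close to 2 * d * sigma^2 / N = 2 * 2 * 1.0 / 30 ≈ 0.133
```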
For squared error, 0-1, and other loss functions, one can show quite generally that
$$\omega \equiv E_y(\mathrm{op}) = \frac{2}{N}\sum_{i=1}^N \mathrm{Cov}(\hat y_i, y_i),$$
where $E_y$ takes the expectation with respect to the $y_i \in T$, while the $x_i \in T$ are held fixed.

For linear model fitting (with $d$ coefficients),
$$\sum_{i=1}^N \mathrm{Cov}(\hat y_i, y_i) = d\,\sigma_\varepsilon^2$$
for the additive error model $Y = f(X) + \varepsilon$, and so
$$\widehat{\mathrm{Err}}_{\mathrm{in}} = \overline{\mathrm{err}} + 2\,\frac{d}{N}\,\sigma_\varepsilon^2.$$
This estimate has the property that $E_y(\widehat{\mathrm{Err}}_{\mathrm{in}}) = E_y(\mathrm{Err}_{\mathrm{in}})$.

**Cp and AIC statistics**

$$C_p = \overline{\mathrm{err}} + 2\,\frac{d}{N}\,\hat\sigma_\varepsilon^2,$$
$$\mathrm{AIC} = -\frac{2}{N}\,\mathrm{loglik} + 2\,\frac{d}{N},$$
where $\mathrm{loglik} = \sum_{i=1}^N \log \Pr_{\hat\theta}(y_i)$ is the observed log-likelihood and $\hat\theta$ is the MLE of the parameter vector $\theta$.

**Example: Cp for linear operators**

Assume $y_i = f(x_i) + \varepsilon_i$ with $\varepsilon_i \sim (0, \sigma_\varepsilon^2)$. Often the fitted vector $\hat{\mathbf f}$ is linear in $\mathbf y$:
$$\hat{\mathbf f} = \mathbf S\,\mathbf y,$$
e.g. linear regression, ridge regression, cubic smoothing splines. Then
$$N\,E_y(\mathrm{Err}_{\mathrm{in}}) = N\sigma_\varepsilon^2 + \sum_{i=1}^N \big[f(x_i) - \{\mathbf S\mathbf f\}_i\big]^2 + \sigma_\varepsilon^2\,\mathrm{tr}(\mathbf S^T\mathbf S)$$
and
$$\begin{aligned}
N\,E_y(\overline{\mathrm{err}}) &= E\|\mathbf y - \mathbf S\mathbf y\|^2\\
&= \mathbf f^T(\mathbf I - \mathbf S)^T(\mathbf I - \mathbf S)\mathbf f + E\big(\varepsilon^T(\mathbf I - \mathbf S)^T(\mathbf I - \mathbf S)\varepsilon\big)\\
&= \mathrm{Bias}^2 + \sigma_\varepsilon^2\,\mathrm{tr}(\mathbf I - 2\mathbf S + \mathbf S^T\mathbf S).
\end{aligned}$$

Hence
$$E_y(\mathrm{Err}_{\mathrm{in}}) - E_y(\overline{\mathrm{err}}) = \frac{2\sigma_\varepsilon^2}{N}\,\mathrm{tr}(\mathbf S).$$
But this is $E_y(\mathrm{op})$, and
$$\mathrm{Cov}(\mathbf y, \hat{\mathbf y}) = \mathrm{Cov}(\mathbf y, \mathbf S\mathbf y) = \sigma_\varepsilon^2\,\mathbf S.$$
Based on the above, we define the effective degrees of freedom
$$\mathrm{df} = \frac{\sum_{i=1}^N \mathrm{Cov}(\hat y_i, y_i)}{\sigma_\varepsilon^2};$$
for fits from linear operators as above, $\mathrm{df} = \mathrm{tr}(\mathbf S)$.

**Example: phoneme recognition**

The logistic regression coefficient function $\beta(f) = \sum_{m=1}^M h_m(f)\,\theta_m$ is modeled in $M$ spline basis functions. *Figure: log-likelihood loss (left) and 0-1 misclassification loss (right), each plotted against the number of basis functions, with Train, Test, and AIC curves.*
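The identity $\mathrm{df} = \mathrm{tr}(\mathbf S)$ can be checked numerically. A sketch under my own assumptions (a one-predictor ridge fit with no intercept, so $\mathbf S = \mathbf x\mathbf x^T/(\sum x_i^2 + \lambda)$ and $\mathrm{tr}(\mathbf S) = \sum x_i^2/(\sum x_i^2 + \lambda)$; all sizes and the penalty $\lambda$ are arbitrary):

```python
import random

random.seed(6)
N, lam, sigma, reps = 40, 10.0, 1.0, 4000
xs = [(i + 1) / N for i in range(N)]          # one fixed predictor, no intercept
s = sum(x * x for x in xs)
df = s / (s + lam)                            # trace of the ridge smoother matrix S
f0 = [3.0 * x for x in xs]                    # true mean vector

# Monte Carlo estimate of sum_i Cov(yhat_i, y_i) / sigma^2
draws = []
for _ in range(reps):
    ys = [f + random.gauss(0, sigma) for f in f0]
    bhat = sum(x * y for x, y in zip(xs, ys)) / (s + lam)   # ridge coefficient
    draws.append((ys, [bhat * x for x in xs]))

cov_sum = 0.0
for i in range(N):
    my = sum(d[0][i] for d in draws) / reps
    mh = sum(d[1][i] for d in draws) / reps
    cov_sum += sum((d[0][i] - my) * (d[1][i] - mh) for d in draws) / (reps - 1)

print(df, cov_sum / sigma ** 2)   # the Monte Carlo sum approximates trace(S)
```

Note that $\mathrm{df} < 1$ here even though one coefficient is fit: shrinkage spends less than a full degree of freedom.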
Left panel: the AIC statistic is used to estimate $\mathrm{Err}_{\mathrm{in}}$ using log-likelihood loss; included is an estimate of Err based on a test sample. AIC does well except in the overparametrized case ($M = 256$ parameters for $N = 1000$ observations). Right panel: the same is done for 0-1 loss; although the AIC formula does not strictly apply here, it does a reasonable job in this case.

**BIC: Bayesian information criterion**

$$\mathrm{BIC} = -2\,\mathrm{loglik} + (\log N)\,d.$$

Under the Gaussian model with $\sigma_\varepsilon^2$ known, $-2\,\mathrm{loglik}$ equals (up to a constant) $\sum_i (y_i - \hat f(x_i))^2/\sigma_\varepsilon^2$, which is $N\,\overline{\mathrm{err}}/\sigma_\varepsilon^2$ for squared error loss. Hence we can write
$$\mathrm{BIC} = \frac{N}{\sigma_\varepsilon^2}\Big[\overline{\mathrm{err}} + (\log N)\,\frac{d}{N}\,\sigma_\varepsilon^2\Big],$$
so BIC is proportional to AIC, with the factor 2 replaced by $\log N$.

**Bayesian model selection**

Consider candidate models $\mathcal M_m$, $m = 1, \ldots, M$, with parameter priors $\Pr(\theta_m \mid \mathcal M_m)$. Given data $Z = \{x_i, y_i\}_1^N$, the posterior probability of a given model is
$$\Pr(\mathcal M_m \mid Z) \propto \Pr(\mathcal M_m)\,\Pr(Z \mid \mathcal M_m) = \Pr(\mathcal M_m)\int \Pr(Z \mid \theta_m, \mathcal M_m)\,\Pr(\theta_m \mid \mathcal M_m)\,d\theta_m.$$

The posterior odds are
$$\frac{\Pr(\mathcal M_m \mid Z)}{\Pr(\mathcal M_\ell \mid Z)} = \frac{\Pr(\mathcal M_m)}{\Pr(\mathcal M_\ell)}\cdot\frac{\Pr(Z \mid \mathcal M_m)}{\Pr(Z \mid \mathcal M_\ell)},$$
and the rightmost quantity
$$\mathrm{BF}(Z) = \frac{\Pr(Z \mid \mathcal M_m)}{\Pr(Z \mid \mathcal M_\ell)}$$
is called the Bayes factor.

Typically we assume that the prior over models is uniform, so that $\Pr(\mathcal M_m)$ is constant. A Laplace approximation to the integral gives
$$\log \Pr(Z \mid \mathcal M_m) = \log \Pr(Z \mid \hat\theta_m, \mathcal M_m) - \frac{d_m}{2}\log N + O(1),$$
where $\hat\theta_m$ is a maximum likelihood estimate and $d_m$ is the number of free parameters of model $\mathcal M_m$. If we define our loss function to be $-2\log \Pr(Z \mid \hat\theta_m, \mathcal M_m)$, this is equivalent to the BIC criterion, and
$$\Pr(\mathcal M_m \mid Z) \propto \Pr(\mathcal M_m)\,e^{-\frac{1}{2}\mathrm{BIC}_m}.$$

**VC dimension (Vapnik-Chervonenkis)**

Consider a class of indicator functions $\mathcal F = \{f(x, \alpha)\}$, indexed by a parameter $\alpha$, e.g.
$$\mathcal F_1 = \{I(\alpha_0 + \alpha_1 x > 0)\} \quad \text{or} \quad \mathcal F_2 = \{I(\sin(\alpha x) > 0)\}.$$
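A minimal illustration of BIC as a model-selection rule (my own toy example, not from the slides: two nested Gaussian linear models with $\sigma^2$ known, so $-2\,\mathrm{loglik}$ is $\mathrm{RSS}/\sigma^2$ up to an additive constant):

```python
import random, math

random.seed(3)
N, sigma = 100, 1.0
xs = [i / (N - 1) for i in range(N)]
ys = [2.0 + 5.0 * x + random.gauss(0, sigma) for x in xs]   # truth: intercept + slope

# closed-form simple least squares for the larger model
xbar, ybar = sum(xs) / N, sum(ys) / N
b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
    sum((x - xbar) ** 2 for x in xs)
a = ybar - b * xbar

rss1 = sum((y - ybar) ** 2 for y in ys)                     # intercept only, d = 1
rss2 = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))  # intercept + slope, d = 2

# with sigma^2 known, -2 loglik equals RSS / sigma^2 up to an additive constant
bic1 = rss1 / sigma ** 2 + math.log(N) * 1
bic2 = rss2 / sigma ** 2 + math.log(N) * 2
print(bic1 > bic2)   # True: the fit improvement exceeds the extra log(N) penalty
```

With a real slope in the truth, the drop in RSS from the extra coefficient dwarfs the $\log N \approx 4.6$ penalty, so BIC keeps the slope; had the true slope been zero, the penalty would usually favor the smaller model.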
The VC dimension of a class $\mathcal F = \{f(x, \alpha)\}$ is defined to be the largest number of points (in some configuration) that can be shattered by members of $\mathcal F$. A set of points is said to be shattered by a class of functions if, no matter how we assign a binary label to each point, a member of the class can perfectly separate them. Here VC dim$(\mathcal F_1) = 2$ and VC dim$(\mathcal F_2) = \infty$.

*Figure: the first three panels show that the class of lines in the plane can shatter three points. The last panel shows that this class cannot shatter four points, as no line will put the hollow points on one side and the solid points on the other. Hence the VC dimension of the class of straight lines in the plane is three. Note that a class of nonlinear curves could shatter four points, and hence has VC dimension greater than three.*

*Figure: the solid curve is the function $\sin(50x)$ for $x \in [0, 1]$. The blue (hollow) and green (solid) points illustrate how the associated indicator function $I(\sin(\alpha x) > 0)$ can shatter (separate) an arbitrarily large number of points by choosing an appropriately high frequency $\alpha$.*

**VC bounds**

Example: binary classification with a class $\mathcal F = \{f(x, \alpha)\}$ of VC dimension $h$. With probability at least $1 - \eta$ over training samples,
$$\mathrm{Err}_T \le \overline{\mathrm{err}} + \frac{\epsilon}{2}\Big(1 + \sqrt{1 + \frac{4\,\overline{\mathrm{err}}}{\epsilon}}\Big), \qquad \epsilon = a_1\,\frac{h\big[\log(a_2 N/h) + 1\big] - \log(\eta/4)}{N},$$
where $0 < a_1 \le 4$ and $0 < a_2 \le 2$. These bounds are typically far too loose for reliable error estimation, but they can nevertheless guide model selection (Structural Risk Minimization, due to Vapnik).
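Shattering can be checked by brute force for small point sets. A sketch for the class $\mathcal F_1$ (my own enumeration; on the real line the members of $\mathcal F_1$ are exactly the half-lines $x > t$ and $x < t$, so it suffices to try one threshold between each pair of adjacent points and each of the two orientations):

```python
import itertools

def can_shatter(points):
    # enumerate every labeling that a half-line classifier can realize on `points`
    cuts = sorted(points)
    thresholds = [cuts[0] - 1] + [(a + b) / 2 for a, b in zip(cuts, cuts[1:])] + [cuts[-1] + 1]
    achievable = set()
    for t in thresholds:
        for sign in (1, -1):                    # x > t versus x < t
            achievable.add(tuple(int(sign * (x - t) > 0) for x in points))
    # shattered iff all 2^n binary labelings are achievable
    return all(lab in achievable for lab in itertools.product((0, 1), repeat=len(points)))

print(can_shatter([0.0, 1.0]))         # True: two points can be shattered
print(can_shatter([0.0, 1.0, 2.0]))    # False: labels (1, 0, 1) are unreachable
```

This reproduces VC dim$(\mathcal F_1) = 2$: every pair of distinct points is shattered, but no triple is.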
*Figure: boxplots show the distribution of the relative error
$$100 \cdot \frac{\mathrm{Err}_T(\hat\alpha) - \min_\alpha \mathrm{Err}_T(\alpha)}{\max_\alpha \mathrm{Err}_T(\alpha) - \min_\alpha \mathrm{Err}_T(\alpha)}$$
over the four scenarios (reg/KNN, reg/linear, class/KNN, class/linear), for AIC, BIC, and SRM. This is the error in using the chosen model relative to the best model. There are 100 training sets each of size 80 represented in each boxplot, with the errors computed on test sets of size 10,000.*

**Cross-validation**

Simple, and best overall; see chapter 3.

*Figure: prediction error and the ten-fold cross-validation curve estimated from a single training set, from the Linear Model - Classification scenario defined earlier, plotted against subset size $p$.*

**Cross-validation: bias due to reduced training-set size**

*Figure: hypothetical learning curve for a classifier on a given task, a plot of $1 - \mathrm{Err}$ versus the size of the training set $N$.* With a dataset of 200 observations, fivefold cross-validation would use training sets of size 160, which would behave much like the full set. However, with a dataset of 50 observations, fivefold cross-validation would use training sets of size 40, and this would result in a considerable overestimate of prediction error.

**Bootstrap methods**

We wish to assess the statistical accuracy of a quantity $S(Z)$ computed from our dataset. $B$ training sets $Z^{*b}$, $b = 1, \ldots, B$, each of size $N$, are drawn with replacement from the original dataset.
The quantity of interest $S(Z)$ is computed from each bootstrap training set, and the values $S(Z^{*1}), \ldots, S(Z^{*B})$ are used to assess the statistical accuracy of $S(Z)$.

*Figure: schematic of the bootstrap. From the training sample $Z = (z_1, z_2, \ldots, z_N)$ we draw bootstrap samples $Z^{*1}, Z^{*2}, \ldots, Z^{*B}$, and from each we compute a bootstrap replication $S(Z^{*b})$.*

- The bootstrap is useful for estimating the standard error of a statistic $s(Z)$: we use the standard error of the bootstrap values $s(Z^{*1}), s(Z^{*2}), \ldots, s(Z^{*B})$. E.g. $s(Z)$ could be the prediction from a cubic spline curve at some fixed predictor value $x$.
- There is often more than one way to draw bootstrap samples; e.g. for a smoother, one could draw samples from the data or draw samples from the residuals.
- The bootstrap is "non-parametric", i.e. it doesn't assume a parametric distribution for the data. If we carry the bootstrap out parametrically (i.e. draw from a normal distribution), then we get the usual textbook (Fisher information-based) formulas for standard errors as $N \to \infty$.
- We can get confidence intervals for an underlying population parameter from the percentiles of the bootstrap values $s(Z^{*1}), \ldots, s(Z^{*B})$; there are other, more sophisticated confidence intervals via the bootstrap.

**Bootstrap estimation of prediction error**

$$\widehat{\mathrm{Err}}_{\mathrm{boot}} = \frac{1}{B}\,\frac{1}{N}\sum_{b=1}^B\sum_{i=1}^N L\big(y_i, \hat f^{*b}(x_i)\big),$$
and
$$\Pr\{\text{observation } i \in \text{bootstrap sample } b\} = 1 - \Big(1 - \frac{1}{N}\Big)^{N} \approx 1 - e^{-1} = 0.632.$$

This can be a poor estimate. Consider 1-NN with two equal classes and class labels independent of the features: then $\widehat{\mathrm{Err}}_{\mathrm{boot}} = 0.5 \times (1 - 0.632) = 0.184$, while $\mathrm{Err} = 0.5$!

**Leave-one-out bootstrap:**
$$\widehat{\mathrm{Err}}^{(1)} = \frac{1}{N}\sum_{i=1}^N \frac{1}{|C^{-i}|}\sum_{b \in C^{-i}} L\big(y_i, \hat f^{*b}(x_i)\big),$$
where $C^{-i} = \{b : i \notin \text{bootstrap sample } b\}$.

**.632 bootstrap estimator:**
$$\widehat{\mathrm{Err}}^{(.632)} = .368\,\overline{\mathrm{err}} + .632\,\widehat{\mathrm{Err}}^{(1)}.$$
This corrects for learning-curve bias: bootstrap samples typically contain only about 0.632 of the distinct training samples.

**.632+ bootstrap estimator:** . . .

*Figure: boxplots show the distribution of the relative error $100 \cdot [\mathrm{Err}_T(\hat\alpha) - \min_\alpha \mathrm{Err}_T(\alpha)]\,/\,[\max_\alpha \mathrm{Err}_T(\alpha) - \min_\alpha \mathrm{Err}_T(\alpha)]$ over the four scenarios (reg/KNN, reg/linear, class/KNN, class/linear), for cross-validation and the bootstrap. This is the error in using the chosen model relative to the best model. There are 20 training sets represented in each boxplot.*
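The 0.632 inclusion probability behind both the poor behavior of $\widehat{\mathrm{Err}}_{\mathrm{boot}}$ and the .632 correction is easy to verify by simulation (my own sketch; $N$, $B$, and the seed are arbitrary choices):

```python
import random

random.seed(5)
N, B = 100, 2000
analytic = 1 - (1 - 1 / N) ** N          # tends to 1 - 1/e ≈ 0.632 as N grows
hits = 0
for _ in range(B):
    boot = [random.randrange(N) for _ in range(N)]   # one bootstrap sample of indices
    if 0 in boot:                                    # did observation 0 get drawn?
        hits += 1
frac = hits / B
print(analytic, frac)   # both near 0.632
```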