MODELS FOR VARIABLE RECRUITMENT (continued)

Fitting Real Data to the Spawner-Recruit Models

One strategy for fitting these spawner-recruit models to real data is to linearize the models by means of some suitable transformation and then apply standard linear regression methods. This was routine practice in the past, when computers were unavailable. Better techniques are available now, but the linearization approach is worth exploring because it illustrates some important general lessons.

The Linearized Ricker SR Model

Start from the Ricker model and divide both sides by S:

    R = a S exp(-b S)   ==>   R/S = a exp(-b S)

Apply the logarithm function to both sides:

    ln(R/S) = ln(a) - b S

This has the linear form Y = A + B X. To estimate the parameters a and b, one can regress ln(R/S) against S. The parameter estimates are

    a^ = exp(intercept)   and   b^ = -slope

The estimate for parameter a will be subject to the logarithmic transformation bias that we examined in one of the early lectures.

Explore the Excel demonstration of fitting the Ricker SR model with data from Table 11.6 of Ricker (1975).

The Linearized Beverton and Holt SR Model

Start from the Beverton and Holt model:

    R = S / (c + d S)

Inverting both sides gives

    1/R = d + c (1/S)   or, equivalently,   S/R = c + d S

To estimate the parameters c and d, one can regress 1/R against 1/S, or S/R against S. In the first case the parameter estimates are

    c^ = slope   and   d^ = intercept

Explore the Excel demonstration of fitting the Beverton & Holt SR model with data from Table 11.8 of Ricker (1975).

These linearization methods are quick and easy, but it is likely that the parameter estimates will be biased because the linearized models violate one or more of the assumptions that underlie the method of linear regression.

FW431/531  Copyright 2008 by David B. Sampson  Recruitment4 - Page 98

A Brief Review of Linear Regression Theory

The basic model of linear regression is of the following form.
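The two regressions described above can be sketched in a few lines of code. The following is a minimal illustration, not part of the original course materials: it uses NumPy's polyfit for the straight-line fits, and the spawner values and parameter values (a = 4.0, b = 0.001, c = 50.0, d = 0.2) are invented for the example. The data are noise-free, so the back-transformed estimates recover the true parameters essentially exactly.

```python
import numpy as np

def fit_ricker_linearized(S, R):
    """Estimate Ricker parameters a, b by regressing ln(R/S) against S."""
    slope, intercept = np.polyfit(S, np.log(R / S), 1)
    # Back-transform: a^ = exp(intercept), b^ = -slope
    return np.exp(intercept), -slope

def fit_beverton_holt_linearized(S, R):
    """Estimate Beverton-Holt parameters c, d by regressing 1/R against 1/S."""
    slope, intercept = np.polyfit(1.0 / S, 1.0 / R, 1)
    # c^ = slope, d^ = intercept
    return slope, intercept

# Hypothetical, noise-free spawner-recruit data generated from the models.
S = np.linspace(100.0, 2000.0, 20)
a_true, b_true = 4.0, 0.001
R_ricker = a_true * S * np.exp(-b_true * S)

c_true, d_true = 50.0, 0.2
R_bh = S / (c_true + d_true * S)

a_hat, b_hat = fit_ricker_linearized(S, R_ricker)
c_hat, d_hat = fit_beverton_holt_linearized(S, R_bh)
print(a_hat, b_hat)  # recovers a = 4.0, b = 0.001
print(c_hat, d_hat)  # recovers c = 50.0, d = 0.2
```

With real data the errors do not vanish like this, which is exactly where the transformation bias and assumption violations discussed in these notes come into play.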
    Y = a + b X + ε

where the parameters a and b are constants, Y is the dependent variable, X is the independent variable, and ε is an error term that accounts for discrepancies between the observed values of Y and the values predicted by the model.

The following assumptions underlie linear regression:

- The variables Y and X are related as specified in the model. We can fit a straight line to any XY data set, but the results may be meaningless if the underlying relationship is not linear.
- The residuals (the ε values) are normally distributed with zero mean and constant variance.
- The residuals are mutually independent.
- The X values are known without error.

If we look at either of the linearized stock-recruit models, we can see that they violate at least one of these assumptions. For example, if R for a given level of S is a normally distributed random variable, then 1/R is not normally distributed, and neither is ln(R/S) or R/S. There are also problems with the assumption of independent residuals, because usually the R value in one data pair becomes the S value in the next pair.
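The claim that transforming a normally distributed R destroys normality is easy to check by simulation. The sketch below is illustrative only and is not from the original notes; the recruitment mean and standard deviation (100 and 10) are invented, and skewness is computed directly from its definition so that only NumPy is needed. A normal variable has skewness near zero, while its reciprocal is noticeably right-skewed.

```python
import numpy as np

def skewness(x):
    """Sample skewness: third central moment divided by the cubed std. dev."""
    x = np.asarray(x)
    return np.mean((x - x.mean()) ** 3) / x.std() ** 3

rng = np.random.default_rng(1)

# Hypothetical recruitment values, drawn from a normal distribution.
R = rng.normal(100.0, 10.0, size=1_000_000)

print(skewness(R))        # near 0: R itself is (sampled from) a normal
print(skewness(1.0 / R))  # clearly positive: 1/R is right-skewed
```

So even if the raw recruitment errors were normal, the residuals of the linearized regressions (which are built from 1/R or ln(R/S)) would not be, violating the normality assumption listed above.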