It is fairly easy to heuristically describe the difference between a weakly
dependent process and an integrated process. Using the MA(1) and the stable AR(1) examples is usually sufficient.
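A quick simulation makes the contrast concrete. The sketch below (a minimal illustration, not from the text; the MA coefficient, the AR(1) parameter 0.5, and the sample size are arbitrary choices) shows that the sample autocorrelations of the MA(1) and stable AR(1) series die out quickly, while those of the random walk do not.

    import numpy as np

    rng = np.random.default_rng(0)
    T = 2000
    e = rng.standard_normal(T + 1)

    ma1 = e[1:] + 0.5 * e[:-1]          # MA(1): weakly dependent
    ar1 = np.zeros(T)                   # stable AR(1) with rho = 0.5
    rw = np.zeros(T)                    # random walk: unit root process
    for t in range(1, T):
        ar1[t] = 0.5 * ar1[t - 1] + e[t]
        rw[t] = rw[t - 1] + e[t]

    def autocorr(x, k):
        # sample correlation between x_t and x_{t-k}
        return np.corrcoef(x[k:], x[:-k])[0, 1]

    for name, x in [("MA(1)", ma1), ("AR(1)", ar1), ("random walk", rw)]:
        print(name, round(autocorr(x, 1), 2), round(autocorr(x, 10), 2))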
When the data are weakly dependent and the explanatory variables are
contemporaneously exogenous, OLS is consistent. This result has many applications, including the stable AR(1) regression model. When we add the appropriate
homoskedasticity and no serial correlation assumptions, the usual test statistics are asymptotically valid.
The random walk process is a good example of a unit root (highly persistent) process. In a one-semester course, the issue comes down to whether or not to first difference the data before specifying the linear model. While unit root tests are covered in Chapter 18, just computing the first-order autocorrelation is often
sufficient, perhaps after detrending. The examples in Section 11.3 illustrate how different first-difference results can be from estimating equations in levels.
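A minimal sketch of that rough check (assuming y holds the series of interest): detrend and then compute the first-order autocorrelation of the residuals.

    import numpy as np
    import statsmodels.api as sm

    def rho1_after_detrending(y):
        t = np.arange(len(y))
        resid = sm.OLS(y, sm.add_constant(t)).fit().resid
        return np.corrcoef(resid[1:], resid[:-1])[0, 1]

    # A value near one suggests first differencing before specifying the model;
    # a value well below one suggests the levels equation is reasonable.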
Section 11.4 is novel in an introductory text, and simply points out that, if a model is dynamically complete in a well-defined sense, it should not have serial
correlation. Therefore, we need not worry about serial correlation when, say, we test the efficient market hypothesis. Section 11.5 further investigates the
homoskedasticity assumption, and, in a time series context, emphasizes that what is contained in the explanatory variables determines what kind of heteroskedasticity is ruled out by the usual OLS inference. These two sections could be skipped without loss of continuity.
CHAPTER 12
TEACHING NOTES
Most of this chapter deals with serial correlation, but it also explicitly considers heteroskedasticity in time series regressions. The first section allows a review of what assumptions were needed to obtain both finite sample and asymptotic results. Just as with heteroskedasticity, serial correlation itself does not invalidate R-squared. In fact, if the data are stationary and weakly dependent, R-squared and adjusted R-squared consistently estimate the population R-squared (which is well-defined under stationarity).
Equation (12.4) is useful for explaining why the usual OLS standard errors are not generally valid with AR(1) serial correlation. It also provides a good starting point for discussing serial correlation-robust standard errors in Section 12.5. The subsection on serial correlation with lagged dependent variables is included to debunk the myth that OLS is always inconsistent with lagged dependent variables and serial correlation. I do not teach it to undergraduates, but I do to master’s students.
Section 12.2 is somewhat untraditional in that it begins with an asymptotic t test for AR(1) serial correlation (under strict exogeneity of the regressors). It may seem heretical not to give the Durbin-Watson statistic its usual prominence, but I do believe the DW test is less useful than the t test. With nonstrictly exogenous regressors I cover only the regression form of Durbin’s test, as the h statistic is asymptotically equivalent and not always computable.
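As a sketch of the t test (assuming strictly exogenous regressors, with y the dependent variable and X the regressor matrix including a constant):

    import numpy as np
    import statsmodels.api as sm

    def ar1_t_test(y, X):
        u = np.asarray(sm.OLS(y, X).fit().resid)
        aux = sm.OLS(u[1:], sm.add_constant(u[:-1])).fit()
        return aux.params[1], aux.tvalues[1]   # rho-hat and its t statistic

    # For the regression form of Durbin's test (regressors not strictly exogenous),
    # add the explanatory variables from period t to the auxiliary regression and
    # again use the t statistic on the lagged residual.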
Section 12.3, on GLS and FGLS estimation, is fairly standard, although I try to show how comparing OLS estimates and FGLS estimates is not so straightforward. Unfortunately, at the beginning level (and even beyond), it is difficult to choose a course of action when they are very different.
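A bare-bones Cochrane-Orcutt style FGLS sketch (dropping the first observation; iteration and the exact standard-error formulas are left to the packages):

    import numpy as np
    import statsmodels.api as sm

    def fgls_ar1(y, X):
        y = np.asarray(y, dtype=float)
        X = np.asarray(X, dtype=float)
        u = np.asarray(sm.OLS(y, X).fit().resid)
        rho = sm.OLS(u[1:], u[:-1].reshape(-1, 1)).fit().params[0]   # rho from OLS residuals
        y_qd = y[1:] - rho * y[:-1]                                   # quasi-differenced data
        X_qd = X[1:] - rho * X[:-1]
        return sm.OLS(y_qd, X_qd).fit(), rho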
I do not usually cover Section 12.5 in a first-semester course, but, because some econometrics packages routinely compute fully robust standard errors, students can be pointed to Section 12.5 if they need to learn something about what the corrections do. I do cover Section 12.5 for a master’s level course in applied econometrics (after the first-semester course).
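For students who just need to see what the correction does, a sketch using statsmodels (the lag truncation of 4 is an arbitrary illustrative choice, not a recommendation):

    import statsmodels.api as sm

    def ols_with_hac(y, X, lags=4):
        # OLS point estimates with Newey-West (HAC) standard errors
        return sm.OLS(y, X).fit(cov_type='HAC', cov_kwds={'maxlags': lags})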
I also do not cover Section 12.6 in class; again, this is more to serve as a reference for more advanced students, particularly those with interests in finance. One important point is that ARCH is heteroskedasticity and not serial correlation, something that is confusing in many texts. If a model contains no serial correlation, the usual heteroskedasticity-robust statistics are valid. I have a brief subsection on correcting for a known form of heteroskedasticity and AR(1) errors in models with strictly exogenous regressors.
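A sketch of a first-order ARCH check, which makes the point that ARCH is heteroskedasticity (the variance of the error given past information depends on the lagged squared error):

    import numpy as np
    import statsmodels.api as sm

    def arch1_test(y, X):
        u2 = np.asarray(sm.OLS(y, X).fit().resid) ** 2
        aux = sm.OLS(u2[1:], sm.add_constant(u2[:-1])).fit()
        return aux.params[1], aux.tvalues[1]   # a significant t statistic is evidence of ARCH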
CHAPTER 13
TEACHING NOTES
While this chapter falls under “Advanced Topics,” most of this chapter requires no more sophistication than the previous chapters. (In fact, I would argue that, with the possible exception of Section 13.5, this material is easier than some of the time series chapters.)
Pooling two or more independent cross sections is a straightforward extension of cross-sectional methods. Nothing new needs to be done in stating assumptions, except possibly mentioning that random sampling in each time period is sufficient. The practically important issue is allowing for different intercepts, and possibly different slopes, across time.
The natural experiment material and the extensions of the difference-in-differences estimator are widely applicable and, with the aid of the examples, easy to understand. Two years of panel data are often available, in which case differencing across time is a simple way of removing unobserved heterogeneity. If you have covered Chapter 9, you might compare this with a regression in levels using the second year of data, but where a lagged dependent variable is included. (The second approach only requires collecting information on the dependent variable in a previous year.) These often give similar answers. Two years of panel data, collected before and after a policy change, can be very powerful for policy analysis.
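A minimal difference-in-differences sketch for two pooled cross sections (the variable names are placeholders: after is the second-period dummy, treat the treatment-group dummy):

    import numpy as np
    import statsmodels.api as sm

    def did_estimate(y, after, treat):
        X = sm.add_constant(np.column_stack([after, treat, after * treat]))
        res = sm.OLS(y, X).fit()
        return res.params[3], res.bse[3]   # coefficient on the interaction is the DiD effect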
Having more than two periods of panel data causes slight complications in that the errors in the differenced equation may be serially correlated. (However, the traditional assumption that the errors in the original equation are serially uncorrelated is not always a good one. In other words, it is not always more appropriate to use fixed effects, as in Chapter 14, than first differencing.) With large N and relatively small T, a simple way to account for possible serial correlation after differencing is to compute standard errors that are robust to arbitrary serial correlation and heteroskedasticity. Econometrics packages that do cluster analysis (such as Stata) often allow this by specifying each cross-sectional unit as its own cluster.
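In statsmodels, for example, the same idea can be sketched as follows (groups holds the cross-sectional unit identifier for each differenced observation):

    import statsmodels.api as sm

    def fd_ols_clustered(dy, dX, groups):
        # pooled OLS on first-differenced data with unit-level clustered standard errors
        return sm.OLS(dy, dX).fit(cov_type='cluster', cov_kwds={'groups': groups})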
CHAPTER 14
TEACHING NOTES
My preference is to view the fixed and random effects methods of estimation as applying to the same underlying unobserved effects model. The name “unobserved effect” is neutral to the issue of whether the time-constant effects should be treated as fixed parameters or random variables. With large N and relatively small T, it almost always makes sense to treat them as random variables, since we can just view the unobserved ai as being drawn from the population along with the observed variables. Especially for undergraduates and master’s students, it seems sensible to not raise the philosophical issues underlying the professional debate. In my mind, the key issue in most applications is whether the unobserved effect is correlated with the observed explanatory variables. The fixed effects transformation eliminates the unobserved effect entirely whereas the random effects transformation accounts for the serial correlation in the composite error via GLS. (Alternatively, the random effects transformation only eliminates a fraction of the unobserved effect.)
As a practical matter, the fixed effects and random effects estimates are closer when T is large or when the variance of the unobserved effect is large relative to the variance of the idiosyncratic error. I think Example 14.4 is representative of what often happens in applications that apply pooled OLS, random effects, and fixed effects, at least on the estimates of the marriage and union wage premiums. The random effects estimates are below pooled OLS and the fixed effects estimates are below the random effects estimates.
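The quasi-demeaning parameter used by random effects makes this convergence easy to see; a small sketch (the variance values are illustrative):

    import numpy as np

    def re_lambda(sigma_u2, sigma_a2, T):
        # RE quasi-demeaning weight: lambda = 1 - sqrt(sigma_u^2 / (sigma_u^2 + T*sigma_a^2))
        return 1 - np.sqrt(sigma_u2 / (sigma_u2 + T * sigma_a2))

    for T in (2, 5, 20):
        print(T, round(re_lambda(sigma_u2=1.0, sigma_a2=1.0, T=T), 3))
    # lambda approaches 1 (the fixed effects transformation) as T grows or as
    # sigma_a^2 grows relative to sigma_u^2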
Choosing between the fixed effects transformation and first differencing is harder, although useful evidence can be obtained by testing for serial correlation in the first-difference estimation. If the AR(1) coefficient is significant and negative (say, less than −.3, to pick a not quite arbitrary value), perhaps fixed effects is preferred.
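A sketch of that check (assuming e is an N x (T-1) array of residuals from the first-difference estimation, with units in rows and periods in columns):

    import numpy as np
    import statsmodels.api as sm

    def fd_resid_ar1(e):
        current = e[:, 1:].ravel()
        lagged = e[:, :-1].ravel()
        aux = sm.OLS(current, sm.add_constant(lagged)).fit()
        return aux.params[1], aux.tvalues[1]

    # A coefficient near zero is consistent with the first-difference assumptions;
    # a coefficient near -0.5 is what serially uncorrelated errors in the levels
    # equation would imply, which favors fixed effects.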
Matched pairs samples have been profitably used in recent economic applications, and differencing or random effects methods can be applied. In an equation such as (14.12), there is probably no need to allow a different intercept for each sister provided that the labeling of sisters is random. The different intercepts might be needed if a certain feature of a sister that is not included in the observed controls is used to determine the ordering. A statistically significant intercept in the differenced equation would be evidence of this.
CHAPTER 15
TEACHING NOTES
When I wrote the first edition, I took the novel approach of introducing instrumental variables as a way of solving the omitted variable (or unobserved heterogeneity) problem. Traditionally, a student’s first exposure to IV methods comes by way of simultaneous equations models. Occasionally, IV is first seen as a method to solve the measurement error problem. I have even seen texts where the
first appearance of IV methods is to obtain a consistent estimator in an AR(1) model with AR(1) serial correlation.
The omitted variable problem is conceptually much easier than simultaneity, and stating the conditions needed for an IV to be valid in an omitted variable context is straightforward. Besides, most modern applications of IV have more of an
unobserved heterogeneity motivation. A leading example is estimating the return to education when unobserved ability is in the error term. We are not thinking that education and wages are jointly determined; for the vast majority of people, education is completed before we begin collecting information on wages or salaries. Similarly, in studying the effects of attending a certain type of school on student performance, the choice of school is made and then we observe performance on a test. Again, we are primarily concerned with unobserved factors that affect performance and may be correlated with school choice; it is not an issue of simultaneity.
The asymptotics underlying the simple IV estimator are no more difficult than for the OLS estimator in the bivariate regression model. Certainly consistency can be derived in class. It is also easy to demonstrate how, even just in terms of inconsistency, IV can be worse than OLS if the IV is not completely exogenous. At a minimum, it is important to always estimate the reduced form equation and test whether the IV is partially correlated with the endogenous explanatory variable. The material on multicollinearity and 2SLS estimation is a direct extension of the OLS case. Using equation (15.43), it is easy to explain why multicollinearity is generally more of a problem with 2SLS estimation.
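A sketch of the reduced form check (with x the endogenous explanatory variable, z the instrument, and W the included exogenous variables plus a constant; all names are placeholders):

    import numpy as np
    import statsmodels.api as sm

    def first_stage(x, z, W):
        fs = sm.OLS(x, np.column_stack([W, z])).fit()
        return fs, fs.tvalues[-1]   # t statistic on z; it should be comfortably large

    # With a healthy first stage, 2SLS amounts to replacing x with its first-stage fitted
    # values in the structural equation (letting a package handle the standard errors).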
Another conceptually straightforward application of IV is to solve the
measurement error problem, although, because it requires two measures, it can be hard to implement in practice.
Testing for endogeneity and testing any overidentification restrictions is something that should be covered in second-semester courses. The tests are fairly easy to motivate and are very easy to implement.
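As a sketch of the regression-based endogeneity test (same placeholder names as in the first-stage sketch above: y outcome, x suspect regressor, z instrument, W included exogenous variables plus a constant):

    import numpy as np
    import statsmodels.api as sm

    def endogeneity_test(y, x, z, W):
        vhat = np.asarray(sm.OLS(x, np.column_stack([W, z])).fit().resid)
        aux = sm.OLS(y, np.column_stack([W, x, vhat])).fit()
        return aux.tvalues[-1]   # a significant t statistic is evidence that x is endogenous

    # With more instruments than endogenous regressors, overidentification can be checked
    # by regressing the 2SLS residuals on all exogenous variables and comparing n times
    # the R-squared of that regression to a chi-square critical value.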
While I provide a treatment for time series applications in Section 15.7, I admit to having trouble finding compelling time series applications. These are likely to be found at a less aggregated level, where exogenous IVs have a chance of existing. (See also Chapter 16.)
CHAPTER 16
TEACHING NOTES
I spend some time in Section 16.1 trying to distinguish between good and inappropriate uses of SEMs. Naturally, this is partly determined by my taste, and many applications fall into a gray area. But students who are going to learn about SEMs should know that just because two (or more) variables are jointly determined does not mean that it is appropriate to specify and estimate an SEM. I have seen many bad applications of SEMs where no equation in the system can stand on its own.