Question
We have a true model

Y_t = b0 + b1 X_t + b2 Y_{t-1} + E_t

where the error term follows an unusual serial correlation: E_t = E_{t-1} + U_t. Here U_t is a perfectly well-behaved random variable with zero mean and constant variance, and cov(U_t, U_s) = 0 whenever t ≠ s.

a. State the consequences of estimating this model by OLS. Which of the Gauss-Markov assumptions does it violate?

b. Find a transformation of the data so that the same data can be used to estimate a model that satisfies the Gauss-Markov assumptions. Be clear and explicit about the process, and clearly explain why the transformed model meets the Gauss-Markov assumptions.

* This kind of error structure is called random-walk serial correlation.
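As a quick numerical illustration (a minimal sketch in Python, with an arbitrary seed and sample sizes chosen only for demonstration), the simulation below generates the error process E_t = E_{t-1} + U_t and checks two facts that drive both parts of the question: Var(E_t) = t * Var(U_t) grows with t, so the errors cannot satisfy the constant-variance and no-serial-correlation assumptions, while the first difference E_t - E_{t-1} = U_t is well behaved.

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 200, 5000

# U_t: iid shocks with mean 0 and constant variance 1.
u = rng.normal(0.0, 1.0, size=(reps, n))

# Random-walk errors: E_t = E_{t-1} + U_t = U_1 + ... + U_t.
e = np.cumsum(u, axis=1)

# Var(E_t) = t * Var(U_t): the error variance grows linearly in t.
print("Var(E_10)  ~", e[:, 9].var())    # roughly 10
print("Var(E_100) ~", e[:, 99].var())   # roughly 100

# First-differencing recovers the well-behaved shock:
# E_t - E_{t-1} = U_t, iid with constant variance.
d = np.diff(e, axis=1)
print("Var(dE)    ~", d.var())          # roughly 1
```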
Explanation / Answer
Suppose there is a sequence of random variables {Y_t}, t = 1, ..., n, and a sequence of vectors of random variables {X_t}, t = 1, ..., n. In dealing with conditional expectations of Y_t given X_t, the sequence {Y_t} is said to be heteroscedastic if the conditional variance of Y_t given X_t changes with t. Some authors refer to this as conditional heteroscedasticity to emphasize that it is the sequence of conditional variances that changes, not the unconditional variance. In fact, it is possible to observe conditional heteroscedasticity even in a sequence of unconditionally homoscedastic random variables; the opposite does not hold. If the variance changes only because of changes in the value of X, and not because of a dependence on the index t, the changing variance can be described by a scedastic function.

When using some statistical techniques, such as ordinary least squares (OLS), a number of assumptions are typically made. One of these is that the error term has a constant variance. This might not hold even if the error terms are assumed to be drawn from identical distributions. For example, the error variance could increase with each observation, which is often the case with cross-sectional or time-series measurements. Heteroscedasticity is often studied as part of econometrics, which frequently deals with data exhibiting it. White's influential paper[2] used the spelling "heteroskedasticity", whereas "heteroscedasticity" has been used in later works.[3]

Consequences

Heteroscedasticity does not cause ordinary least squares coefficient estimates to be biased, but it does cause the OLS estimates of the variance (and thus the standard errors) of the coefficients to be biased, possibly above or below the true population variance. Regression analysis using heteroscedastic data therefore still provides an unbiased estimate of the relationship between the predictor variable and the outcome, but the standard errors, and hence any inferences obtained from the analysis, are suspect. Biased standard errors lead to biased inference, so the results of hypothesis tests may be wrong. For example, the biased standard errors that OLS produces under heteroscedasticity may lead a researcher not to reject a null hypothesis at a given significance level even though that null hypothesis does not hold in the actual population (i.e., to commit a Type II error).

Under certain assumptions, the OLS estimator has a normal asymptotic distribution when properly normalized and centered (even when the data do not come from a normal distribution). This result is used to justify using a normal distribution, or a chi-square distribution (depending on how the test statistic is calculated), when conducting a hypothesis test, and it continues to hold under heteroscedasticity. More precisely, the OLS estimator in the presence of heteroscedasticity is asymptotically normal, when properly normalized and centered, with a variance-covariance matrix that differs from the homoscedastic case. In 1980, White[2] proposed a consistent estimator for the variance-covariance matrix of the asymptotic distribution of the OLS estimator. This validates hypothesis testing based on OLS estimates together with White's variance-covariance estimator under heteroscedasticity.
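To make the last point concrete, here is a minimal sketch (assuming the statsmodels package; the data-generating process and all parameter values are made up for illustration) comparing classical OLS standard errors with White's heteroscedasticity-consistent standard errors, which statsmodels exposes via cov_type="HC0":

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500
x = rng.uniform(0, 10, n)
# Heteroscedastic errors: the error standard deviation grows with x.
y = 1.0 + 2.0 * x + rng.normal(0.0, 0.5 * x)

X = sm.add_constant(x)
ols = sm.OLS(y, X)

# Classical standard errors, which assume homoscedasticity (biased here).
fit_classic = ols.fit()
# White's (1980) heteroscedasticity-consistent covariance estimator.
fit_robust = ols.fit(cov_type="HC0")

print("classical SEs:", fit_classic.bse)
print("White/HC0 SEs:", fit_robust.bse)
# The slope coefficients match; only the standard errors differ,
# which is exactly the bias described in the passage above.
```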
Heteroscedasticity is also a major practical issue encountered in ANOVA problems.[4] The F test can still be used in some circumstances.[5] However, it has been said that students in econometrics should not overreact to heteroscedasticity.[3] One author wrote, "unequal error variance is worth correcting only when the problem is severe."[6] Another word of caution took the form, "heteroscedasticity has never been a reason to throw out an otherwise good model."[7][3] With the advent of heteroscedasticity-consistent standard errors, which allow inference without specifying the conditional second moment of the error term, testing for conditional homoscedasticity is not as important as it once was. The econometrician Robert Engle won the 2003 Nobel Memorial Prize in Economics for his studies of regression analysis in the presence of heteroscedasticity, which led to his formulation of the autoregressive conditional heteroscedasticity (ARCH) modeling technique.

Detection

[Figure: absolute value of residuals for simulated first-order heteroscedastic data.]

There are several methods to test for the presence of heteroscedasticity. Although tests for heteroscedasticity between groups can formally be considered a special case of testing within regression models, some tests have structures specific to this case. Tests in regression include the Park test (1966),[8] the Glejser test (1969),[9][10] the White test,[2] and the Breusch-Pagan test.
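As a companion sketch for the detection tests named above (again assuming statsmodels, and reusing a hypothetical simulated data set), the Breusch-Pagan and White tests can be run directly on the OLS residuals:

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan, het_white

rng = np.random.default_rng(2)
n = 500
x = rng.uniform(0, 10, n)
y = 1.0 + 2.0 * x + rng.normal(0.0, 0.5 * x)  # error variance rises with x

X = sm.add_constant(x)
resid = sm.OLS(y, X).fit().resid

# Breusch-Pagan: regresses squared residuals on the regressors.
lm, lm_pval, fstat, f_pval = het_breuschpagan(resid, X)
print(f"Breusch-Pagan LM p-value: {lm_pval:.4g}")

# White test: also includes squares and cross-products of the regressors.
lm_w, lm_w_pval, f_w, f_w_pval = het_white(resid, X)
print(f"White test LM p-value:    {lm_w_pval:.4g}")

# Small p-values reject the null of homoscedasticity, as expected
# for this simulated data.
```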