Question
Suppose that we fit Model (1) to the $n$ observations $(y_1, x_{11}, x_{21}), \ldots, (y_n, x_{1n}, x_{2n})$:

$$y_i = \beta_0 + \beta_1 x_{1i} + \beta_2 x_{2i} + \varepsilon_i, \qquad i = 1, \ldots, n, \qquad (1)$$

where the $\varepsilon_i$ are independently and identically distributed normal random variables with mean zero and variance $\sigma^2$, $i = 1, \ldots, n$, and all the $x$'s are fixed.

a) Suppose that Model (1) is the true model. Show that at any observation $y_i$, the point estimator of the mean response and its residual are two statistically independent normal random variables.

b) Suppose the true model is Model (1), but we fit the data to the following Model (2) (that is, we ignore the variable $x_2$):

$$y_i = \beta_0 + \beta_1 x_{1i} + \varepsilon_i, \qquad i = 1, \ldots, n. \qquad (2)$$

Assume that $\bar x_1 = 0$, $\bar x_2 = 0$, and $\sum_{i=1}^n x_{1i} x_{2i} = 0$. Derive the least-squares estimator of $\beta_1$ obtained from fitting Model (2). Is this least-squares estimator biased for $\beta_1$ under Model (1)?
Explanation / Answer
OLS estimator for Model (2). Write $\hat\beta_1$ for the least-squares slope, $\hat y_i$ for the fitted value, and $\hat u_i = y_i - \hat y_i$ for the residual. Because $\bar x_1 = 0$, the least-squares intercept is $\hat\beta_0 = \bar y$, and the slope can be found by minimizing the sum of squared residuals over the slope alone (centering $y$ changes nothing below, since $\sum_i x_{1i}\,\bar y = \bar y \sum_i x_{1i} = 0$).

Expand the sum of squared residuals as a function of $\hat\beta_1$:

$$\sum_{i=1}^n (y_i - \hat\beta_1 x_{1i})^2 = \sum_i y_i^2 - 2\hat\beta_1 \sum_i x_{1i} y_i + \hat\beta_1^2 \sum_i x_{1i}^2 .$$

Take the derivative with respect to $\hat\beta_1$ and set it equal to zero in order to minimize the sum of squared residuals:

$$\frac{d}{d\hat\beta_1}\left[ \sum_i y_i^2 - 2\hat\beta_1 \sum_i x_{1i} y_i + \hat\beta_1^2 \sum_i x_{1i}^2 \right] = -2\sum_i x_{1i} y_i + 2\hat\beta_1 \sum_i x_{1i}^2 = 0 ,$$

so $2\sum_i x_{1i} y_i = 2\hat\beta_1 \sum_i x_{1i}^2$ and

$$\hat\beta_1 = \frac{\sum_{i=1}^n x_{1i} y_i}{\sum_{i=1}^n x_{1i}^2} .$$
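As a quick sanity check (not part of the original answer; the numbers below are made up), the closed-form slope can be compared with the slope from an intercept-included least-squares fit of Model (2) when $\bar x_1 = 0$; a minimal NumPy sketch:

```python
# Hypothetical sanity check (not from the original answer; the numbers are
# made up): with x1 centered so that x1-bar = 0, the closed-form slope
# sum(x1*y) / sum(x1**2) should match the slope from an intercept-included
# least-squares fit of Model (2), and the fitted intercept should equal y-bar.
import numpy as np

rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)
x1 -= x1.mean()                            # enforce x1-bar = 0, as the problem assumes
y = 2.0 + 1.5 * x1 + rng.normal(size=n)    # illustrative data only

beta1_closed_form = np.sum(x1 * y) / np.sum(x1 ** 2)

X = np.column_stack([np.ones(n), x1])      # design matrix [1, x1] for Model (2)
beta0_hat, beta1_hat = np.linalg.lstsq(X, y, rcond=None)[0]

print(beta1_closed_form, beta1_hat)        # the two slopes agree (up to rounding)
print(beta0_hat, y.mean())                 # intercept equals y-bar when x1-bar = 0
```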
Expected value under the true Model (1). Substitute $y_i = \beta_0 + \beta_1 x_{1i} + \beta_2 x_{2i} + \varepsilon_i$ into the estimator. Since the $x$'s are fixed and $E[\varepsilon_i] = 0$,

$$E[\hat\beta_1] = \frac{\beta_0 \sum_i x_{1i} + \beta_1 \sum_i x_{1i}^2 + \beta_2 \sum_i x_{1i} x_{2i} + \sum_i x_{1i}\,E[\varepsilon_i]}{\sum_i x_{1i}^2} = \beta_1 + \beta_2\,\frac{\sum_i x_{1i} x_{2i}}{\sum_i x_{1i}^2},$$

because $\sum_i x_{1i} = n\bar x_1 = 0$. Under the assumption $\sum_i x_{1i} x_{2i} = 0$ the remaining term also vanishes, so $E[\hat\beta_1] = \beta_1$: the least-squares estimator from Model (2) is unbiased for $\beta_1$ under Model (1). (Without the orthogonality assumption it would carry the omitted-variable bias $\beta_2 \sum_i x_{1i} x_{2i} / \sum_i x_{1i}^2$.) A numerical illustration of both cases is sketched below.
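A minimal Monte Carlo sketch, assuming made-up parameter values and a synthetic fixed design (none of this is in the original answer): with $\sum_i x_{1i} x_{2i} = 0$ the Model (2) slope averages to $\beta_1$ across repeated error draws, while a correlated $x_2$ produces the omitted-variable bias computed above.

```python
# Hypothetical Monte Carlo sketch (not from the original answer; all parameter
# values are made up). The x's are generated once and held fixed; only the
# errors are redrawn, matching the fixed-design setup of the problem.
import numpy as np

rng = np.random.default_rng(1)
n, reps = 100, 20_000
beta0, beta1, beta2, sigma = 1.0, 2.0, -3.0, 1.0   # illustrative values only

x1 = rng.normal(size=n)
x1 -= x1.mean()                                    # x1-bar = 0

# Orthogonal x2: remove the x1 component and center, so sum(x1*x2) = 0 and x2-bar = 0.
x2_raw = rng.normal(size=n)
x2_orth = x2_raw - np.sum(x1 * x2_raw) / np.sum(x1 ** 2) * x1
x2_orth -= x2_orth.mean()

# Correlated x2: add a multiple of x1, so sum(x1*x2) != 0.
x2_corr = x2_orth + 0.8 * x1

def mean_model2_slope(x2):
    """Average of the Model (2) slope over repeated draws of the errors."""
    eps = rng.normal(scale=sigma, size=(reps, n))
    y = beta0 + beta1 * x1 + beta2 * x2 + eps      # data generated by Model (1)
    return (y @ x1 / np.sum(x1 ** 2)).mean()       # slope = sum(x1*y)/sum(x1**2)

print(mean_model2_slope(x2_orth))                  # approx beta1 = 2.0 (unbiased)
print(mean_model2_slope(x2_corr),                  # approx beta1 + omitted-variable bias
      beta1 + beta2 * np.sum(x1 * x2_corr) / np.sum(x1 ** 2))
```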
Variance. With the $x$'s fixed, the only random part of $\hat\beta_1 = \beta_1 + \beta_2 \sum_i x_{1i} x_{2i} / \sum_i x_{1i}^2 + \sum_i x_{1i}\varepsilon_i / \sum_i x_{1i}^2$ is the error term, so, using that the $\varepsilon_i$ are independent with common variance $\sigma^2$,

$$\operatorname{Var}(\hat\beta_1) = \operatorname{Var}\!\left(\frac{\sum_i x_{1i}\,\varepsilon_i}{\sum_i x_{1i}^2}\right) = \frac{\sum_i x_{1i}^2\,\operatorname{Var}(\varepsilon_i)}{\left(\sum_i x_{1i}^2\right)^2} = \frac{\sigma^2}{\sum_{i=1}^n x_{1i}^2}.$$
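The same kind of simulation can check the variance formula; a minimal self-contained sketch with made-up values (not part of the original answer):

```python
# Hypothetical check of Var(beta1-hat) = sigma^2 / sum(x1**2) (not from the
# original answer; values are made up). With fixed x's, the mean part of y is
# constant across replications, so only the errors contribute to the variance.
import numpy as np

rng = np.random.default_rng(2)
n, reps, sigma = 100, 50_000, 1.3
x1 = rng.normal(size=n)
x1 -= x1.mean()                                  # x1-bar = 0, then held fixed

eps = rng.normal(scale=sigma, size=(reps, n))    # fresh errors each replication
y = 2.0 * x1 + eps                               # any fixed mean part would do here
slopes = y @ x1 / np.sum(x1 ** 2)                # Model (2) slope, one per replication

print(slopes.var(ddof=1))                        # empirical variance across replications
print(sigma ** 2 / np.sum(x1 ** 2))              # theoretical sigma^2 / sum(x1**2)
```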