
Question

We have discussed in class the fact that asymptotic theory enables us to simplify the requirements for OLS analysis and hypothesis testing.

State the assumptions required to obtain asymptotic normality when using OLS.

Explain in non-technical terms the two theoretical simplifications you obtain when using asymptotic results. Which of these is likely to be more useful in practice?

Your friend is taking econometrics for the first time and is confused about the difference between asymptotic normality and asymptotic efficiency. Explain to him what each of those phrases means.

Explanation / Answer

The assumption on the errors' second moments (sometimes called the spherical-errors assumption) can be split into two parts:
Homoscedasticity: $\mathrm{E}[\varepsilon_i^2 \mid X] = \sigma^2$, meaning the error term has the same variance $\sigma^2$ in every observation. When this requirement is violated (heteroscedasticity), a more efficient estimator is weighted least squares. If the errors have infinite variance, the OLS estimates will also have infinite variance (although, by the law of large numbers, they will nonetheless tend toward the true values so long as the errors have zero mean). In this case, robust estimation techniques are recommended; see the sketch after this list.
No autocorrelation: the errors are uncorrelated across observations, $\mathrm{E}[\varepsilon_i \varepsilon_j \mid X] = 0$ for $i \neq j$. This assumption may be violated with time series data, panel data, cluster samples, hierarchical data, repeated-measures data, longitudinal data, and other data with dependencies. In such cases generalized least squares provides a better alternative than OLS. Another expression for autocorrelation is serial correlation.
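To make the heteroscedasticity point concrete, here is a minimal Python sketch using statsmodels (the data-generating process, parameter values, and variance function below are illustrative assumptions, not part of the original answer). It contrasts conventional OLS standard errors with heteroscedasticity-robust ones, and shows weighted least squares when the variance function is known:

```python
# Minimal sketch (illustrative assumptions throughout): compare conventional,
# heteroscedasticity-robust, and weighted-least-squares standard errors.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
x = rng.uniform(0, 10, n)
sigma = 0.5 + 0.3 * x                    # error spread grows with x: heteroscedasticity
y = 1.0 + 2.0 * x + rng.normal(0, sigma)

X = sm.add_constant(x)                   # add an intercept column
ols = sm.OLS(y, X).fit()                          # conventional (homoscedastic) SEs
robust = sm.OLS(y, X).fit(cov_type="HC1")         # heteroscedasticity-robust SEs
wls = sm.WLS(y, X, weights=1.0 / sigma**2).fit()  # efficient if variances are known

print("OLS SEs:   ", ols.bse)
print("Robust SEs:", robust.bse)
print("WLS SEs:   ", wls.bse)
```

Here the robust fit changes only the standard errors, not the coefficient estimates, while WLS reweights the observations and is more efficient when the variance function is correctly specified.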
Normality. It is sometimes additionally assumed that the errors are normally distributed conditional on the regressors:
$\varepsilon \mid X \sim \mathcal{N}(0, \sigma^{2} I_{n}).$
This assumption is not needed for the validity of the OLS method, although certain additional finite-sample properties can be established when it does hold (especially in the area of hypothesis testing). When the errors are normal, the OLS estimator is also equivalent to the maximum likelihood estimator.

There are several different frameworks in which the linear regression model can be cast in order to make the OLS technique applicable. Each of these settings produces the same formulas and the same results; the only difference is the interpretation and the assumptions that have to be imposed in order for the method to give meaningful results. The choice of framework depends mostly on the nature of the data at hand and on the inference task to be performed.
One line of difference in interpretation is whether to treat the regressors as random variables or as predefined constants. In the first case (random design) the regressors $x_i$ are random and sampled together with the $y_i$'s from some population, as in an observational study. This approach allows for a more natural study of the asymptotic properties of the estimators. In the other interpretation (fixed design), the regressors $X$ are treated as known constants set by a design, and $y$ is sampled conditionally on the values of $X$, as in an experiment. For practical purposes this distinction is often unimportant, since estimation and inference are carried out while conditioning on $X$. All results stated here are within the random design framework.

In statistics, ordinary least squares (OLS) is a type of linear least squares method for estimating the unknown parameters in a linear regression model. OLS chooses the parameters of a linear function of a set of explanatory variables by the principle of least squares: minimizing the sum of the squares of the differences between the observed dependent variable (the values of the variable being predicted) in the given dataset and those predicted by the linear function.
Geometrically, this is seen as the sum of the squared distances, parallel to the axis of the dependent variable, between each data point in the set and the corresponding point on the regression surface – the smaller the differences, the better the model fits the data. The resulting estimator can be expressed by a simple formula, especially in the case of a simple linear regression, in which there is a single regressor on the right side of the regression equation.
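Since the answer notes that the estimator has a simple closed form, here is a small numpy sketch of that formula, $\hat{\beta} = (X^{\top}X)^{-1}X^{\top}y$, on made-up data (the simulated coefficients are illustrative assumptions):

```python
# OLS closed form, beta_hat = (X'X)^{-1} X'y, on simulated data.
import numpy as np

rng = np.random.default_rng(1)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])  # intercept + one regressor
beta_true = np.array([1.0, 2.0])                       # assumed true coefficients
y = X @ beta_true + rng.normal(size=n)

beta_explicit = np.linalg.inv(X.T @ X) @ X.T @ y       # textbook normal equations
beta_stable, *_ = np.linalg.lstsq(X, y, rcond=None)    # numerically preferred solver

print(beta_explicit)  # both should be close to [1.0, 2.0]
print(beta_stable)
```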
The OLS estimator is consistent when the regressors are exogenous, and optimal in the class of linear unbiased estimators when the errors are homoscedastic and serially uncorrelated. Under these conditions, the method of OLS provides minimum-variance mean-unbiased estimation when the errors have finite variances. Under the additional assumption that the errors are normally distributed, OLS is the maximum likelihood estimator.
OLS is used in fields as diverse as economics (econometrics), political science, psychology, and engineering (control theory).

Estimators
Consistency
A sequence of estimates is said to be consistent if it converges in probability to the true value of the parameter being estimated:
$\hat{\theta}_{n} \xrightarrow{p} \theta_{0}.$
That is, roughly speaking, with an infinite amount of data the estimator (the formula for generating the estimates) would, with probability approaching one, be arbitrarily close to the correct value of the parameter being estimated.
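A short simulation makes this tangible (the true slope of 2.0 and the sample sizes are assumptions chosen for illustration): as $n$ grows, the OLS slope estimate settles toward the true value.

```python
# Illustrative consistency check: the OLS slope approaches the true slope
# (2.0, an assumed value) as the sample size grows.
import numpy as np

rng = np.random.default_rng(2)
for n in [50, 500, 5_000, 50_000]:
    x = rng.normal(size=n)
    y = 2.0 * x + rng.normal(size=n)  # no-intercept model with true slope 2.0
    slope = (x @ y) / (x @ x)         # OLS slope for the no-intercept model
    print(f"n = {n:6d}  slope estimate = {slope:.4f}")
```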
Efficiency

An estimator is asymptotically efficient if, among consistent and asymptotically normal estimators, it attains the smallest possible asymptotic variance (under regularity conditions, the Cramér-Rao lower bound).
Asymptotic distribution
If it is possible to find sequences of non-random constants $\{a_n\}$, $\{b_n\}$ (possibly depending on the value of $\theta_0$), and a non-degenerate distribution $G$ such that
$b_{n}(\hat{\theta}_{n} - a_{n}) \xrightarrow{d} G,$
then the sequence of estimators $\hat{\theta}_{n}$ is said to have the asymptotic distribution $G$.
Most often, the estimators encountered in practice are asymptotically normal, meaning their asymptotic distribution is the normal distribution, with $a_n = \theta_0$, $b_n = \sqrt{n}$, and $G = \mathcal{N}(0, V)$:
$\sqrt{n}\,(\hat{\theta}_{n} - \theta_{0}) \xrightarrow{d} \mathcal{N}(0, V).$
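This can be checked numerically. In the sketch below (all simulation settings are illustrative assumptions), the scaled estimation error $\sqrt{n}(\hat{\theta}_{n} - \theta_{0})$ for an OLS slope is computed across many replications; its sample mean and variance should be close to the theoretical values 0 and $V$:

```python
# Illustrative check of asymptotic normality for the OLS slope in a
# no-intercept model y = theta0 * x + eps.  With x ~ N(0,1) and eps ~ N(0,1),
# the theoretical asymptotic variance is V = sigma^2 / E[x^2] = 1.
import numpy as np

rng = np.random.default_rng(3)
theta0, n, reps = 2.0, 1_000, 5_000
draws = np.empty(reps)
for r in range(reps):
    x = rng.normal(size=n)
    y = theta0 * x + rng.normal(size=n)
    theta_hat = (x @ y) / (x @ x)
    draws[r] = np.sqrt(n) * (theta_hat - theta0)

print("mean (theory: 0):", draws.mean())
print("variance (theory: 1):", draws.var())
```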
Asymptotic theory, or large-sample theory, is a framework for assessing properties of estimators and statistical tests. Within this framework, it is typically assumed that the sample size $n$ grows indefinitely; the properties of estimators and tests are then evaluated in the limit as $n \to \infty$. In practice, a limit evaluation is treated as being approximately valid for large finite sample sizes as well.
Many mathematical models involve input parameters that are not precisely known. Global sensitivity analysis aims to identify the parameters whose uncertainty has the largest impact on the variability of a quantity of interest (the output of the model). One of the statistical tools used to quantify the influence of each input variable on the output is the Sobol sensitivity index. We consider the statistical estimation of this index from a finite sample of model outputs: we present two estimators and state a central limit theorem for each. We show that one of these estimators has an optimal asymptotic variance. We also generalize our results to the case where the true output is not observable and is replaced by a noisy version.
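For concreteness, here is a generic pick-freeze Monte Carlo sketch of a first-order Sobol index, $S_i = \mathrm{Var}(\mathrm{E}[Y \mid X_i]) / \mathrm{Var}(Y)$. This is a standard construction and not necessarily either of the two estimators the paragraph refers to; the toy model f and the sample size are illustrative assumptions:

```python
# Pick-freeze Monte Carlo sketch for first-order Sobol indices.
# Toy model: Y = X1 + 2*X2 with independent N(0,1) inputs, so
# Var(Y) = 5, S_1 = 1/5 and S_2 = 4/5 exactly.
import numpy as np

def f(x):
    return x[:, 0] + 2.0 * x[:, 1]

rng = np.random.default_rng(4)
N, d = 100_000, 2
X  = rng.normal(size=(N, d))
Xp = rng.normal(size=(N, d))        # independent "primed" copy of the inputs

for i in range(d):
    X_pf = Xp.copy()
    X_pf[:, i] = X[:, i]            # freeze coordinate i, resample the rest
    Y, Yi = f(X), f(X_pf)
    # Cov(Y, Y^i) / Var(Y) estimates S_i, since the two runs share X_i.
    s_hat = (np.mean(Y * Yi) - np.mean(Y) * np.mean(Yi)) / np.var(Y)
    print(f"S_{i+1} estimate: {s_hat:.3f}")
```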
