

Question

How can the width of a confidence interval be increased? How are confidence intervals and hypothesis tests related? What is the definition of a p-value? How are p-values used to make decisions about hypotheses? How are p-values for one-sided tests related to p-values for two-sided tests? What happens when a Type I error occurs? What about a Type II error? What are the different ways of increasing the statistical power of a hypothesis test? How are the standard normal distribution and t-distribution similar? How are they different? How does the sample size play a role in hypothesis tests and the degrees of freedom in a test? How does a matched pairs test differ from the other hypothesis tests that test a population mean?

Explanation / Answer

a) The width of a confidence interval can be increased by decreasing the sample size or by raising the confidence level (for example, from 95% to 99%); greater variability in the data also widens the interval. A quick check is sketched below.
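Here is a minimal Python sketch (not part of the original answer) using SciPy's t distribution; the standard deviation and sample sizes are made-up numbers chosen only to show how the width responds.

```python
import numpy as np
from scipy import stats

def ci_width(sd, n, confidence):
    """Width of a t-based confidence interval for a population mean."""
    t_crit = stats.t.ppf(1 - (1 - confidence) / 2, df=n - 1)
    return 2 * t_crit * sd / np.sqrt(n)

sd = 10.0                                    # assumed sample standard deviation
print(ci_width(sd, n=100, confidence=0.95))  # baseline
print(ci_width(sd, n=25,  confidence=0.95))  # smaller sample     -> wider interval
print(ci_width(sd, n=100, confidence=0.99))  # higher confidence  -> wider interval
```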

b) There is an extremely close relationship between confidence intervals and hypothesis testing. When a 95% confidence interval is constructed, all values in the interval are considered plausible values for the parameter being estimated, and values outside the interval are rejected as relatively implausible. If the value of the parameter specified by the null hypothesis is contained in the 95% interval, then the null hypothesis cannot be rejected at the 0.05 level; if it is not in the interval, then the null hypothesis can be rejected at the 0.05 level. Likewise, if a 99% confidence interval is constructed, values outside the interval are rejected at the 0.01 level. The sketch below illustrates this duality.
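As an illustration (not from the original answer), the following Python sketch draws a made-up sample and confirms that a two-sided one-sample t test at the 0.05 level rejects H0: mu = mu0 exactly when mu0 falls outside the 95% confidence interval.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.normal(loc=5.3, scale=2.0, size=40)   # illustrative data
mu0 = 5.0                                     # hypothesised mean

t_stat, p_two_sided = stats.ttest_1samp(x, popmean=mu0)

mean, se = x.mean(), stats.sem(x)
lo, hi = stats.t.interval(0.95, df=len(x) - 1, loc=mean, scale=se)

print(f"95% CI: ({lo:.3f}, {hi:.3f}), two-sided p = {p_two_sided:.4f}")
print("reject H0 at 0.05:", p_two_sided < 0.05)
print("mu0 outside the CI:", not (lo <= mu0 <= hi))   # same verdict as the line above
```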

c) The p-value is computed from the observed sample results (a test statistic) relative to a statistical model and measures how extreme the observation is. Formally, it is the probability, assuming the null hypothesis is true, of obtaining a test statistic at least as extreme as the one actually observed.
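A small Python sketch of that definition, with made-up data: the one-sample t statistic is computed by hand and its two-sided p-value is read off the tails of the t distribution (the probability of a statistic at least as extreme as the observed one, assuming H0 is true).

```python
import numpy as np
from scipy import stats

x = np.array([5.1, 4.8, 5.6, 5.3, 4.9, 5.7, 5.2, 5.4])   # illustrative sample
mu0 = 5.0                                                 # H0: mu = 5

t_obs = (x.mean() - mu0) / (x.std(ddof=1) / np.sqrt(len(x)))
df = len(x) - 1
p_two_sided = 2 * stats.t.sf(abs(t_obs), df)   # P(|T| >= |t_obs|) under H0

print(t_obs, p_two_sided)
# Decision rule from part d): reject H0 when p_two_sided < alpha (e.g. 0.05).
```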

d) If the p-value is less than the chosen level of significance, then we reject H0.

If the p-value is greater than or equal to the level of significance, then we do not reject H0.

e) For a symmetric test statistic such as z or t, the p-value of a two-sided test is twice the p-value of the corresponding one-sided test, provided the observed effect lies in the direction of the one-sided alternative. For example:
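A Python check with made-up data (the alternative= keyword requires a reasonably recent SciPy):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
x = rng.normal(loc=6.0, scale=2.0, size=30)   # sample mean sits above mu0 = 5

_, p_two = stats.ttest_1samp(x, popmean=5.0)                         # H1: mu != 5
_, p_one = stats.ttest_1samp(x, popmean=5.0, alternative='greater')  # H1: mu > 5

print(p_two, p_one, np.isclose(p_two, 2 * p_one))   # True: two-sided = 2 * one-sided
```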

f)

Type I error

When the null hypothesis is true and you reject it, you make a Type I error. The probability of making a Type I error is α, which is the level of significance you set for your hypothesis test. An α of 0.05 indicates that you are willing to accept a 5% chance that you are wrong when you reject the null hypothesis. To lower this risk, you must use a lower value for α. However, using a lower value for α means that you will be less likely to detect a true difference if one really exists.

Type II error

When the null hypothesis is false and you fail to reject it, you make a Type II error. The probability of making a Type II error is β, which depends on the power of the test. You can decrease your risk of committing a Type II error by ensuring your test has enough power, for example by making sure the sample size is large enough to detect a practical difference when one truly exists.

The probability of rejecting the null hypothesis when it is false is 1 − β; this value is the power of the test.
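Both error rates can be seen in a small Monte Carlo sketch (an illustration, not from the original answer); the means, standard deviation, sample size, and α are arbitrary choices.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha, n, sigma, n_sims = 0.05, 30, 1.0, 10_000

def rejection_rate(true_mu):
    """Share of simulated t tests of H0: mu = 5 that reject at level alpha."""
    rejections = 0
    for _ in range(n_sims):
        sample = rng.normal(loc=true_mu, scale=sigma, size=n)
        _, p = stats.ttest_1samp(sample, popmean=5.0)
        rejections += p < alpha
    return rejections / n_sims

print("Type I error rate (H0 true):", rejection_rate(5.0))   # close to alpha = 0.05
print("Power when true mu = 5.5:  ", rejection_rate(5.5))    # 1 - beta
```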

g) Power can be increased by increasing the sample size, using a larger significance level α, reducing variability in the measurements, or studying a larger effect size. If the consequences of making one type of error are more severe or costly than making the other type of error, choose a level of significance and a power for the test that reflect the relative severity of those consequences.
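These levers can be seen numerically; the sketch below (an illustration with assumed effect sizes) computes the power of a two-sided one-sample t test from the noncentral t distribution.

```python
import numpy as np
from scipy import stats

def power(effect_size, n, alpha):
    """Power of a two-sided one-sample t test for a standardised mean shift."""
    df = n - 1
    nc = effect_size * np.sqrt(n)            # noncentrality parameter
    t_crit = stats.t.ppf(1 - alpha / 2, df)
    return stats.nct.sf(t_crit, df, nc) + stats.nct.cdf(-t_crit, df, nc)

print(power(0.5, n=30, alpha=0.05))   # baseline
print(power(0.5, n=60, alpha=0.05))   # larger sample       -> more power
print(power(0.5, n=30, alpha=0.10))   # larger alpha        -> more power
print(power(0.8, n=30, alpha=0.05))   # larger effect size  -> more power
```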

h) Both the standard normal distribution and the t-distribution are symmetric, bell-shaped, and centered at 0. The t-distribution has heavier tails, and its exact shape depends on its degrees of freedom; as the degrees of freedom grow, it approaches the standard normal distribution.

i) The sample size determines the degrees of freedom of the test: a one-sample t test on n observations uses n − 1 degrees of freedom, so it is the sample size that fixes which t-distribution the test statistic follows. When n is large, the t- and standard normal distributions behave almost the same; for small n they differ noticeably, with the t-distribution requiring larger critical values. A larger sample also shrinks the standard error, which narrows confidence intervals and raises power.
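A short Python illustration of h) and i): the two-sided 5% critical value of the t-distribution falls toward the standard normal value (about 1.96) as the degrees of freedom n − 1 increase; the degrees of freedom shown are chosen arbitrarily.

```python
from scipy import stats

z_crit = stats.norm.ppf(0.975)         # two-sided 5% critical value, standard normal
print(f"standard normal: {z_crit:.3f}")

for df in (2, 5, 10, 30, 100, 1000):   # df = n - 1 for a one-sample t test
    print(f"t with df={df:>4}: {stats.t.ppf(0.975, df):.3f}")
```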

j) Use the paired t-test when you have one measurement variable and two nominal variables, one of the nominal variables has only two values, and you only have one observation for each combination of the nominal variables; in other words, you have multiple pairs of observations. It tests whether the mean difference in the pairs is different from 0.
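For example, here is a Python sketch with made-up before/after measurements; the last line checks that the paired t test gives the same result as a one-sample t test of the within-pair differences against 0.

```python
import numpy as np
from scipy import stats

before = np.array([12.1, 11.4, 13.0, 12.7, 11.9, 12.3, 13.4, 12.0])   # illustrative
after  = np.array([11.6, 11.0, 12.5, 12.9, 11.2, 11.8, 12.9, 11.7])

t_paired, p_paired = stats.ttest_rel(before, after)        # paired (matched pairs) t test
t_diff, p_diff = stats.ttest_1samp(before - after, 0.0)    # equivalent one-sample test

print(t_paired, p_paired)
print(np.isclose(t_paired, t_diff), np.isclose(p_paired, p_diff))   # True True
```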
