The One-Sample t-Test

Statistical Inference Using Estimated Standard Errors: The One-Sample t Test

- We’ve covered the basics of hypothesis testing using the one-sample z-test.

- The one-sample z-test is appropriate when we know the standard deviation in the population.

- In actual research, we rarely know the population standard deviation, so we have to estimate the standard error of the mean from our sample data: ŝ_X̄ = ŝ/√N

- When we use an estimated standard error of the mean in the denominator, we're now using a one-sample t-test instead of a z-test: tobs = (X̄ − μ)/ŝ_X̄

- The t-score gives the estimated number of standard errors by which our X̄ differs from μ (the value given in the null hypothesis).
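The arithmetic for a t-score can be sketched with hypothetical numbers (X̄ = 14, μ = 15, ŝ = 2.0, and N = 16 are assumptions for illustration, not the data from these notes):

```python
import math

# Hypothetical sample summary statistics (assumed for illustration)
sample_mean = 14.0   # X-bar, the observed sample mean
mu = 15.0            # the value of mu under the null hypothesis
s_hat = 2.0          # estimated population standard deviation
N = 16               # sample size

# Estimated standard error of the mean: s-hat / sqrt(N)
se = s_hat / math.sqrt(N)          # 2.0 / 4 = 0.5

# t-score: estimated number of standard errors X-bar lies from mu
t_obs = (sample_mean - mu) / se    # (14 - 15) / 0.5 = -2.0
print(se, t_obs)
```

So this hypothetical sample mean lies 2 estimated standard errors below the hypothesized μ.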

- But this causes a problem

- By using the estimated standard error, we can no longer use the normal distribution (& z-table) to find our critical values – there is a separate t distribution where we get our critical values.

- The t distribution is similar in shape to the normal distribution, but it has heavier tails, and the difference is most noticeable when the sample size is small

- So, the major difference here in terms of our hypothesis-testing procedure is that we need to look up our critical values from the t distribution rather than the z (normal) distribution.

- Appendix D (p. 573) provides these critical values. To use the Appendix:

- Look at the column corresponding to your alpha level (e.g., non-directional, α = .05)

- Look down the first column to find your degrees of freedom (N – 1)

- So, for df = 10, the critical t is 2.228 for a non-directional test using α = .05.

- (Notice at the bottom, when df is infinite, critical t is 1.96 – same as for z)
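If SciPy is available, the same critical values can be looked up programmatically (a sketch for checking your table work, not part of the textbook's procedure):

```python
from scipy import stats

# Two-tailed critical t for alpha = .05: put alpha/2 in each tail
alpha = 0.05
t_crit_df10 = stats.t.ppf(1 - alpha / 2, df=10)      # ~2.228, matches Appendix D
# As df grows very large, t approaches the normal distribution's 1.96
t_crit_large = stats.t.ppf(1 - alpha / 2, df=10**6)  # ~1.96
print(round(t_crit_df10, 3), round(t_crit_large, 2))
```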

Steps of One-Sample t-test

1. State hypotheses:

- Non-directional (two-tailed) test

H0: μ = 15

H1: μ ≠ 15

2. State decision rules:

- For df = N – 1 = 15 – 1 = 14 and α = .05, critical t-value = 2.145

- If tobs > +2.145, or tobs < -2.145, reject null

- If –2.145 ≤ tobs ≤ +2.145, do not reject null

3. Determine ŝ_X̄ for the sample: ŝ_X̄ = ŝ/√N

4. Calculate tobs: tobs = (X̄ − μ)/ŝ_X̄

5. Compare tobs to tcrit:

- Since –2.145 < –2.123 < +2.145, do not reject the null – the observed difference is most likely due to sampling error.

6. State the conclusion in words:

- The mean of 14 is not statistically significantly different from 15, so on average, students seem to be studying the recommended number of hours per week.
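The whole procedure can be sketched in code. The data below are hypothetical (chosen so that, like the notes' example, X̄ = 14 with N = 15 and the null of μ = 15 is not rejected); `scipy.stats.ttest_1samp` does steps 3 and 4 for us:

```python
from scipy import stats

# Hypothetical study-hours data for N = 15 students (not the notes' actual data)
hours = [11, 17, 12, 16, 13, 15, 14, 11, 17, 12, 16, 13, 15, 14, 14]

# Steps 3-4: scipy computes t_obs = (X-bar - mu) / (s-hat / sqrt(N))
t_obs, p_value = stats.ttest_1samp(hours, popmean=15)

# Step 5: compare to the two-tailed critical value for df = 14, alpha = .05
t_crit = stats.t.ppf(0.975, df=len(hours) - 1)   # ~2.145
reject = abs(t_obs) > t_crit                     # False here: do not reject
print(round(t_obs, 3), round(t_crit, 3), reject)
```

Equivalently, step 2's decision rule is just `abs(t_obs) > t_crit`; the p-value that `ttest_1samp` also returns leads to the same decision when compared against α.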

Confidence Intervals

- With the formal hypothesis-testing procedures we’ve covered, we tested a sample mean against a specific hypothesized value of μ

- Because of sampling error, we often cannot be very confident that the true population mean is exactly the value we observed for our sample mean.

- Another approach to estimating the population mean is to specify a range of values that we are relatively confident the population mean falls within
- The confidence interval most commonly used by researchers in the behavioral sciences is the 95% confidence interval
- Basically, we take X̄ as our best estimate of μ, and we specify a range of values around X̄ that μ has a high probability of falling in.

Confidence Intervals when σ_X̄ is known

The formula looks like this:

From: X̄ − z·σ_X̄   To: X̄ + z·σ_X̄

- for the 95% confidence interval, we would use z = 1.96
- width of the confidence interval is influenced by σ_X̄ = σ/√N, so we know that both σ and N will impact it
- as N increases, the width of the CI will decrease
- as σ decreases, the width of the CI will decrease
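A quick numeric sketch of the known-σ case (the values X̄ = 14, σ = 2.0, N = 16 are assumptions for illustration):

```python
import math

# Hypothetical values: known population sigma and an observed sample mean
sample_mean = 14.0
sigma = 2.0
N = 16

# Standard error of the mean when sigma is known: sigma / sqrt(N)
sigma_xbar = sigma / math.sqrt(N)        # 2.0 / 4 = 0.5

# 95% CI: X-bar +/- 1.96 * sigma_xbar
lower = sample_mean - 1.96 * sigma_xbar  # 14 - 0.98 = 13.02
upper = sample_mean + 1.96 * sigma_xbar  # 14 + 0.98 = 14.98
print(lower, upper)
```

Doubling N to 64 would halve σ_X̄ and therefore halve the width of the interval.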

Confidence Intervals when σ_X̄ is unknown

Structure of the formula is the same, except that the t distribution is used in place of the z distribution and ŝ_X̄ is used in place of σ_X̄

From: X̄ − t·ŝ_X̄   To: X̄ + t·ŝ_X̄

where t is the nondirectional t critical value

- The t value will depend on your sample size, so we cannot use a single constant value as we did with the z test
- We get the appropriate t critical value by calculating the degrees of freedom and going to Appendix D in the textbook
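Putting the unknown-σ case together (same hypothetical N = 15 data pattern as before, assumed for illustration):

```python
import math
from scipy import stats

# Hypothetical sample; sigma is unknown, so we estimate it from the data
hours = [11, 17, 12, 16, 13, 15, 14, 11, 17, 12, 16, 13, 15, 14, 14]
N = len(hours)
xbar = sum(hours) / N
s_hat = math.sqrt(sum((x - xbar) ** 2 for x in hours) / (N - 1))
se = s_hat / math.sqrt(N)                # s-hat / sqrt(N)

# Nondirectional critical t for df = N - 1 replaces the constant 1.96
t_crit = stats.t.ppf(0.975, df=N - 1)    # ~2.145 for df = 14
lower = xbar - t_crit * se
upper = xbar + t_crit * se
print(round(lower, 2), round(upper, 2))
```

With these numbers the 95% CI runs from about 12.89 to 15.11, so a hypothesized μ of 15 falls inside the interval – the same conclusion as failing to reject the null in the t-test.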

A CI provides especially useful additional information when you reject the null, because it gives a range of plausible values for the actual population mean underlying your observed sample mean (e.g., the actual population mean for our sample of SUNY students)

- With traditional hypothesis testing, we’d reject the null because our tobs falls beyond a critical value

- With the 95% CI around X̄, we can likewise see that the hypothesized μ is not in the confidence interval.