The Difference between Two Means

Difference between Two Means (Independent Groups)

Learning Objectives

  1. State the assumptions for testing the difference between two means
  2. Estimate the population variance assuming homogeneity of variance
  3. Compute the standard error of the difference between means
  4. Compute t and p for the difference between means
  5. Format data for computer analysis

It is much more common for a researcher to be interested in the difference between means than in the specific values of the means themselves. This section covers how to test for differences between means from two separate groups of subjects. A later section describes how to test for differences between the means of two conditions in designs where only one group of subjects is used and each subject is tested in each condition.

We take as an example the data from the "Animal Research" case study. In this experiment, students rated (on a 7-point scale) whether they thought animal research is wrong. The sample sizes, means, and variances are shown separately for males and females in Table 1.

Table 1. Means and Variances in Animal Research study.

Group n Mean Variance
Females 17 5.353 2.743
Males 17 3.882 2.985

As you can see, the females rated animal research as more wrong than did the males. This sample difference between the female mean of 5.35 and the male mean of 3.88 is 1.47. However, the gender difference in this particular sample is not very important. What is important is whether there is a difference in the population means.

In order to test whether there is a difference between population means, we are going to make three assumptions:

  1. The two populations have the same variance. This assumption is called the assumption of homogeneity of variance.
  2. The populations are normally distributed.
  3. Each value is sampled independently from each other value. This assumption requires that each subject provide only one value. If a subject provides two scores, then the scores are not independent. The analysis of data with two scores per subject is shown in the section on the correlated t test later in this chapter.

The consequences of violating the first two assumptions are investigated in the simulation in the next section. For now, suffice it to say that small-to-moderate violations of assumptions 1 and 2 do not make much difference. It is important not to violate assumption 3.

We saw the following general formula for significance testing in the section on testing a single mean:

\mathrm{t}=\frac{\text{statistic} - \text{hypothesized value}}{\text{estimated standard error of the statistic}}

In this case, our statistic is the difference between sample means and our hypothesized value is 0. The hypothesized value is the null hypothesis that the difference between population means is 0.

We continue to use the data from the "Animal Research" case study and will compute a significance test on the difference between the mean score of the females and the mean score of the males. For this calculation, we will make the three assumptions specified above.

The first step is to compute the statistic, which is simply the difference between means:

M_{1}-M_{2}=5.3529-3.8824=1.4705

Since the hypothesized value is 0, we do not need to subtract it from the statistic.
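This first step can be checked with a short Python sketch (the group means are carried to four decimal places here; they appear rounded to three decimals in Table 1, and the variable names are illustrative):

```python
# Group means from the Animal Research study, to four decimal places
mean_females = 5.3529
mean_males = 3.8824

# The statistic: the difference between the two sample means
diff = mean_females - mean_males
print(round(diff, 4))  # 1.4705
```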

The next step is to compute the estimate of the standard error of the statistic. In this case, the statistic is the difference between means, so the estimated standard error of the statistic is s_{M_{1}-M_{2}}. Recall from the relevant section in the chapter on sampling distributions that the formula for the standard error of the difference between means is:

\sigma_{M_{1}-M_{2}}=\sqrt{\frac{\sigma_{1}^{2}}{n_{1}}+\frac{\sigma_{2}^{2}}{n_{2}}}=\sqrt{\frac{\sigma^{2}}{n}+\frac{\sigma^{2}}{n}}=\sqrt{\frac{2 \sigma^{2}}{n}}

In order to estimate this quantity, we estimate \sigma^{2} and use that estimate in place of \sigma^{2}. Since we are assuming the two population variances are the same, we estimate this variance by averaging our two sample variances. Thus, our estimate of variance is computed using the following formula:

\mathrm{MSE}=\frac{s_{1}^{2}+s_{2}^{2}}{2}

where \mathrm{MSE} is our estimate of \sigma^{2}. In this example,

\mathrm{MSE}=(2.743+2.985) / 2=2.864
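As a check, this averaging step can be sketched in a few lines of Python, using the sample variances from Table 1:

```python
# Sample variances from Table 1
var_females = 2.743
var_males = 2.985

# With equal sample sizes, MSE is the simple average of the two variances
mse = (var_females + var_males) / 2
print(round(mse, 3))  # 2.864
```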

Since n (the number of scores in each group) is 17,

s_{M_{1}-M_{2}}=\sqrt{\frac{2\,\mathrm{MSE}}{n}}=\sqrt{\frac{(2)(2.864)}{17}}=0.5805
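The same computation in Python (a minimal sketch; `mse` and `n` carry the values just derived):

```python
import math

mse = 2.864  # pooled variance estimate (MSE) from the previous step
n = 17       # number of scores in each group

# Estimated standard error of the difference between means
se_diff = math.sqrt(2 * mse / n)
print(round(se_diff, 4))  # 0.5805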

The next step is to compute t by plugging these values into the formula:

t=1.4705 / 0.5805=2.533
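In Python, this final division looks like the following (the inputs are the difference between means and the standard error computed above):

```python
import math

diff = 1.4705                        # difference between sample means
se_diff = math.sqrt(2 * 2.864 / 17)  # estimated standard error

t = diff / se_diff
print(round(t, 3))  # 2.533
```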

Finally, we compute the probability of getting a t as large as or larger than 2.533 or as small as or smaller than -2.533. To do this, we need to know the degrees of freedom. The degrees of freedom is the number of independent estimates of variance on which \mathrm{MSE} is based. This is equal to \left(n_{1}-1\right)+\left(n_{2}-1\right), where n_{1} is the sample size of the first group and n_{2} is the sample size of the second group. For this example, n_{1}=n_{2}=17. When n_{1}=n_{2}, it is conventional to use "n" to refer to the sample size of each group. Therefore, the degrees of freedom is 16 + 16 = 32.

Once we have the degrees of freedom, we can use the t distribution calculator to find the probability. Figure 1 shows that the probability value for a two-tailed test is 0.0164. The two-tailed test is used when the null hypothesis can be rejected regardless of the direction of the effect. As shown in Figure 1, it is the probability of a t < -2.533 or a t > 2.533.
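The whole test can be reproduced in Python, assuming SciPy is available (SciPy is not part of the original text; it is used here only to look up the t distribution and to cross-check the hand computation):

```python
import math
from scipy import stats  # SciPy assumed available

# Summary statistics from Table 1
n = 17
mean_f, var_f = 5.353, 2.743
mean_m, var_m = 3.882, 2.985

# Follow the steps in the text
mse = (var_f + var_m) / 2                  # pooled variance estimate
se_diff = math.sqrt(2 * mse / n)           # standard error of the difference
t = (mean_f - mean_m) / se_diff            # t statistic, about 2.53
df = (n - 1) + (n - 1)                     # degrees of freedom = 32

p_two_tailed = 2 * stats.t.sf(abs(t), df)  # about 0.0164
p_one_tailed = p_two_tailed / 2            # about 0.0082

# SciPy can also run the test directly from summary statistics
result = stats.ttest_ind_from_stats(mean_f, math.sqrt(var_f), n,
                                    mean_m, math.sqrt(var_m), n,
                                    equal_var=True)
```

With `equal_var=True`, `ttest_ind_from_stats` pools the two variances exactly as in the hand computation, so its statistic and p-value agree with `t` and `p_two_tailed`.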

Figure 1. The two-tailed probability.

The results of a one-tailed test are shown in Figure 2. As you can see, the probability value of 0.0082 is half the value for the two-tailed test.

Figure 2. The one-tailed probability.

Source: David M. Lane. This work is in the Public Domain.