A Complete Example
Site: Saylor Academy
Course: MA121: Introduction to Statistics
Book: A Complete Example
Description
This section explains linear regression, from presenting the data and using a scatter plot to identify the linear pattern, to fitting a linear model by least squares estimation and drawing statistical inferences about the correlation coefficient and the slope parameter.
Learning Objective
- To see a complete linear correlation and regression analysis, in a practical setting, as a cohesive whole.
In the preceding sections numerous concepts were introduced and illustrated, but the analysis was broken into disjoint pieces by sections. In this section we will go through a complete example of the use of correlation and regression analysis of data from start to finish, touching on all the topics of this chapter in sequence.
In general, educators are convinced that, all other factors being equal, class attendance has a significant bearing on course performance. To investigate the relationship between attendance and performance, an education researcher selects for study a multiple-section introductory statistics course at a large university. Instructors in the course agree to keep an accurate record of attendance throughout one semester. At the end of the semester 26 students are selected at random. For each student in the sample two measurements are taken: \(x\), the number of days the student was absent, and \(y\), the student's score on the common final exam in the course. The data are summarized in Table 10.4 "Absence and Score Data".
Table 10.4 Absence and Score Data
Absences \(x\) | Score \(y\) | Absences \(x\) | Score \(y\) |
---|---|---|---|
2 | 76 | 4 | 41 |
7 | 29 | 5 | 63 |
2 | 96 | 4 | 88 |
7 | 63 | 0 | 98 |
2 | 79 | 1 | 99 |
7 | 71 | 0 | 89 |
0 | 88 | 1 | 96 |
0 | 92 | 3 | 90 |
6 | 55 | 1 | 90 |
6 | 70 | 3 | 68 |
2 | 80 | 1 | 84 |
2 | 75 | 3 | 80 |
1 | 63 | 1 | 78 |
A scatter plot of the data is given in Figure 10.13 "Plot of the Absence and Exam Score Pairs". There is a downward trend in the plot, which indicates that on average students with more absences tend to do worse on the final examination.
Figure 10.13 Plot of the Absence and Exam Score Pairs

The trend observed in Figure 10.13 "Plot of the Absence and Exam Score Pairs" as well as the fairly constant width of the apparent band of points in the plot makes it reasonable to assume a relationship between \(x\) and \(y\) of the form
\(y=β_1x+β_0+ε\)
where \(β_1\) and \(β_0\) are unknown parameters and \(ε\) is a normal random variable with mean zero and unknown standard deviation \(σ\). Note carefully that this model is being proposed for the population of all students taking this course, not just those taking it this semester, and certainly not just those in the sample. The numbers \(β_1\), \(β_0\), and \(σ\) are parameters relating to this large population.
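To make the model concrete, the following short Python sketch simulates draws from it. The parameter values used (slope −5, intercept 90, \(σ=12\)) are illustrative stand-ins chosen by us, since the true \(β_1\), \(β_0\), and \(σ\) are unknown population parameters:

```python
import random

def simulate_score(absences, beta1=-5.0, beta0=90.0, sigma=12.0):
    """One draw from the model y = beta1*x + beta0 + eps, with eps ~ N(0, sigma).

    The parameter values here are illustrative only; the true population
    values of beta1, beta0, and sigma are unknown.
    """
    return beta1 * absences + beta0 + random.gauss(0.0, sigma)

random.seed(7)  # make the illustration reproducible
sample = [(x, simulate_score(x)) for x in (0, 2, 4, 6)]
print(simulate_score(3, sigma=0.0))  # with the noise turned off, just the mean response: 75.0
```

Setting `sigma=0.0` recovers the deterministic population regression line, which is why the last call returns exactly \(−5(3)+90=75\).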
First we perform preliminary computations that will be needed later. The data are processed in Table 10.5 "Processed Absence and Score Data".
Table 10.5 Processed Absence and Score Data
\(x\) | \(y\) | \(x^2\) | \(xy\) | \(y^2\) | \(x\) | \(y\) | \(x^2\) | \(xy\) | \(y^2\) |
---|---|---|---|---|---|---|---|---|---|
2 | 76 | 4 | 152 | 5776 | 4 | 41 | 16 | 164 | 1681 |
7 | 29 | 49 | 203 | 841 | 5 | 63 | 25 | 315 | 3969 |
2 | 96 | 4 | 192 | 9216 | 4 | 88 | 16 | 352 | 7744 |
7 | 63 | 49 | 441 | 3969 | 0 | 98 | 0 | 0 | 9604 |
2 | 79 | 4 | 158 | 6241 | 1 | 99 | 1 | 99 | 9801 |
7 | 71 | 49 | 497 | 5041 | 0 | 89 | 0 | 0 | 7921 |
0 | 88 | 0 | 0 | 7744 | 1 | 96 | 1 | 96 | 9216 |
0 | 92 | 0 | 0 | 8464 | 3 | 90 | 9 | 270 | 8100 |
6 | 55 | 36 | 330 | 3025 | 1 | 90 | 1 | 90 | 8100 |
6 | 70 | 36 | 420 | 4900 | 3 | 68 | 9 | 204 | 4624 |
2 | 80 | 4 | 160 | 6400 | 1 | 84 | 1 | 84 | 7056 |
2 | 75 | 4 | 150 | 5625 | 3 | 80 | 9 | 240 | 6400 |
1 | 63 | 1 | 63 | 3969 | 1 | 78 | 1 | 78 | 6084 |
Adding up the numbers in each column in Table 10.5 "Processed Absence and Score Data" gives

\(Σx=71,\quad Σy=2001,\quad Σx^2=329,\quad Σxy=4758,\quad Σy^2=161511\)

so that

\(SS_{xx}=Σx^2-\frac{1}{n}(Σx)^2=329-\frac{1}{26}(71)^2\approx135.1154\)

\(SS_{xy}=Σxy-\frac{1}{n}(Σx)(Σy)=4758-\frac{1}{26}(71)(2001)\approx-706.2692\)

\(SS_{yy}=Σy^2-\frac{1}{n}(Σy)^2=161511-\frac{1}{26}(2001)^2\approx7510.9615\)

and

\(\overline x=\frac{Σx}{n}=\frac{71}{26}\approx2.7308,\qquad \overline y=\frac{Σy}{n}=\frac{2001}{26}\approx76.9615\)
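The column totals and the \(SS\) quantities can be verified directly from the raw data with a few lines of Python (a sketch, not part of the original text; variable names are our own):

```python
# Raw data from Table 10.4: days absent and final exam score for 26 students
absences = [2, 7, 2, 7, 2, 7, 0, 0, 6, 6, 2, 2, 1,
            4, 5, 4, 0, 1, 0, 1, 3, 1, 3, 1, 3, 1]
scores   = [76, 29, 96, 63, 79, 71, 88, 92, 55, 70, 80, 75, 63,
            41, 63, 88, 98, 99, 89, 96, 90, 90, 68, 84, 80, 78]

n = len(absences)                                      # 26 students
sum_x  = sum(absences)                                 # Σx
sum_y  = sum(scores)                                   # Σy
sum_x2 = sum(x * x for x in absences)                  # Σx²
sum_xy = sum(x * y for x, y in zip(absences, scores))  # Σxy
sum_y2 = sum(y * y for y in scores)                    # Σy²

ss_xx = sum_x2 - sum_x ** 2 / n
ss_xy = sum_xy - sum_x * sum_y / n
ss_yy = sum_y2 - sum_y ** 2 / n

print(sum_x, sum_y, sum_x2, sum_xy, sum_y2)            # 71 2001 329 4758 161511
print(round(ss_xx, 4), round(ss_xy, 4), round(ss_yy, 4))  # 135.1154 -706.2692 7510.9615
```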
The least squares estimates of the slope and intercept are

\(\hat β_1=\frac{SS_{xy}}{SS_{xx}}=\frac{-706.2692}{135.1154}\approx-5.2272\qquad\text{and}\qquad \hat β_0=\overline y-\hat β_1\overline x\approx76.9615-(-5.2272)(2.7308)\approx91.2359\)

Rounding these numbers to two decimal places, the least squares regression line for these data is

\(\hat y=-5.23x+91.24\)
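The estimates can be checked from the summary quantities alone (a Python sketch; the sums are computed exactly rather than from their rounded values):

```python
# Column totals from Table 10.5: Σx, Σy, Σx², Σxy for n = 26 students
n = 26
ss_xx = 329 - 71 ** 2 / n           # SS_xx
ss_xy = 4758 - 71 * 2001 / n        # SS_xy
x_bar, y_bar = 71 / n, 2001 / n     # sample means

beta1_hat = ss_xy / ss_xx                 # least squares slope
beta0_hat = y_bar - beta1_hat * x_bar     # least squares intercept
print(round(beta1_hat, 2), round(beta0_hat, 2))  # -5.23 91.24
```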
The goodness of fit of this line to the scatter plot, as measured by the sum of its squared errors, is

\(SSE=SS_{yy}-\hat β_1SS_{xy}\approx7510.9615-(-5.2272)(-706.2692)\approx3819.18\)

where the unrounded value of \(\hat β_1\) is used in the product.
This number is not particularly informative in itself, but we use it to compute the important statistic

\(s_ε=\sqrt{\frac{SSE}{n-2}}=\sqrt{\frac{3819.18}{24}}\approx12.61\)
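Both quantities can be checked in a few lines, carrying full precision and rounding only for display (a sketch, not part of the original text):

```python
# Summary quantities for the absence/score data (n = 26)
n = 26
ss_xx = 329 - 71 ** 2 / n
ss_xy = 4758 - 71 * 2001 / n
ss_yy = 161511 - 2001 ** 2 / n
beta1_hat = ss_xy / ss_xx

sse = ss_yy - beta1_hat * ss_xy       # sum of squared errors
s_eps = (sse / (n - 2)) ** 0.5        # estimate of the error standard deviation
print(round(sse, 2), round(s_eps, 2))  # 3819.18 12.61
```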
The size and sign of the slope \(\hat β_1=−5.23\) indicate that, for every class missed, students tend to score about 5.23 fewer points on the final exam on average. Similarly, for every two classes missed students tend to score on average \(2×5.23=10.46\) fewer points on the final exam, or about a letter grade lower.
Since 0 is in the range of \(x\)-values in the data set, the \(y\)-intercept also has meaning in this problem. It is an estimate of the average score on the final exam of all students who have perfect attendance. The predicted average score for such students is \(\hat β_0=91.24\).
Before we use the regression equation further, or perform other analyses, it would be a good idea to examine the utility of the linear regression model. We can do this in two ways: 1) by computing the correlation coefficient \(r\) to see how strongly the number of absences \(x\) and the score \(y\) on the final exam are correlated, and 2) by testing the null hypothesis \(H_0: β_1=0\) (the slope of the population regression line is zero, so \(x\) is not a good predictor of \(y\)) against the natural alternative \(H_a:β_1 < 0\) (the slope of the population regression line is negative, so final exam scores \(y\) go down as absences \(x\) go up).
The correlation coefficient \(r\) is

\(r=\frac{SS_{xy}}{\sqrt{SS_{xx}SS_{yy}}}=\frac{-706.2692}{\sqrt{(135.1154)(7510.9615)}}\approx-0.701\)

This indicates a moderately strong negative linear correlation between absences and exam score; since \(r^2\approx0.49\), about 49% of the variability in final exam scores is accounted for by the number of absences.
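The value of \(r\) can be verified numerically (a sketch, not part of the original text):

```python
# Summary quantities for the absence/score data (n = 26)
n = 26
ss_xx = 329 - 71 ** 2 / n
ss_xy = 4758 - 71 * 2001 / n
ss_yy = 161511 - 2001 ** 2 / n

r = ss_xy / (ss_xx * ss_yy) ** 0.5    # linear correlation coefficient
print(round(r, 3), round(r * r, 2))   # -0.701 0.49
```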
Turning to the test of hypotheses, let us test at the commonly used 5% level of significance. The test is

\(H_0:β_1=0\quad\text{vs.}\quad H_a:β_1<0\quad @\;α=0.05\)
From Figure 12.3 "Critical Values of \(t\)", with \(df=26−2=24\) degrees of freedom \(t_{0.05}=1.711\), so the rejection region is \((−∞,−1.711]\). The value of the standardized test statistic is

\(T=\frac{\hat β_1}{s_ε∕\sqrt{SS_{xx}}}=\frac{-5.2272}{12.61∕\sqrt{135.1154}}\approx-4.82\)
which falls in the rejection region. We reject \(H_0\) in favor of \(H_a\). The data provide sufficient evidence, at the 5% level of significance, to conclude that \(β_1\) is negative, meaning that as the number of absences increases average score on the final exam decreases.
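The test decision can be confirmed numerically (a sketch; the critical value 1.711 is \(t_{0.05}\) with 24 degrees of freedom, as given in the text):

```python
# Summary quantities for the absence/score data (n = 26)
n = 26
ss_xx = 329 - 71 ** 2 / n
ss_xy = 4758 - 71 * 2001 / n
ss_yy = 161511 - 2001 ** 2 / n
beta1_hat = ss_xy / ss_xx
s_eps = ((ss_yy - beta1_hat * ss_xy) / (n - 2)) ** 0.5

t_stat = beta1_hat / (s_eps / ss_xx ** 0.5)   # standardized test statistic
print(round(t_stat, 2), t_stat <= -1.711)     # -4.82 True  (falls in the rejection region)
```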
As already noted, the value \(\hat β_1=−5.23\) gives a point estimate of how much one additional absence is reflected in the average score on the final exam. For each additional absence the average drops by about 5.23 points. We can widen this point estimate to a confidence interval for \(β_1\). At the 95% confidence level, from Figure 12.3 "Critical Values of \(t\)" with \(df=26−2=24\) degrees of freedom, \(t_{α∕2}=t_{0.025}=2.064\). The 95% confidence interval for \(β_1\) based on our sample data is

\(\hat β_1±t_{α∕2}\frac{s_ε}{\sqrt{SS_{xx}}}=-5.23±2.064\frac{12.61}{\sqrt{135.1154}}\approx-5.23±2.24\)

or \((−7.47,−2.99)\). We are 95% confident that, among all students who ever take this course, for each additional class missed the average score on the final exam goes down by between 2.99 and 7.47 points.
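The interval can be reproduced in a few lines, carrying full precision through the computation and rounding only at the end (a sketch; the critical value 2.064 is \(t_{0.025}\) with 24 degrees of freedom):

```python
# Summary quantities for the absence/score data (n = 26)
n = 26
ss_xx = 329 - 71 ** 2 / n
ss_xy = 4758 - 71 * 2001 / n
ss_yy = 161511 - 2001 ** 2 / n
beta1_hat = ss_xy / ss_xx
s_eps = ((ss_yy - beta1_hat * ss_xy) / (n - 2)) ** 0.5

t_crit = 2.064                                 # t_{0.025} with df = 24
margin = t_crit * s_eps / ss_xx ** 0.5         # half-width of the interval
lo, hi = beta1_hat - margin, beta1_hat + margin
print(round(lo, 2), round(hi, 2))
```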
This text was adapted by Saylor Academy under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 License without attribution as requested by the work's original creator or licensor.
Exercises
The exercises in this section are unrelated to those in previous sections.
1. The data give the amount \(x\) of silicofluoride in the water (mg/L) and the amount \(y\) of lead in the bloodstream (μg/dL) of ten children in various communities with and without municipal water. Perform a complete analysis of the data, in analogy with the discussion in this section (that is, make a scatter plot, do preliminary computations, find the least squares regression line, find \(SSE\), \(s_ε\), and \(r\), and so on). In the hypothesis test use as the alternative hypothesis \(β_1 > 0\), and test at the 5% level of significance. Use confidence level 95% for the confidence interval for \(β_1\). Construct 95% confidence and prediction intervals at \(x_p=2\) at the end.
\(\begin{array}{c|ccccc}
x & 0.0 & 0.0 & 1.1 & 1.4 & 1.6 \\
\hline y & 0.3 & 0.1 & 4.7 & 3.2 & 5.1 \\
x & 1.7 & 2.0 & 2.0 & 2.2 & 2.2 \\
\hline y & 7.0 & 5.0 & 6.1 & 8.6 & 9.5
\end{array}\)
Large Data Set Exercises
3. Large Data Sets 3 and 3A list the shoe sizes and heights of 174 customers entering a shoe store. The gender of the customer is not indicated in Large Data Set 3. However, men's and women's shoes are not measured on the same scale; for example, a size 8 shoe for men is not the same size as a size 8 shoe for women. Thus it would not be meaningful to apply regression analysis to Large Data Set 3. Nevertheless, compute the scatter diagrams, with shoe size as the independent variable \((x)\) and height as the dependent variable \((y)\), for (i) just the data on men, (ii) just the data on women, and (iii) the full mixed data set with both men and women. Does the third, invalid scatter diagram look markedly different from the other two?
http://www.gone.2012books.lardbucket.org/sites/all/files/data3.xls
http://www.gone.2012books.lardbucket.org/sites/all/files/data3A.xls
5. Separate out from Large Data Set 3A just the data on women and do a complete analysis, with shoe size as the independent variable \((x)\) and height as the dependent variable \((y)\). Use \(α=0.05\) and \(x_p=10\) whenever appropriate.
http://www.gone.2012books.lardbucket.org/sites/all/files/data3A.xls
Answers
1.
\(Σx=14.2\), \(Σy=49.6\), \(Σxy=91.73\), \(Σx^2=26.3\), \(Σy^2=333.86\).
\(SS_{xx}=6.136\), \(SS_{xy}=21.298\), \(SS_{yy}=87.844\).
\( \overline x =1.42\), \( \overline y =4.96\).
\(\hat β_1=3.47\), \(\hat β_0=0.03\).
\(SSE=13.92\).
\(s_ε=1.32\).
\(r = 0.9174\), \(r^2 = 0.8416\).
\(df=8\), \(T = 6.518\).
The 95% confidence interval for \(β_1\) is: \((2.24,4.70)\).
At \(x_p=2\), the 95% confidence interval for \(E(y)\) is \((5.77,8.17)\).
At \(x_p=2\), the 95% prediction interval for \(y\) is \((3.73,10.21)\).
3. The positively correlated trend seems less pronounced than that in each of the previous plots.
5. The regression line: \(\hat y=3.3426x+138.7692\). Coefficient of correlation: \(r = 0.9431\). Coefficient of determination: \(r^2 = 0.8894\). \(SSE=283.2473\). \(s_ε=1.9305\). A 95% confidence interval for \(β_1\): \((3.0733,3.6120)\). Test statistic for \(H_0:β_1=0\): \(T = 24.7209\). At \(x_p=10\), \(\hat y=172.1956\); a 95% confidence interval for the mean value of \(y\) is \((171.5577,172.8335)\); and a 95% prediction interval for an individual value of \(y\) is \((168.2974,176.0938)\).
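The numerical answers for Exercise 1 above can be reproduced with a short script (a sketch, not part of the original text; the critical value 2.306 is \(t_{0.025}\) with 8 degrees of freedom):

```python
# Exercise 1 data: silicofluoride in water (mg/L) and blood lead (µg/dL)
x = [0.0, 0.0, 1.1, 1.4, 1.6, 1.7, 2.0, 2.0, 2.2, 2.2]
y = [0.3, 0.1, 4.7, 3.2, 5.1, 7.0, 5.0, 6.1, 8.6, 9.5]
n = len(x)

sum_x, sum_y = sum(x), sum(y)
sum_xy = sum(a * b for a, b in zip(x, y))
sum_x2 = sum(a * a for a in x)
sum_y2 = sum(b * b for b in y)

ss_xx = sum_x2 - sum_x ** 2 / n
ss_xy = sum_xy - sum_x * sum_y / n
ss_yy = sum_y2 - sum_y ** 2 / n

beta1 = ss_xy / ss_xx                      # least squares slope
beta0 = sum_y / n - beta1 * sum_x / n      # least squares intercept
sse = ss_yy - beta1 * ss_xy                # sum of squared errors
s_eps = (sse / (n - 2)) ** 0.5             # error standard deviation estimate
r = ss_xy / (ss_xx * ss_yy) ** 0.5         # correlation coefficient
t_stat = beta1 / (s_eps / ss_xx ** 0.5)    # test statistic for H0: beta1 = 0

t_crit = 2.306                             # t_{0.025} with df = 8
margin = t_crit * s_eps / ss_xx ** 0.5     # half-width of the CI for beta1

print(round(beta1, 2), round(beta0, 2))    # 3.47 0.03
print(round(sse, 2), round(s_eps, 2))      # 13.92 1.32
print(round(r, 4), round(t_stat, 3))       # 0.9174 6.518
print(round(beta1 - margin, 2), round(beta1 + margin, 2))  # 2.24 4.7
```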