Topic Name Description
Course Introduction Course Syllabus
1.1.1: What is Statistics? What are Statistics?

Read this brief introduction to the field of statistics and some relevant examples of how statistics can lend credibility to arguments. Complete the practice questions in these sections.

1.1.2: Descriptive and Inferential Statistics Descriptive and Inferential Statistics

Read these sections and complete the questions at the end of each section. Here, we introduce descriptive statistics using examples and discuss the difference between descriptive and inferential statistics. We also talk about samples and populations, explain how you can identify biased samples, and define inferential statistics.

Basic Definitions and Concepts

Read Section 1 of Chapter 1 to further enhance your understanding of the elements of descriptive and inferential statistics. This section introduces some of the key concepts in statistics and has numerous exercises and examples. Complete the odd-numbered exercises before checking the answers.

1.1.3: Types of Data and Their Collection Variables and Data Collection

Read these sections and complete the questions at the end of each section. This section introduces several types of data and their distinguishing features. You will learn about independent and dependent variables and how common data can be coded and collected.

Presenting Data

This section talks about how data can be presented. Attempt the exercises and check your answers.

1.2.1: Graphical Methods for Describing Quantitative Data Graphing

Read these sections and complete the questions at the end of each section. First, we'll look at the available methods to portray distributions of quantitative variables. Then, we'll introduce the stem and leaf plot and how to capture the frequency of your data. We'll also discuss box plots, which are useful for identifying outliers and comparing distributions, and bar charts for quantitative variables. Finally, we'll talk about line graphs, which are based on bar graphs.

Three Popular Data Displays

This section elaborates on how to describe data. In particular, you will learn about the relative frequency histogram. Complete the exercises and check your answers.

1.2.2: Numerical Measures of Central Tendency and Variability Numerical Measures of Central Tendency and Variability

Read these sections and complete the questions at the end of each section. First, we will define central tendency and introduce mean, median, and mode. We will then elaborate on the median and mean and discuss their strengths and weaknesses in measuring central tendency. Finally, we'll address variability, range, interquartile range, variance, and the standard deviation.
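
These measures are straightforward to compute with software. As a quick illustration (the dataset below is invented), Python's standard `statistics` module covers all of them:

```python
import statistics

data = [2, 4, 4, 4, 5, 5, 7, 9]   # hypothetical dataset

mean = statistics.mean(data)            # sum of values divided by n
median = statistics.median(data)        # middle value of the sorted data
mode = statistics.mode(data)            # most frequent value
data_range = max(data) - min(data)      # largest value minus smallest
variance = statistics.pvariance(data)   # average squared deviation from the mean
sd = statistics.pstdev(data)            # square root of the variance

print(mean, median, mode, data_range, variance, sd)
```

Note that `pvariance` and `pstdev` treat the data as a whole population; for a sample you would use `variance` and `stdev`, which divide by n − 1 instead of n.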

Measures of Central Location

This section elaborates on mean, median, and mode at the population level and sample level. This section also contains many interesting examples of range, variance, and standard deviation. Complete the exercises and check your answers.

Mean, Median, Mode, and Variance

Watch this video series, which begins with a discussion on descriptive statistics and inferential statistics and then talks about mean, median, and mode, as well as sample variance.

1.2.3: Methods for Describing Relative Standing Percentiles

This section discusses percentiles, which are useful for describing relative standings of observations in a dataset.

1.2.4: Methods for Describing Bivariate Relationships Scatterplots and Bivariate Data

Watch this tutorial to learn how to create a scatter plot for bivariate data using two variables, x and y.

Pearson's r

This section introduces Pearson's correlation and explains what the typical values represent. It then elaborates on the properties of r, particularly that it is invariant under linear transformation. Finally, it introduces several formulas we can use to compute Pearson's correlation.
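
For concreteness, here is a minimal sketch of Pearson's r computed from its definitional formula (the data are hypothetical); it also checks the invariance property mentioned above:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation: covariance of x and y divided by the
    product of their standard deviations."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

x = [1, 2, 3, 4, 5]
y = [2, 4, 6, 8, 10]               # exact linear relation, so r ≈ 1
print(pearson_r(x, y))

shifted = [3 * v + 7 for v in x]   # positive linear transformation of x
print(pearson_r(shifted, y))       # r is unchanged
```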

2.1.1: Events, Sample Spaces, and Probability Introduction to Probability

First, we will discuss experiments where outcomes are equally likely to occur and the frequency approach to assigning probabilities. Then, we will focus on the concept of events and touch on the issue of conditional probability.

Basic Concepts of Probability

Read this section about basic concepts of probability, including sample spaces and events. This section discusses set operations using Venn diagrams, including complements, intersections, and unions. Finally, it introduces conditional probability and talks about independent events.

2.1.2: Counting Rules Permutations and Combinations

This section introduces formulas for combinations and permutations, which are helpful in computing probabilities.
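
Python's `math` module implements both counting rules directly; the numbers below are standard textbook examples:

```python
import math

# P(n, k) = n! / (n - k)!  counts ordered arrangements (permutations);
# C(n, k) = n! / (k! (n - k)!)  counts unordered selections (combinations).

print(math.perm(5, 2))   # 20 ordered pairs drawn from 5 items
print(math.comb(5, 2))   # 10 unordered pairs drawn from 5 items

# Example: the number of distinct 5-card hands from a 52-card deck.
total_hands = math.comb(52, 5)
print(total_hands)       # 2598960
```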

The Addition Rule for Probability with a Venn Diagram Example

Watch these videos, which introduce Venn diagrams in the context of playing cards and discuss the addition rule for probability.

2.2.1: Common Discrete Random Variables Random Variables and Probability Distributions

This section first defines discrete and continuous random variables. Then, it introduces the distributions for discrete random variables. It also talks about the mean and variance calculations.

Binomial Distributions

Watch these videos on binomial distributions. The first explains how to compute the mean of a binomial distribution. The next two videos introduce binomial probabilities and show how to graph them. The remaining videos elaborate on binomial distribution in the context of basketball examples.

Binomial, Poisson, and Multinomial Distributions

First, we will talk about binomial probabilities, how to compute their cumulatives, and the mean and standard deviation. Then, we will introduce the Poisson probability formula, define multinomial outcomes, and discuss how to compute probabilities by using the multinomial distribution.
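
A sketch of the binomial and Poisson formulas in plain Python (standard library only; the parameter values are arbitrary):

```python
import math

def binom_pmf(k, n, p):
    """P(X = k) for X ~ Binomial(n, p)."""
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def binom_cdf(k, n, p):
    """P(X <= k): cumulative binomial probability."""
    return sum(binom_pmf(i, n, p) for i in range(k + 1))

def poisson_pmf(k, lam):
    """P(X = k) for X ~ Poisson(lam)."""
    return lam**k * math.exp(-lam) / math.factorial(k)

n, p = 10, 0.5
print(binom_pmf(5, n, p))                   # probability of exactly 5 successes
print(binom_cdf(5, n, p))                   # probability of at most 5 successes
print(n * p, math.sqrt(n * p * (1 - p)))    # mean np and sd sqrt(np(1 - p))
print(poisson_pmf(2, 3.0))                  # Poisson(3): exactly 2 events
```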

2.2.2: Normal Distribution The Standard Normal Distribution

This section talks about the standard normal curve and how to compute certain areas underneath the curve. This section also contains numerous exercises and examples.

More on Normal Distributions

First, this section talks about the history of the normal distribution, the central limit theorem, and the relation of normal distributions to errors. Then, it discusses how to compute the area under the normal curve. It then moves on to the standard normal distribution, the area under the standard normal curve, and how to translate from a non-standard normal to the standard normal. Finally, it addresses how to compute (cumulative) binomial probabilities using normal approximations.

Introduction to the Normal Distribution

Watch this video on the normal distribution. It introduces the normal distribution and its density curve and explains how to read the areas underneath the normal curve. It also touches on the central limit behavior.

3.1.1: Continuous Random Variables Continuous Random Variables

First, this section talks about how to describe continuous distributions and compute related probabilities, including some basic facts about the normal distribution. Then, it covers how to compute probabilities related to any normal random variable and gives examples of using $z$-score transformations. Finally, it defines tail probabilities and illustrates how to find them.
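
As an illustration of the z-score transformation and a tail probability (the values mu = 100 and sigma = 15 are hypothetical), using Python's `statistics.NormalDist`:

```python
from statistics import NormalDist

# Any normal variable X ~ N(mu, sigma) can be standardized: Z = (X - mu) / sigma.
mu, sigma = 100, 15
x = 130
z = (x - mu) / sigma
print(z)                                  # 2.0

std = NormalDist()                        # standard normal: mean 0, sd 1
print(std.cdf(z))                         # P(Z <= 2), area to the left of z
print(1 - std.cdf(z))                     # upper tail probability P(Z > 2)

# The same tail probability without standardizing, straight from X's distribution:
print(1 - NormalDist(mu, sigma).cdf(x))
```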

3.1.2: Definition and Interpretation Introduction to Sampling Distributions

This section introduces sampling distribution using a concrete, discrete example, followed by a continuous example. This section also discusses sampling distributions' relationship to inferential statistics.

3.1.3: Sampling Distributions Properties Wolfram Demonstrations Project

Use the information provided on the demonstration pages and interact with the various simulations.

If you have not done so already, you will need to download and install the free Wolfram CDF Player™. If you use Chrome as your browser, you will also need to download the CDF files from the pages linked above and run them through the CDF Player on your desktop. Other browsers will let you interact with the demonstrations directly on the webpage.

3.2.1: The Sampling Distribution of Sample Mean The Sampling Distribution of a Sample Mean

First, this section discusses the mean and variance of the sampling distribution of the mean. It also shows how the central limit theorem can help approximate the corresponding sampling distributions. Then, it talks about the properties of the sampling distribution for differences between means by giving the formulas of both mean and variance for the sampling distribution. Using the central limit theorem, it also talks about how to compute the probability of a difference between means being beyond a specified value.

The Mean, Standard Deviation, and Sampling Distribution of the Sample Mean

This section gives several concrete examples of calculating the exact distributions of the sample mean. The corresponding means and standard deviations are computed for demonstration based on these distributions. Next, it discusses sampling distributions of sample means when the sample size is large. It also considers the case when the population is normal. Finally, it uses the central limit theorem for large sample approximations.
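
A small simulation makes the large-sample behavior concrete. The population here is hypothetical and exponential, chosen because it is clearly non-normal, yet the means of repeated samples still cluster around the population mean with spread close to sigma divided by the square root of n:

```python
import random
import statistics

random.seed(0)

# A skewed, non-normal population of 100,000 values with mean about 1.
population = [random.expovariate(1.0) for _ in range(100_000)]
mu = statistics.mean(population)

# Draw many samples of size n and record each sample mean.
n = 50
sample_means = [
    statistics.mean(random.sample(population, n)) for _ in range(2_000)
]

print(mu)                              # population mean, about 1.0
print(statistics.mean(sample_means))   # close to mu
print(statistics.stdev(sample_means))  # close to sigma / sqrt(n), about 0.14
```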

Sampling Distribution

Watch these videos, which discuss sampling distributions.

3.2.2: The Sampling Distribution of Pearson's r Sampling Distribution of r

Now, we'll talk about how the shape of the sampling distribution of Pearson's correlation deviates from normality and discuss how to transform $r$ into a normally distributed quantity. Then, we will discuss how to calculate the probability of obtaining an $r$ above a specified value.

3.2.3: The Sampling Distribution of the Sample Proportion Sampling Distribution of p

Here, we introduce the mean and standard deviation of the sampling distribution of $p$ and the relationship between the sampling distribution of $p$ and the normal distribution.

Standard Deviation

Watch this video on determining the standard deviation.

4.1.1: Sample Statistics and Parameters Basic Sample Statistics and Parameters

First, we'll discuss the basic concepts of sample statistics and population parameters. Then, we'll talk about degrees of freedom, the number of independent pieces of information that a point estimate is based on. Finally, we will talk about variance, which depends on the degrees of freedom.

4.1.2: Bias and Sampling Variability Characteristics of Estimators

This section discusses two important characteristics of statistics used as point estimates of parameters: bias and sampling variability. Bias refers to whether an estimator tends to overestimate or underestimate the parameter. Sampling variability refers to how much the estimate varies from sample to sample.

4.2.1: Confidence Intervals for Mean Confidence Intervals for the Mean

This section explains the need for confidence intervals and why a confidence level is not the probability that the interval contains the parameter. Then, it discusses how to compute a confidence interval on the mean when sigma is unknown and must be estimated. It also explains when to use a t distribution or a normal distribution. Next, it covers the difference between the shape of the t distribution and the normal distribution and how this difference is affected by the degrees of freedom. Finally, it explains the procedure for computing a confidence interval on the difference between means.
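
A minimal sketch of a large-sample confidence interval for a mean (the data are invented). It uses the normal critical value because the Python standard library has no t distribution; for small samples you would substitute a t critical value with n − 1 degrees of freedom:

```python
import math
from statistics import NormalDist, mean, stdev

def mean_ci(data, confidence=0.95):
    """Large-sample confidence interval for the mean: xbar ± z* · s / sqrt(n)."""
    n = len(data)
    m = mean(data)
    se = stdev(data) / math.sqrt(n)                   # estimated standard error
    z_star = NormalDist().inv_cdf((1 + confidence) / 2)
    return m - z_star * se, m + z_star * se

# Hypothetical sample of 50 measurements.
data = [4.8, 5.1, 5.3, 4.9, 5.0, 5.2, 5.1, 4.7, 5.0, 5.4] * 5
lo, hi = mean_ci(data)
print(lo, hi)
```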

Demonstration: Confidence Intervals for a Mean

The demonstration provided here is a supplement to the previous section.

t Distribution Demonstration

Read the instructions and watch the video to see how the degrees of freedom affect the difference between t and normal distributions.

Comparing Normal and Student's t-Distributions

This demonstration is a supplement to the previous materials.

4.2.2: Confidence Intervals for Correlation and Proportion Confidence Intervals for Correlation and Proportion

First, this section shows how to compute a confidence interval for Pearson's correlation. The solution uses Fisher's z transformation. Then, it explains the procedure to compute confidence intervals for population proportions where the sampling distribution needs a normal approximation.
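
Both procedures can be sketched in a few lines (the r, sample-size, and count values below are hypothetical):

```python
import math
from statistics import NormalDist

def r_confidence_interval(r, n, confidence=0.95):
    """CI for a correlation via Fisher's transformation z' = atanh(r),
    which is approximately normal with standard error 1 / sqrt(n - 3)."""
    z = math.atanh(r)
    se = 1 / math.sqrt(n - 3)
    z_star = NormalDist().inv_cdf((1 + confidence) / 2)
    # Build the interval on the z' scale, then transform back with tanh.
    return math.tanh(z - z_star * se), math.tanh(z + z_star * se)

def proportion_confidence_interval(successes, n, confidence=0.95):
    """Normal-approximation CI for a population proportion."""
    p = successes / n
    se = math.sqrt(p * (1 - p) / n)
    z_star = NormalDist().inv_cdf((1 + confidence) / 2)
    return p - z_star * se, p + z_star * se

print(r_confidence_interval(0.654, 34))        # hypothetical r from n = 34 pairs
print(proportion_confidence_interval(52, 100)) # 52 successes in 100 trials
```

Note how the interval for r is not symmetric about r itself: the Fisher transform stretches the scale near ±1, which is exactly why it is used.
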
Confidence Intervals

Watch these videos, which discuss confidence intervals.

5.1.1: Setting up Hypotheses Setting Up Hypotheses

This section discusses the logic behind hypothesis testing using concrete examples and explains how to set up null and alternative hypotheses. It explains what Type I and Type II errors are and how they can occur. Finally, it introduces one-tailed and two-tailed tests and explains which one you should use for testing purposes.

5.1.2: Interpreting Hypothesis Testing Results The Observed Significance of a Test

This section explains what the observed significance of a test is, including how to compute it and use it in the p-value approach.

Results

First, this section discusses whether rejection of the null hypothesis should be an all-or-none proposition. Then, it discusses how to interpret non-significant results; for example, it explains why the null hypothesis should not be accepted or should be accepted with caution. It also describes how a non-significant result can increase confidence that the null hypothesis is false.

Hypothesis Testing with One Sample

Read this section on the two types of errors in hypothesis testing and some examples of each.

More on Hypothesis Testing

Watch these videos on hypothesis testing.

5.1.3: Steps in Hypothesis Testing and Its Relation to Confidence Intervals Steps and Confidence Intervals in Hypothesis Testing

This section lists four key steps in hypothesis testing and explains the close relationship between confidence intervals and significance tests.

5.2.1: Testing Single Mean Testing a Single Mean

This section shows how to test the null hypothesis that the population mean is equal to some hypothesized value, using a very concrete example. In this example, all the main elements of hypothesis testing come into play.

Sample Tests for a Population Mean

This section talks about using the central limit theorem to test a population mean when the sample size is large. It also addresses how to interpret the test results in the context of the application. Then, it discusses testing a population mean when the sample size is small, outlines a five-step testing procedure, and illustrates the procedure with an example. Study the example carefully and complete the relevant exercises and applications. Finally, it talks about large sample tests for a population proportion. The critical value and p-value approaches are introduced based on a standardized test statistic.
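
A sketch of the large-sample z test for a population mean (the data and hypothesized value are invented):

```python
import math
from statistics import NormalDist

def z_test_mean(data, mu0):
    """Large-sample test of H0: mu = mu0 via Z = (xbar - mu0) / (s / sqrt(n)).

    Returns the standardized test statistic and the two-sided p-value.
    """
    n = len(data)
    xbar = sum(data) / n
    s = math.sqrt(sum((x - xbar) ** 2 for x in data) / (n - 1))  # sample sd
    z = (xbar - mu0) / (s / math.sqrt(n))
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))                 # two tails
    return z, p_value

# Hypothetical data: 40 measurements, testing H0: mu = 10 vs. HA: mu != 10.
data = [10.2, 10.5, 9.8, 10.4, 10.6, 10.1, 10.3, 9.9] * 5
z, p = z_test_mean(data, 10.0)
print(z, p)
print("reject H0 at the 0.05 level" if p < 0.05 else "fail to reject H0")
```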

5.2.2: Testing the Difference between Two Means The Difference between Two Means

This section covers how to test for differences between means from two separate groups of subjects and gives an example of opinions on animal research. The detailed testing procedure is carried out using the standard steps in hypothesis testing.

Difference of Means

Watch these videos on the difference of means.

5.3: Chi-Square Distribution Contingency Tables

Read this section, which discusses contingency tables, and answer the questions at the end of the section. While this section is optional, studying it may help you if you wish to take the Saylor Direct Credit exam for this course.

Chi-Square Distributions and Goodness of Fit

Read these sections, which discuss chi-square distributions and how to test the goodness of fit. While these sections are optional, studying them may help you if you wish to take the Saylor Direct Credit exam for this course.

More on Chi-square Distributions

Watch these videos, which discuss chi-square distributions, goodness of fit, and contingency tables.

5.4: Comparing the Proportions of Populations Comparing Population Proportions

Watch these videos, which discuss comparing population proportions. While these videos are optional, studying these topics may help you if you are interested in taking the credit-aligned exam that is linked with this course.

6.1.1: Scatter Plot of Two Variables and Regression Line Introduction to Linear Regression

This section defines simple linear regression, uses scatter plots to reveal linear patterns, and talks about prediction error. It also discusses how to compute the regression line by minimizing squared errors.

Linear Regression

Read these sections on linear regression. Linear regression, the simplest form of regression, is used to obtain a linear relationship between two variables.
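
A minimal least-squares fit, using the standard formulas b1 = Sxy / Sxx for the slope and b0 = ybar − b1 · xbar for the intercept (the data are hypothetical):

```python
def least_squares(xs, ys):
    """Slope and intercept of the least-squares line y = b0 + b1 * x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    b1 = sxy / sxx              # slope: Sxy / Sxx
    b0 = my - b1 * mx           # intercept: the line passes through (xbar, ybar)
    return b0, b1

x = [1, 2, 3, 4, 5]
y = [2.1, 3.9, 6.2, 7.8, 10.1]  # roughly y = 2x, with some noise
b0, b1 = least_squares(x, y)
print(b0, b1)
print(b0 + b1 * 6)              # predicted y at x = 6
```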

6.1.2: Correlation Coefficient Correlation

Read these sections on correlation. You will learn the interpretation and calculation of the correlation coefficient, how to test its significance, and the relation between correlation and causation.

The Linear Correlation Coefficient

Read this discussion on linear correlation. You will learn what the linear correlation coefficient is, how to compute it, and what it tells us about the relationship between two variables x and y.

6.1.3: Sums of Squares Partitioning Sums of Squares

This section discusses sums of squares, including partitioning the total sum of squares into the sum of squares predicted and the sum of squares error.
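
The partition can be verified numerically. This sketch fits a least-squares line to invented data and checks that the total sum of squares equals the sum of squares predicted plus the sum of squares error:

```python
def partition_sums_of_squares(xs, ys):
    """Split the total variation in y: SS total = SS predicted + SS error."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b1 = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
          / sum((x - mx) ** 2 for x in xs))
    b0 = my - b1 * mx
    preds = [b0 + b1 * x for x in xs]
    sst = sum((y - my) ** 2 for y in ys)                 # total sum of squares
    ssr = sum((p - my) ** 2 for p in preds)              # sum of squares predicted
    sse = sum((y - p) ** 2 for y, p in zip(ys, preds))   # sum of squares error
    return sst, ssr, sse

x = [1, 2, 3, 4, 5]
y = [2.0, 4.1, 5.9, 8.2, 9.8]      # hypothetical data
sst, ssr, sse = partition_sums_of_squares(x, y)
print(sst, ssr, sse)
print(abs(sst - (ssr + sse)) < 1e-9)   # the partition is exact
```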

Regression Lines

Watch these videos, which discuss the regression line.

6.2.1: Standard Errors of the Least Squares Estimates Standard Error of the Estimate

This section discusses how to compute the standard error of the estimate based on errors of prediction as well as how to compute the standard error of the estimate based on a sample.

6.2.2: Statistical Inference for the Slope and Correlation Inferential Statistics for b and r

This section starts with assumptions on the errors that are necessary for statistical inference. Then, it gives an example of a significance test for the slope. Finally, it talks about constructing confidence intervals for the slope and closes with a significance test for the correlation.

Statistical Inference about Slope

This section further details two types of inferences on the slope parameter, considering both confidence intervals and hypothesis testing.

6.2.3: Influential Observations Influential Observations

This section discusses the notion of influence and describes what makes a point influential. It introduces the concepts of leverage and distance, which are useful to detect influential observations.

A Complete Example

This section explains linear regression, from presenting the data to using scatter plots to identify the linear pattern. It then fits a linear model using least squares estimation and addresses statistical inferences on the correlation coefficient and slope parameter.

6.3: ANOVA ANOVA

Watch these videos, which discuss each of the steps in ANOVA. While these videos are optional, studying ANOVA may help you if you are interested in taking the credit-aligned exam that is linked with this course.

More on ANOVA

Read this chapter and complete the questions at the end of each section. While these sections are optional, studying ANOVA may help you if you are interested in taking the Saylor Direct Credit exam for this course.

Study Guide MA121 Study Guide

Course Feedback Survey Course Feedback Survey