Demonstration: Confidence Intervals for a Mean

The demonstration provided here is a supplement to the previous section.


Confidence Intervals for a Mean

A confidence interval is a way of estimating the mean of an unknown distribution from a set of data drawn from that distribution. If the unknown distribution is nearly normal or the sample size is sufficiently large, the interval \bar{X} \pm t_{(c+1)/2} \frac{s}{\sqrt{n}} is a 100 \times c\% confidence interval for the mean of the unknown distribution, where \bar{X} is the sample mean, t_{(c+1)/2} is the (c+1)/2 quantile of the t-distribution with n-1 degrees of freedom, s is the sample standard deviation, and n is the sample size. If such intervals were computed from repeated random samples from the unknown distribution, the fraction of intervals that contain the mean of the distribution would approach 100 \times c\%.

This Demonstration uses a normal distribution as the "unknown" or population distribution, whose mean and variance can be adjusted using the sliders. In the image, the vertical brown line marks the mean of the "unknown" distribution, and each horizontal line (blue if it includes the true value, red if it does not) is a confidence interval computed from a different random sample from this distribution.
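The following Python sketch (not part of the original Demonstration) illustrates the same idea numerically: it computes the t-based interval \bar{X} \pm t_{(c+1)/2} \, s/\sqrt{n} and checks, by repeated sampling, that the fraction of intervals covering the true mean approaches c. The population parameters, sample size, and confidence level below are illustrative assumptions.

```python
# Minimal sketch of the t-based confidence interval and its repeated-sampling
# coverage. Assumed values (not from the source): normal population with
# mean mu = 10, standard deviation sigma = 3; sample size n = 20; level c = 0.95.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
mu, sigma = 10.0, 3.0      # parameters of the "unknown" normal distribution
n, c = 20, 0.95            # sample size and confidence level
t_crit = stats.t.ppf((c + 1) / 2, df=n - 1)   # (c+1)/2 quantile, n-1 degrees of freedom

def t_interval(sample):
    """Return the 100*c% confidence interval  x̄ ± t * s / sqrt(n)."""
    xbar = sample.mean()
    s = sample.std(ddof=1)                    # sample standard deviation
    half_width = t_crit * s / np.sqrt(n)
    return xbar - half_width, xbar + half_width

# Coverage check: the fraction of intervals containing mu should approach c.
reps, covered = 10_000, 0
for _ in range(reps):
    lo, hi = t_interval(rng.normal(mu, sigma, n))
    covered += (lo <= mu <= hi)
print(f"Empirical coverage: {covered / reps:.3f} (nominal {c})")
```

Running the script prints an empirical coverage close to 0.95, mirroring the blue-versus-red intervals shown in the Demonstration's image.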




Source: Chris Boucher and Gary McClelland, https://demonstrations.wolfram.com/ConfidenceIntervalsForAMean/
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 License.
