Hypothesis Testing with One Sample

Description

Read this section on the two types of errors in hypothesis testing and some examples of each.

Introduction

LEARNING OBJECTIVES

Distinguish between type I and type II errors and discuss the consequences of each.


KEY TAKEAWAYS

Key Points
  • A type I error occurs when the null hypothesis (H₀) is true but is rejected.
  • The rate of the type I error is called the size of the test and is denoted by the Greek letter α (alpha).
  • A type II error occurs when the null hypothesis is false but erroneously fails to be rejected.
  • The rate of the type II error is denoted by the Greek letter β (beta) and is related to the power of a test (which equals 1 − β).

Key Terms
  • type I error: Rejecting the null hypothesis when the null hypothesis is true.
  • type II error: Failing to reject the null hypothesis when the null hypothesis is false.

The notion of statistical error is an integral part of hypothesis testing. The test requires an unambiguous statement of a null hypothesis, which usually corresponds to a default "state of nature" - for example, "this person is healthy," "this accused is not guilty," or "this product is not broken." An alternative hypothesis is the negation of the null hypothesis (for example, "this person is not healthy," "this accused is guilty," or "this product is broken"). The result of the test may be negative relative to the null hypothesis (not healthy, guilty, broken) or positive (healthy, not guilty, not broken).

If the result of the test corresponds with reality, then a correct decision has been made. However, if the result of the test does not correspond with reality, then an error has occurred. Due to the statistical nature of a test, the result is never, except in very rare cases, free of error. The two types of error are distinguished as type I error and type II error. Which error we call type I or type II depends directly on the null hypothesis: negating the null hypothesis causes type I and type II errors to switch roles.
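
The four possible outcomes of a test can be summarized as follows:

                             H₀ is true              H₀ is false
  Reject H₀                  type I error (α)        correct decision (power = 1 − β)
  Fail to reject H₀          correct decision        type II error (β)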



Source: Boundless, https://courses.lumenlearning.com/boundless-statistics/chapter/hypothesis-testing-one-sample/
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 License.

Type I Error

A type I error occurs when the null hypothesis (H₀) is true but is rejected. It is asserting something that is absent, a false hit. A type I error may be compared with a so-called false positive (a result that indicates that a given condition is present when it actually is not present) in tests where a single condition is tested for. A type I error can also be said to occur when we believe a falsehood. In terms of folk tales, an investigator may be "crying wolf" without a wolf in sight (raising a false alarm). H₀: no wolf.

The rate of the type I error is called the size of the test and denoted by the Greek letter α (alpha). It usually equals the significance level of a test. In the case of a simple null hypothesis, α is the probability of a type I error. If the null hypothesis is composite, α is the maximum of the possible probabilities of a type I error.
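
As a concrete illustration, the following is a minimal simulation sketch in Python (the use of numpy and scipy, the sample size n = 30, and the significance level of 0.05 are illustrative assumptions, not from the source): data are repeatedly generated under a true null hypothesis (mean 0, known standard deviation 1), and the fraction of runs in which a two-sided z-test rejects H₀ estimates α.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    alpha = 0.05                 # chosen significance level
    n, trials = 30, 100_000
    rejections = 0
    for _ in range(trials):
        # Data generated under H0: the true mean really is 0 (sd known to be 1).
        sample = rng.normal(loc=0.0, scale=1.0, size=n)
        z = sample.mean() / (1.0 / np.sqrt(n))   # z-statistic under H0
        p = 2 * stats.norm.sf(abs(z))            # two-sided p-value
        if p < alpha:
            rejections += 1                      # rejecting a true H0: a type I error

    print(f"Estimated type I error rate: {rejections / trials:.4f}")
    # The estimate should land close to alpha = 0.05.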

Type II Error

A type II error occurs when the null hypothesis is false but erroneously fails to be rejected. It is failing to assert what is present, a miss. A type II error may be compared with a so-called false negative (where an actual "hit" was disregarded by the test and seen as a "miss") in a test checking for a single condition with a definitive result of true or false. A type II error is committed when we fail to believe a truth. In terms of folk tales, an investigator may fail to see the wolf ("failing to raise an alarm"). Again, H₀: no wolf.

The rate of the type II error is denoted by the Greek letter β (beta) and is related to the power of a test (which equals 1 − β).
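
A minimal companion sketch (again with illustrative numbers: true mean 0.5, known standard deviation 1, n = 30) estimates β and the power by generating data under a false null hypothesis and counting how often the test fails to reject it.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    alpha, n, trials = 0.05, 30, 100_000
    true_mean = 0.5              # the null hypothesis (mean = 0) is false here
    misses = 0
    for _ in range(trials):
        sample = rng.normal(loc=true_mean, scale=1.0, size=n)
        z = sample.mean() / (1.0 / np.sqrt(n))   # test statistic computed under H0
        p = 2 * stats.norm.sf(abs(z))            # two-sided p-value
        if p >= alpha:
            misses += 1                          # failing to reject a false H0: a type II error

    beta = misses / trials
    print(f"Estimated beta: {beta:.4f}, power: {1 - beta:.4f}")
    # For these numbers beta should come out near 0.22, so power is near 0.78.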

False Negative Error

A false negative error occurs when a test result indicates that a condition is absent when it is actually present. A common example is a guilty prisoner freed from jail. The condition "Is the prisoner guilty?" actually has a positive result (yes, he is guilty), but the test failed to detect this and wrongly decided the prisoner was not guilty.

A false negative error is a type II error occurring in a test where a single condition is checked for and the result can be either positive or negative.

Consequences of Type I and Type II Errors

Both types of errors are problems for individuals, corporations, and data analysis. A false positive (with a null hypothesis of health) in medicine causes unnecessary worry or treatment, while a false negative gives the patient the dangerous illusion of good health, and the patient might not get an available treatment. A false positive in manufacturing quality control (with a null hypothesis of a product being well made) discards a product that is actually well made, while a false negative stamps a broken product as operational. A false positive (with a null hypothesis of no effect) in scientific research suggests an effect that is not actually there, while a false negative fails to detect an effect that is there.

Based on the real-life consequences of an error, one type may be more serious than the other. For example, NASA engineers would rather waste some money and throw out an electronic circuit that is really fine (null hypothesis: not broken; reality: not broken; test result: broken; action: throw out; error: type I, false positive) than use one on a spacecraft that is actually broken. On the other hand, criminal courts set a high bar for proof and procedure and sometimes acquit someone who is guilty (null hypothesis: innocent; reality: guilty; test result: not guilty; action: acquit; error: type II, false negative) rather than convict someone who is innocent.

Minimizing errors of decision is not a simple issue. For any given sample size, the effort to reduce one type of error generally results in increasing the other type of error. The only way to minimize both types of error, without just improving the test, is to increase the sample size, and this may not be feasible. An example of an acceptable type I error is discussed below.

Type I Error: NASA engineers would rather waste some money and throw out an electronic circuit that is really fine than use one on a spacecraft that is actually broken. This is an example of an acceptable type I error.
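
To make the trade-off between the two error rates concrete, here is a small sketch (the effect size of 0.5 standard deviations and the particular sample sizes are hypothetical choices) that computes the approximate type II error rate of a two-sided one-sample z-test: for a fixed sample size, lowering α raises β, while a larger sample lowers β at the same α.

    import numpy as np
    from scipy import stats

    def beta_z_test(alpha, n, effect=0.5):
        # Approximate type II error rate of a two-sided one-sample z-test
        # with known sd = 1 when the true mean is shifted by `effect`.
        z_crit = stats.norm.ppf(1 - alpha / 2)   # two-sided critical value
        shift = effect * np.sqrt(n)              # standardized true effect
        power = stats.norm.sf(z_crit - shift) + stats.norm.cdf(-z_crit - shift)
        return 1.0 - power

    # For a fixed sample size, a stricter alpha makes beta larger...
    for alpha in (0.10, 0.05, 0.01):
        print(f"n=30,  alpha={alpha:.2f} -> beta={beta_z_test(alpha, 30):.3f}")

    # ...while a larger sample shrinks beta at the same alpha.
    for n in (30, 60, 120):
        print(f"alpha=0.05, n={n:3d} -> beta={beta_z_test(0.05, n):.3f}")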