## BUS204 Study Guide

### 2a. Identify values of and differentiate between permutations and combinations

• What is the difference between a combination and a permutation?

A combination counts the number of ways that $x$ objects can be selected from a larger group of $n$ objects when the order of selection does not matter.

A permutation is the same count, except that order matters. For example: if we are choosing $x=3$ letters from the first five ($n=5$) letters of the alphabet, there are 10 combinations and 60 permutations, since within each of the 10 combinations, the letters can be arranged in $3! = 6$ different ways: ABC, ACB, BAC, BCA, CAB, CBA. All six of these are distinct permutations, but they represent the same combination.

Order is not the only thing that can matter. If the selected members are assigned unique roles, such as 3 people being elected President, Vice President, and Treasurer from a club of 20 people, we are looking at a permutation: for any group of 3 members, it matters which title each person holds, just as it would matter what order they stood in if they formed a line.
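These counts can be verified with Python's standard library, which provides `math.comb` and `math.perm` (Python 3.8+); a quick sketch of the letters example:

```python
from math import comb, perm, factorial

# Choosing x = 3 letters from the first n = 5 letters of the alphabet
n, x = 5, 3

combinations = comb(n, x)   # order does not matter: n! / (x! * (n - x)!)
permutations = perm(n, x)   # order matters:         n! / (n - x)!

print(combinations)   # 10
print(permutations)   # 60

# Each combination of 3 letters can be arranged in 3! = 6 orders,
# which is why there are 6 times as many permutations as combinations
print(permutations == combinations * factorial(x))   # True
```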

For more outside of what we've already covered, see Permutations and Combinations.

### 2b. Explain and apply the different methods for determining probability: equally likely outcomes, frequency theory, and subjective theory

• Define probability.
• Describe the method of equally likely outcomes and how it can be used to find probabilities.

An outcome is a single possible result for an experiment. An event is made up of several outcomes. For example, {1,2,3,4,5,6} are the possible outcomes for a die roll. The event "Roll greater than a 4" is made of the outcomes {5,6}.

The set of possible outcomes for a probability experiment is called the sample space. A set is made of elements.

The fundamental definition of probability is the number of possible "successful" events or outcomes, divided by the total number of possible events or outcomes. So the probability of rolling a 5 on a 6-sided die is 1/6, because the "success" is rolling a 5 (one possible success) out of 6 possible outcomes. We can use this principle as long as each outcome is equally likely.

The intersection of two (or more) sets is the set whose elements appear in both (or all) sets. The union of two (or more) sets is the set whose elements appear in at least one of those sets. Example: if $A=\{1,2,3,4\}$ and $B=\{2,3,4,6,8\}$, then the union $A \cup B$ is $\{1,2,3,4,6,8\}$ (we do not repeat the shared elements 2, 3, and 4), and the intersection $A \cap B$ is $\{2,3,4\}$. The empty set $\emptyset$ has no elements. If $E=\{1,2\}$ and $F=\{3,4\}$, then $E \cap F = \emptyset$.
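Python's built-in `set` type models these operations directly; a small sketch using the sets from the example (note that Python displays the empty set as `set()`):

```python
# Sets from the example above
A = {1, 2, 3, 4}
B = {2, 3, 4, 6, 8}

print(A | B)   # union: elements in at least one set -> {1, 2, 3, 4, 6, 8}
print(A & B)   # intersection: elements in both sets -> {2, 3, 4}

# Disjoint sets have an empty intersection
E = {1, 2}
F = {3, 4}
print(E & F)            # set() -- the empty set
print(E.isdisjoint(F))  # True
```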

To review, see section 3.1 of the textbook.

### 2c. Define and apply the axioms of probability theory

• What is a compound event? How do unions and intersections of events relate to unions and intersections of sets?
• What is the difference between independent and dependent events?
• What is the difference between mutually exclusive and non-mutually exclusive events?
• Why are there two different formulas for $P(A\cap B)$?
• Why are there two different formulas for $P(A\cup B)$?
• Explain why for independent events $A$ and $B$, $P(A | B) = P(A)$.

Compound events happen when two or more events are connected by "AND" (intersection) or "OR" (union). $P(A\cap B)$ is the probability of both events $A$ and $B$ occurring. $P(A\cup B)$ is the probability of either $A$ OR $B$ (or both) occurring. The events can be represented as circles in a Venn diagram, and the outcomes as data points within those events. Note that a circle can also be used to represent a set of data points. $P(A)$, where event $A$ includes outcomes $A_1$, $A_2$, and $A_3$ is the same thing as finding the proportion of the number of outcomes in set $A$ divided by the number of all outcomes in the sample space.

Two events are independent if the occurrence of one does not affect the probability of the other. Examples of independent events are $A=$"Roll a 5 on a die" and $B=$"Flip heads on a coin"; the two have no causal relationship. Examples of dependent events are $A=$"The temperature is below freezing" and $B=$"It snows". In this case, event $B$ is much more likely if $A$ occurs than if it does not.

Mutually exclusive events (also called disjoint events) are events that cannot both occur. Example: $A=$"Roll greater than a 4 on a six-sided die"; $B=$"Roll less than 3 on a six-sided die". These are mutually exclusive, because they cannot both happen. Symbolically, this is described as $P(A \cap B)=0$.

The notation $P(A | B)$ is pronounced "$A$ given $B$". It means that we are looking for the probability that $A$ occurs given that $B$ occurs. This is called conditional probability.

The addition rule is used to find the probability of an "or" compound event.

• $P(A \cup B)=P(A)+P(B)-P(A\cap B)$
• If $A$ and $B$ are mutually exclusive, then the last term is 0 and the formula reduces to
$P(A \cup B)=P(A)+P(B)$

The multiplication rule is used to find "and" probabilities:

• $P(A \cap B)=P(A)\times P(B|A)$ is the general multiplication rule, and it works for all events.
• $P(A \cap B)=P(A)\times P(B)$ is the special multiplication rule, and it works for independent events. Note that if the events are independent, $P(B|A)=P(B)$.
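Both rules can be checked against equally likely outcomes. Here is a minimal sketch using one roll of a fair die, with illustrative events $A=$"roll greater than 4" and $B=$"roll an even number" (these particular events are chosen for the example, not taken from the text):

```python
from fractions import Fraction

# Sample space for one roll of a fair six-sided die
sample_space = {1, 2, 3, 4, 5, 6}
A = {5, 6}       # "roll greater than 4"
B = {2, 4, 6}    # "roll an even number"

def P(event):
    # Equally likely outcomes: |event| / |sample space|
    return Fraction(len(event), len(sample_space))

# Addition rule: P(A or B) = P(A) + P(B) - P(A and B)
print(P(A | B) == P(A) + P(B) - P(A & B))   # True

# Conditional probability: P(B | A) = P(A and B) / P(A)
p_b_given_a = P(A & B) / P(A)

# General multiplication rule: P(A and B) = P(A) * P(B | A)
print(P(A & B) == P(A) * p_b_given_a)       # True
```

Using `Fraction` keeps the probabilities exact, so the two sides of each rule match with no rounding error.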

To review, see section 3.2, section 3.3, and section 3.5 of the textbook, and see Addition Rule for Probability.

### 2d. Apply probability distributions and explain the properties of different distributions

• What is a random variable? What is the difference between a discrete and a continuous random variable?
• How is a probability distribution similar or different from a relative frequency distribution?
• What are some uses for probability distributions?
• What is sampling error and why does it occur? Does it imply a mistake in data gathering?

Remember: a random variable is a variable whose value is the result of an experiment or survey, as opposed to the solution of an equation. A discrete random variable has a countable set of possible values that can be listed. A continuous random variable takes values over an interval, so its possible values cannot all be listed.

A probability distribution consists of each possible value (or interval of values) of a random variable, and the probability that the variable will take on that value. Probability distributions have many implications in decision making. When you interpret data, you will need to know what probability distribution the value you're trying to estimate follows.

As an example, if you think that 30% of consumers might be interested in a particular product, and sample data from a small group shows 26%, you would conduct a statistical test (see Unit 5) to determine the probability that a sample proportion as far from 30% as 26% would occur if the true value is 30%. Remember, even if the true population proportion is 30%, because of random sampling error the proportions of individual samples will likely differ from 30%, so we have to determine whether 26% is a large enough difference to cast doubt on the hypothesized 30%.

Sampling error occurs when conducting statistical tests, because taking samples of a population will result in different sample means, since each sample is unique. For example, if the mean of a population is 10, then the sample means should at least cluster around 10, but may not be exactly 10.
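A small simulation makes sampling error concrete. This sketch assumes a hypothetical population, the integers 1 through 19, whose mean is exactly 10, and draws several random samples from it:

```python
import random

random.seed(1)  # seed only so the illustration is reproducible

# Hypothetical population whose mean is exactly 10
population = list(range(1, 20))   # 1, 2, ..., 19

sample_means = []
for _ in range(5):
    sample = random.sample(population, 10)   # a different sample each time
    sample_means.append(sum(sample) / len(sample))

# Each sample mean clusters near 10, but is rarely exactly 10
print(sample_means)
```

No mistake was made in gathering these samples; the means differ from 10 simply because each sample contains different members of the population.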

To review, see Probability Density Functions and Random Variables.

### 2e. Solve problems using binomial distribution, and explain when it should be used

• What properties must be true of all the experiments in an event to make that a binomial event?
• When calculating binomial probabilities, why is it important to use the formula for calculating the number of combinations?
• Why is it important that the probabilities of success be equal for each experiment?
• Why is it important that the experiments are independent of each other?

A binomial event has the following properties:

• It consists of $n$ experiments, each of which can have 2 possible outcomes.
• Each experiment succeeds or fails independently of all the others, with the same probability of success $P$.

Example: Roll a 6-sided die $n=10$ times. The probability $P$ of rolling a 6 is $\frac{1}{6}$. Because the experiment (die roll) is performed a fixed number of times, each time the probability of "success" (rolling a 6) is equal, there are two possible outcomes (6 or not 6), and each die roll is independent of all the others, this qualifies as a binomial experiment.

Each experiment will result in either a "success" or "failure" (two possible outcomes). Success is simply defined as the result we are looking for, whether it is a good or bad occurrence.

A binomial random variable represents the number of successes $x$ out of $n$ experiments. Example: if $n=10$ experiments, then the random variable $x$ can take the values $\{0,1,2,...,9,10\}$.
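The standard binomial formula multiplies the number of combinations (the orderings in which the $x$ successes can occur) by the probability of any one particular arrangement. A sketch using the die-rolling example ($n=10$, $P=\frac{1}{6}$); the helper function name is illustrative:

```python
from math import comb

def binomial_pmf(x, n, p):
    # Number of combinations, times the probability of one specific
    # arrangement of x successes and (n - x) failures
    return comb(n, x) * p**x * (1 - p)**(n - x)

# The die-rolling example: n = 10 rolls, "success" = rolling a 6, p = 1/6
n, p = 10, 1 / 6

# Probability of exactly 2 sixes in 10 rolls
prob_two_sixes = binomial_pmf(2, n, p)
print(round(prob_two_sixes, 4))

# The probabilities over all possible values of x sum to 1
total = sum(binomial_pmf(x, n, p) for x in range(n + 1))
print(round(total, 10))   # 1.0
```

Without the `comb(n, x)` factor, the formula would count only one ordering of the successes and understate the probability.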

To review, see section 4.2 of the textbook.

### 2f. Differentiate between discrete and continuous probability distributions

• What is the difference between a discrete and a continuous distribution?
• Is the Binomial Distribution considered discrete or continuous?

Discrete probability distributions can have all possible outcomes listed. The random variable is discrete.

Continuous probability distributions are made up of intervals of possible values of a continuous random variable. The individual values of the random variable cannot all be listed, but the intervals they are grouped into can be.

One property of a continuous random variable $x$ is that $P(X=x) = 0$. In other words, when finding probabilities involving continuous distributions, we only work with finding probabilities of $x$ being in an interval of values, just as a continuous frequency distribution only uses intervals of possible values.
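As a concrete case, the continuous uniform distribution spreads probability evenly over an interval, so the probability of landing in a sub-interval is just its share of the total length, and a single point contributes zero length. A minimal sketch (the helper function is illustrative, not from the text):

```python
def uniform_interval_prob(a, b, c, d):
    """P(c <= X <= d) for X uniformly distributed on [a, b]."""
    c, d = max(a, c), min(b, d)          # clip the interval to [a, b]
    return max(0.0, (d - c) / (b - a))   # fraction of the total length

# X uniform on [0, 10]
print(uniform_interval_prob(0, 10, 2, 5))   # 0.3 -- an interval has probability
print(uniform_interval_prob(0, 10, 4, 4))   # 0.0 -- a single point does not
```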

Examples of discrete distributions are the discrete uniform, binomial, and Poisson distributions. Examples of continuous distributions are the continuous uniform and normal distributions.

To review, see Probability Density Functions and Random Variables.

### 2g. Apply expected value and calculate it for various probability distributions

• Why is the mean of a distribution often referred to as its expected value?
• If sets of 20 binomial experiments are run repeatedly, where each experiment has a 0.25 probability of success, and the number of successes out of 20 is recorded for each set, what should those counts average (arithmetic mean) out to in the long run?

Expected value, also called the mean of a probability distribution, is the expected long-term arithmetic mean of random variables following that distribution. For example, if a binomial event consists of 6 experiments, each with $P=\frac{1}{3}$ chance of success, then the expected value of that distribution will be one third of 6, or 2 successes. For a Binomial distribution, the expected value or mean is equal to $n \times P$.

Calculating the expected value of a distribution will in theory give you the same number as if you took random numbers following that distribution and calculated their arithmetic mean. In the example above, where $n=6$ and $P=\frac{1}{3}$, if you conduct the 6 experiments over and over again, the numbers of successes might be 1, 1, 1, 2, 2, 2, 2, 2, 2, 3, 5. The arithmetic mean of these numbers is $23/11 \approx 2.09$, close to the expected value of 2.
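Both routes to the expected value, the shortcut $n \times P$ and the definitional sum $\sum x \cdot P(x)$, agree. A sketch using exact fractions for the $n=6$, $P=\frac{1}{3}$ example:

```python
from fractions import Fraction
from math import comb

n = 6
p = Fraction(1, 3)

# Shortcut formula for the binomial mean: E(X) = n * P
print(n * p)   # 2

# Definition: E(X) = sum of x * P(X = x) over all possible values of x
expected = sum(x * comb(n, x) * p**x * (1 - p)**(n - x) for x in range(n + 1))
print(expected)   # 2
```

Using `Fraction` keeps the arithmetic exact, so the two results match with no floating-point rounding.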

To review, see section 3.1 of the textbook. That passage refers to the long-term probability of an event; the expected value is the long-term mean of random variables occurring with that probability.

### Unit 2 Vocabulary

This vocabulary list includes terms that might help you with the review items above and some terms you should be familiar with to be successful in completing the final exam for the course.

Try to think of the reason why each term is included.

• combination
• permutation
• set
• outcome
• event
• probability
• union
• intersection
• compound events
• mutually exclusive events
• independent events
• conditional probability