This subunit presents basic definitions, operators, and fundamental notation. Several examples help you understand the math in terms of human language, and you will see several ways to say the same thing while remaining logically and mathematically correct.
Logic is the study of consequence. Given a few mathematical statements or facts, we would like to be able to draw some conclusions. For example, if I told you that a particular real-valued function was continuous on the interval [0, 1], and f(0) = −1 and f(1) = 5, can we conclude that there is some point in [0, 1] where the graph of the function crosses the x-axis? Yes, we can, thanks to the Intermediate Value Theorem from calculus. Can we conclude that there is exactly one such point? No. Whenever we find an "answer" in math, we really have a (perhaps hidden) argument. Mathematics is really about proving general statements (like the Intermediate Value Theorem), and this too is done via an argument, usually called a proof. We start with some given conditions, the premises of our argument, and from these, we find a consequence of interest, our conclusion.
The problem is, as you no doubt know from arguing with friends, not all arguments are good arguments. A "bad" argument is one in which the conclusion does not follow from the premises, i.e., the conclusion is not a consequence of the premises. Logic is the study of what makes an argument good or bad. In other words, logic aims to determine in which cases a conclusion is, or is not, a consequence of a set of premises.
By the way, "argument" is actually a technical term in math (and philosophy, another discipline which studies logic):
An argument is a set of statements, one of which is called the conclusion and the rest of which are called premises. An argument is said to be valid if the conclusion must be true whenever the premises are all true. An argument is invalid if it is not valid; it is possible for all the premises to be true and the conclusion to be false.
For example, consider the following two arguments:

  If Edith eats her vegetables, then she can have a cookie.
  Edith eats her vegetables.
  ∴ Edith gets a cookie.

  Florence must eat her vegetables in order to get a cookie.
  Florence eats her vegetables.
  ∴ Florence gets a cookie.

(The symbol "∴" means "therefore".)
Are these arguments valid? Hopefully, you agree that the first one is but the second one is not. Logic tells us why by analyzing the structure of the statements in the argument. Notice the two arguments above look almost identical. Edith and Florence both eat their vegetables. In both cases, there is a connection between the eating of vegetables and cookies. But we claim that it is valid to conclude that Edith gets a cookie, but not that Florence does. The difference must be in the connection between eating vegetables and getting cookies. We need to be skilled at reading and comprehending these sentences. Do the two sentences mean the same thing? Unfortunately, in everyday language, we are often sloppy, and you might be tempted to say they are equivalent. But notice that just because Florence must eat her vegetables, we have not said that doing so would be enough (she might also need to clean her room, for example). In everyday (non-mathematical) practice, you might be tempted to say this "other direction" is implied. In mathematics, we never get that luxury.
Before proceeding, it might be a good idea to quickly review Section 0.2 where we first encountered statements and the various forms they can take. The goal now is to see what mathematical tools we can develop to better analyze these, and then to see how this helps read and write proofs.
3.1 Propositional Logic
You stumble upon two trolls playing Stratego®. They tell you:
Troll 1: If we are cousins, then we are both knaves.
Troll 2: We are cousins or we are both knaves.
Could both trolls be knights? Recall that all trolls are either always-truth-telling knights or always-lying knaves.
Attempt the above activity before proceeding.
A proposition is simply a statement. Propositional logic studies the ways statements can interact with each other. It is important to remember that propositional logic does not really care about the content of the statements. For example, in terms of propositional logic, the claims, "if the moon is made of cheese then basketballs are round", and "if spiders have eight legs then Sam walks with a limp" are exactly the same. They are both implications: statements of the form, P → Q.
Here's a question about playing Monopoly:
If you get more doubles than any other player then you will lose, or if you lose then you must have bought the most properties.
True or false? We will answer this question, and won't need to know anything about Monopoly. Instead, we will look at the logical form of the statement.
We need to decide when the statement (P → Q) ∨ (Q → R) is true. Using the definitions of the connectives in Section 0.2, we see that for this to be true, either P → Q must be true or Q → R must be true (or both). Those are true if either P is false or Q is true (in the first case) and Q is false or R is true (in the second case). So – yeah, it gets kind of messy. Luckily, we can make a chart to keep track of all the possibilities. Enter truth tables. The idea is this: on each row, we list a possible combination of T's and F's (for true and false) for each of the sentential variables, and then mark down whether the statement in question is true or false in that case. We do this for every possible combination of T's and F's. Then we can clearly see in which cases the statement is true or false. For complicated statements, we will first fill in values for each part of the statement, as a way of breaking up our task into smaller, more manageable pieces.
Since the truth value of a statement is completely determined by the truth values of its parts and how they are connected, all you really need to know is the truth tables for each of the logical connectives. Here they are:

  P | Q | P ∧ Q
  T | T |   T
  T | F |   F
  F | T |   F
  F | F |   F

  P | Q | P ∨ Q
  T | T |   T
  T | F |   T
  F | T |   T
  F | F |   F

  P | Q | P → Q
  T | T |   T
  T | F |   F
  F | T |   T
  F | F |   T

  P | Q | P ↔ Q
  T | T |   T
  T | F |   F
  F | T |   F
  F | F |   T

The truth table for negation looks like this:

  P | ¬P
  T |  F
  F |  T
None of these truth tables should come as a surprise; they are all just restating the definitions of the connectives. Let's try another one.
Make a truth table for the statement ¬P ∨ Q.
Solution. Note that this statement is not ¬(P ∨ Q); the negation belongs to P alone. Here is the truth table:

  P | Q | ¬P | ¬P ∨ Q
  T | T |  F |   T
  T | F |  F |   F
  F | T |  T |   T
  F | F |  T |   T

We added a column for ¬P to make filling out the last column easier. The entries in the ¬P column were determined by the entries in the P column. Then to fill in the final column, look only at the column for Q and the column for ¬P and use the rule for ∨.
Now let's answer our question about Monopoly:
Analyze the statement, "if you get more doubles than any other player then you will lose, or if you lose then you must have bought the most properties", using truth tables.
Solution. Represent the statement in symbols as (P → Q) ∨ (Q → R), where P is the statement "you get more doubles than any other player", Q is the statement "you will lose", and R is the statement "you must have bought the most properties". Now make a truth table.
The truth table needs to contain 8 rows in order to account for every possible combination of truth and falsity among the three statements. Here is the full truth table:

  P | Q | R | P → Q | Q → R | (P → Q) ∨ (Q → R)
  T | T | T |   T   |   T   |         T
  T | T | F |   T   |   F   |         T
  T | F | T |   F   |   T   |         T
  T | F | F |   F   |   T   |         T
  F | T | T |   T   |   T   |         T
  F | T | F |   T   |   F   |         T
  F | F | T |   T   |   T   |         T
  F | F | F |   T   |   T   |         T
The first three columns are simply a systematic listing of all possible combinations of T and F for the three statements (do you see how you would list the 16 possible combinations for four statements?). The next two columns are determined by the values of P, Q, and R and the definition of implication. Then, the last column is determined by the values in the previous two columns and the definition of ∨. It is this final column we care about.
Notice that in each of the eight possible cases, the statement in question is true. So our statement about Monopoly is true (regardless of how many properties you own, how many doubles you roll, or whether you win or lose).
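As a quick sanity check (not part of the original text), a few lines of Python can enumerate all eight rows of the truth table for us; the helper name implies is our own:

```python
from itertools import product

def implies(p, q):
    # Material implication: P -> Q is false only when P is true and Q is false.
    return (not p) or q

# One entry per row of the truth table: the value of (P -> Q) or (Q -> R).
rows = [implies(p, q) or implies(q, r)
        for p, q, r in product([True, False], repeat=3)]

print(len(rows), all(rows))  # 8 True -- true in every row, a tautology
```

The same pattern (enumerate every assignment, evaluate the statement) works for any number of sentential variables, at the cost of 2^n rows.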
The statement about Monopoly is an example of a tautology, a statement which is true on the basis of its logical form alone. Tautologies are always true, but they don't tell us much about the world. No knowledge about Monopoly was required to determine that the statement was true. In fact, it is equally true that "If the moon is made of cheese, then Elvis is still alive, or if Elvis is still alive, then unicorns have 5 legs".
You might have noticed in Example 3.1.1 that the final column in the truth table for ¬P ∨ Q is identical to the final column in the truth table for P → Q:

  P | Q | ¬P ∨ Q | P → Q
  T | T |    T   |   T
  T | F |    F   |   F
  F | T |    T   |   T
  F | F |    T   |   T
This says that no matter what P and Q are, the statements ¬P ∨ Q and P → Q are either both true or both false. We therefore say these statements are logically equivalent.
Two (molecular) statements P and Q are logically equivalent provided P is true precisely when Q is true. That is, P and Q have the same truth value under any assignment of truth values to their atomic parts.
To verify that two statements are logically equivalent, you can make a truth table for each and check whether the columns for the two statements are identical.
Recognizing two statements as logically equivalent can be very helpful. Rephrasing a mathematical statement can often lend insight into what it is saying, or how to prove or refute it. By using truth tables we can systematically verify that two statements are indeed logically equivalent.
Are the statements, "it will not rain or snow" and "it will not rain and it will not snow" logically equivalent?
Solution. We want to know whether ¬(P ∨ Q) is logically equivalent to ¬P ∧ ¬Q. Make a truth table which includes both statements:

  P | Q | ¬(P ∨ Q) | ¬P ∧ ¬Q
  T | T |     F    |    F
  T | F |     F    |    F
  F | T |     F    |    F
  F | F |     T    |    T
Since in every row the truth values for the two statements are equal, the two statements are logically equivalent.
Notice that this example gives us a way to "distribute" a negation over a disjunction (an "or"). We have a similar rule for distributing over conjunctions ("and"s):
De Morgan's Laws.

¬(P ∧ Q) is logically equivalent to ¬P ∨ ¬Q.
¬(P ∨ Q) is logically equivalent to ¬P ∧ ¬Q.
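Both versions of the law can be checked mechanically by evaluating each side in every row, as in this short Python sketch (ours, not from the text):

```python
from itertools import product

# Each equivalence holds just when the two sides agree in every row.
dm_and = all((not (p and q)) == ((not p) or (not q))   # negated conjunction
             for p, q in product([True, False], repeat=2))
dm_or = all((not (p or q)) == ((not p) and (not q))    # negated disjunction
            for p, q in product([True, False], repeat=2))

print(dm_and, dm_or)  # True True
```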
This suggests there might be a sort of "algebra" you could apply to statements (okay, there is: it is called Boolean algebra) to transform one statement into another. We can start collecting useful examples of logical equivalence, and apply them in succession to a statement, instead of writing out a complicated truth table.
De Morgan's laws do not directly help us with implications, but as we saw above, every implication can be written as a disjunction:
Implications are Disjunctions.
P → Q is logically equivalent to ¬P ∨ Q.
Example: "If a number is a multiple of 4, then it is even" is equivalent to, "a number is not a multiple of 4 or (else) it is even".
With this and De Morgan's laws, you can take any statement and simplify it to the point where negations are only being applied to atomic propositions. Well, actually not, because you could get multiple negations stacked up. But this can be easily dealt with:
¬¬P is logically equivalent to P.
Example: "It is not the case that c is not odd" means "c is odd".
Let's see how we can apply the equivalences we have encountered so far.
Prove that the statements ¬(P → Q) and P ∧ ¬Q are logically equivalent without using truth tables.
Solution. We want to start with one of the statements and transform it into the other through a sequence of logically equivalent statements. Start with ¬(P → Q). Since we can rewrite the implication as a disjunction, this is logically equivalent to
¬(¬P ∨ Q).
Now apply De Morgan's law to get
¬¬P ∧ ¬Q.
Finally, use double negation to arrive at P ∧ ¬Q.
Notice that the above example illustrates that the negation of an implication is NOT an implication: it is a conjunction! We saw this before, in Section 0.2, but it is so important and useful, it warrants a second blue box here:
Negation of an Implication.
The negation of an implication is a conjunction:
¬(P → Q) is logically equivalent to P ∧ ¬Q.
That is, the only way for an implication to be false is for the hypothesis to be true AND the conclusion to be false.
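This equivalence, too, can be confirmed row by row; here is a minimal Python check (our own illustration, with implies as a hypothetical helper name):

```python
from itertools import product

def implies(p, q):
    # Material implication: false only when p is true and q is false.
    return (not p) or q

# not (P -> Q) agrees with (P and not Q) in all four rows.
neg_imp = all((not implies(p, q)) == (p and not q)
              for p, q in product([True, False], repeat=2))

print(neg_imp)  # True
```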
To verify that two statements are logically equivalent, you can use truth tables or a sequence of logically equivalent replacements. The truth table method, although cumbersome, has the advantage that it can verify that two statements are NOT logically equivalent.
Are the statements (P ∨ Q) → R and (P → R) ∨ (Q → R) logically equivalent?
Solution. Note that while we could start rewriting these statements with logically equivalent replacements in the hopes of transforming one into another, we will never be sure that our failure is due to their lack of logical equivalence rather than our lack of imagination. So instead, let's make a truth table:

  P | Q | R | (P ∨ Q) → R | (P → R) ∨ (Q → R)
  T | T | T |      T      |         T
  T | T | F |      F      |         F
  T | F | T |      T      |         T
  T | F | F |      F      |         T
  F | T | T |      T      |         T
  F | T | F |      F      |         T
  F | F | T |      T      |         T
  F | F | F |      T      |         T
Look at the fourth (or sixth) row. In this case, (P → R) ∨ (Q → R) is true, but (P ∨ Q) → R is false. Therefore the statements are not logically equivalent.
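A truth-table search for disagreeing rows can also be done in a few lines of Python (our sketch, not the book's):

```python
from itertools import product

def implies(p, q):
    return (not p) or q

# Collect every assignment where the two statements disagree.
witnesses = [(p, q, r)
             for p, q, r in product([True, False], repeat=3)
             if implies(p or q, r) != (implies(p, r) or implies(q, r))]

print(witnesses)  # [(True, False, False), (False, True, False)]
```

The two witnesses are exactly the fourth and sixth rows of the table: one of P, Q true, R false.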
While we don't have logical equivalence, it is the case that whenever (P ∨ Q) → R is true, so is (P → R) ∨ (Q → R). This tells us that we can deduce (P → R) ∨ (Q → R) from (P ∨ Q) → R, just not the reverse direction.
Holmes owns two suits: one black and one tweed. He always wears either a tweed suit or sandals. Whenever he wears his tweed suit and a purple shirt, he chooses to not wear a tie. He never wears the tweed suit unless he is also wearing either a purple shirt or sandals. Whenever he wears sandals, he also wears a purple shirt. Yesterday, Holmes wore a bow tie. What else did he wear?
Attempt the above activity before proceeding.
Earlier we claimed that the following was a valid argument:
If Edith eats her vegetables, then she can have a cookie. Edith ate her vegetables. Therefore Edith gets a cookie.
How do we know this is valid? Let's look at the form of the statements. Let P denote "Edith eats her vegetables" and Q denote "Edith can have a cookie". The logical form of the argument is then:

  P → Q
  P
  ∴ Q
This is an example of a deduction rule, an argument form that is always valid. This one is a particularly famous rule called modus ponens. Are you convinced that it is a valid deduction rule? If not, consider the following truth table:
  P | Q | P → Q
  T | T |   T
  T | F |   F
  F | T |   T
  F | F |   T

This is just the truth table for P → Q, but what matters here is that all the lines in the deduction rule have their own column in the truth table. Remember that an argument is valid provided the conclusion must be true given that the premises are true. The premises in this case are P → Q and P. Which rows of the truth table correspond to both of these being true? P is true in the first two rows, and of those, only the first row has P → Q true as well. And lo-and-behold, in this one case, Q is also true. So if P → Q and P are both true, we see that Q must be true as well.
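The "check every row where the premises hold" procedure can be sketched directly in Python (our illustration, not part of the text):

```python
from itertools import product

def implies(p, q):
    return (not p) or q

# Modus ponens: in every row where both premises (P -> Q and P) are true,
# the conclusion Q is true as well.
mp_valid = all(q
               for p, q in product([True, False], repeat=2)
               if implies(p, q) and p)

print(mp_valid)  # True
```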
Here are a few more examples.
Show that

  P → Q
  ¬P → Q
  ∴ Q

is a valid deduction rule.
Solution. We make a truth table which contains all the lines of the argument form:
  P | Q | ¬P | P → Q | ¬P → Q
  T | T |  F |   T   |   T
  T | F |  F |   F   |   T
  F | T |  T |   T   |   T
  F | F |  T |   T   |   F

(We include a column for ¬P just as a step to help get the column for ¬P → Q.)
Now, look at all the rows for which both P → Q and ¬P → Q are true. This happens only in rows 1 and 3. Hey! In those rows, Q is true as well, so the argument form is valid (it is a valid deduction rule).
Decide whether

  P → R
  Q → R
  R
  ∴ P ∨ Q

is a valid deduction rule.
Solution. Let's make a truth table containing all four statements.

  P | Q | R | P → R | Q → R | P ∨ Q
  T | T | T |   T   |   T   |   T
  T | T | F |   F   |   F   |   T
  T | F | T |   T   |   T   |   T
  T | F | F |   F   |   T   |   T
  F | T | T |   T   |   T   |   T
  F | T | F |   T   |   F   |   T
  F | F | T |   T   |   T   |   F
  F | F | F |   T   |   T   |   F
Look at the second to last row. Here all three premises of the argument are true, but the conclusion is false. Thus this is not a valid deduction rule.
While we have the truth table in front of us, look at rows 1, 3, and 5. These are the only rows in which all of the statements P → R, Q → R, and P ∨ Q are true. It also happens that R is true in these rows as well. Thus we have discovered a new deduction rule we know is valid:

  P → R
  Q → R
  P ∨ Q
  ∴ R
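Both argument forms in this example can be checked mechanically. The following Python sketch (our own; the function valid is a hypothetical helper, not from the text) implements the definition of validity directly:

```python
from itertools import product

def implies(p, q):
    return (not p) or q

def valid(premises, conclusion, n):
    """An argument form over n variables is valid when the conclusion
    holds in every truth-table row where all the premises hold."""
    return all(conclusion(*row)
               for row in product([True, False], repeat=n)
               if all(prem(*row) for prem in premises))

# P -> R, Q -> R, R  therefore  P or Q : invalid (row F, F, T fails).
bad = valid([lambda p, q, r: implies(p, r),
             lambda p, q, r: implies(q, r),
             lambda p, q, r: r],
            lambda p, q, r: p or q, 3)

# P -> R, Q -> R, P or Q  therefore  R : valid.
good = valid([lambda p, q, r: implies(p, r),
              lambda p, q, r: implies(q, r),
              lambda p, q, r: p or q],
             lambda p, q, r: r, 3)

print(bad, good)  # False True
```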
As we saw in Section 0.2, not every statement can be analyzed using logical connectives alone. For example, we might want to work with the statement:
All primes greater than 2 are odd.
To write this statement symbolically, we must use quantifiers. We can translate as follows:
∀x((P(x) ∧ x > 2) → O(x)).
In this case, we are using P(x) to denote "x is prime" and O(x) to denote "x is odd". These are not propositions, since their truth-value depends on the input x. Better to think of P and O as denoting properties of their input. The technical term for these is predicates and when we study them in logic, we need to use predicate logic.
It is important to stress that predicate logic extends propositional logic (much in the way quantum mechanics extends classical mechanics). You will notice that our statement above still used the (propositional) logical connectives. Everything that we learned about logical equivalence and deductions still applies. However, predicate logic allows us to analyze statements at a higher resolution, digging down into the individual propositions P, Q, etc.
A full treatment of predicate logic is beyond the scope of this text. One reason is that there is no systematic procedure for deciding whether two statements in predicate logic are logically equivalent (i.e., there is no analogue to truth tables here). Rather, we end with two examples of logical equivalence and deduction, to pique your interest.
Suppose we claim that there is no smallest number. We can translate this into symbols as
¬∃x∀y(x ≤ y)
(literally, "it is not true that there is a number x such that for all numbers y, x is less than or equal to y").
However, we know how negation interacts with quantifiers: we can pass a negation over a quantifier by switching the quantifier type (between universal and existential). So the statement above should be logically equivalent to
∀x∃y(y < x).
Notice that y < x is the negation of x ≤ y. This literally says, "for every number x there is a number y which is smaller than x". We see that this is another way to make our original claim.
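Since Python's any and all behave like ∃ and ∀ over a finite domain, we can watch this equivalence in action on a small set of numbers (our own illustration; over a finite domain there *is* a smallest element, so both statements come out false together):

```python
# Evaluate both statements over the finite domain {0, 1, 2, 3, 4}.
domain = range(5)

# "There is no smallest number": not (exists x, for all y, x <= y).
lhs = not any(all(x <= y for y in domain) for x in domain)

# "For every x there is a smaller y": for all x, exists y, y < x.
rhs = all(any(y < x for y in domain) for x in domain)

print(lhs, rhs)  # False False -- they agree, as the equivalence predicts
```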
Can you switch the order of quantiﬁers? For example, consider the two statements:
∀x∃yP(x, y) and ∃y∀xP(x, y).
Are these logically equivalent?
Solution. These statements are NOT logically equivalent. To see this, we should provide an interpretation of the predicate P(x, y) which makes one of the statements true and the other false.
Let P(x, y) be the predicate x < y. It is true, in the natural numbers, that for all x there is some y greater than that x (since there are infinitely many numbers). However, there is not a natural number y which is greater than every number x. Thus it is possible for ∀x∃yP(x, y) to be true while ∃y∀xP(x, y) is false.
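Because x < y never separates the two statements on a finite domain (a finite set has a largest element), this Python sketch (ours, not the book's) instead uses the tiny domain {0, 1} with P(x, y) meaning x ≠ y to exhibit the same separation:

```python
# Interpretation: domain {0, 1}, P(x, y) meaning x != y.
domain = [0, 1]

# For every x there is some y with x != y (pick the other element): True.
forall_exists = all(any(x != y for y in domain) for x in domain)

# There is a single y with x != y for every x -- impossible,
# since y cannot differ from itself: False.
exists_forall = any(all(x != y for x in domain) for y in domain)

print(forall_exists, exists_forall)  # True False
```

So ∀x∃yP(x, y) can be true while ∃y∀xP(x, y) is false, even on a two-element domain.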
We cannot do the reverse of this though. If there is some y for which every x satisfies P(x, y), then certainly for every x there is some y which satisfies P(x, y). The first is saying we can find one y that works for every x. The second allows different y's to work for different x's, but there is nothing preventing us from using a single y that works for every x. In other words, while we don't have logical equivalence between the two statements, we do have a valid deduction rule:

  ∃y∀xP(x, y)
  ∴ ∀x∃yP(x, y)
Put yet another way, this says that the single statement
∃y∀xP(x, y) → ∀x∃yP(x, y)
is always true. This is sort of like a tautology, although we reserve that term for necessary truths in propositional logic. A statement in predicate logic that is necessarily true gets the more prestigious designation of a law of logic (or sometimes logically valid, but that is less fun).
Source: Oscar Levin, http://discrete.openmathbooks.org/pdfs/dmoi-tablet.pdf
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 License.