Exposition
of
Symbolic Logic
with
Kalish-Montague
derivations
Aug 2013
Preface
The system of logic used here is essentially that of Kalish & Montague 1964 and Kalish,
Montague and Mar, Harcourt Brace Jovanovich, 1992. The principal difference is that written
justifications are required for boxing and canceling: 'dd' for a direct derivation, 'id' for an indirect
derivation, etc. This text is written to be used along with the UCLA Logic 2010 software program,
but that program is not mentioned, and the text can be used independently (although you would
want to supplement the exercises).
The system of notation is almost the same as KM&M; major differences are that the signs '∀' and
'∃' are used for the quantifiers, name and operation symbols are the small letters between ‘a’ and
‘h’, and variables are the small letters between ‘i’ and ‘z’.
Chapters 1-3 cover pretty much the same material as KM&M except that the rule allowing for the
use of previously proved theorems is now in chapter 2, immediately following the section on
theorems. (Previous versions of this text used the terminology ‘tautological implication’ in section
2.11. This has been changed to ‘tautological validity’ to agree with the logic program.)
Chapters 4-6 include invalidity problems with infinite universes, where one specifies the
interpretation of notation "by description"; e.g. interpreting a two-place predicate 'R' as the relation ≤. These are discussed in the final
section of each chapter, so they may easily be avoided. (They are not currently implemented in
the logic program.)
Chapter 4 covers material from KM&M chapter IV, but without operation symbols. Chapter 4 also
includes material from KM&M chapter VII, namely interchange of equivalents, biconditional
derivations, monadic sentences without quantifier overlay, and prenex form.
Chapter Two
Sentential Logic with 'and', 'or', 'if-and-only-if'
1 SYMBOLIC NOTATION
2 ENGLISH EQUIVALENTS OF THE CONNECTIVES
3 COMPLEX SENTENCES
4 RULES
5 SOME DERIVATIONS USING RULES S, ADJ, CB
6 ABBREVIATING DERIVATIONS
7 USING THEOREMS AS RULES
8 DERIVED RULES
9 OFFICIAL CONDITIONS FOR DERIVATIONS
10 TRUTH TABLES AND TAUTOLOGIES
11 TAUTOLOGICAL VALIDITY
Chapter Three
Individual constants, Predicates, Variables and Quantifiers
1 INDIVIDUAL CONSTANTS AND PREDICATES
2 QUANTIFIERS, VARIABLES, AND FORMULAS
3 SCOPE AND BINDING
4 MEANINGS OF THE QUANTIFIERS
5 SYMBOLIZING SENTENCES WITH QUANTIFIERS
6 DERIVATIONS WITH QUANTIFIERS
7 UNIVERSAL DERIVATIONS
8 SOME DERIVATIONS
9 DERIVED RULES
10 INVALIDITIES
11 EXPANSIONS
Chapter Five
Identity and Operation Symbols
1 IDENTITY
2 AT LEAST AND AT MOST, EXACTLY, AND ONLY
3 DERIVATIONAL RULES FOR IDENTITY
4 INVALIDITIES WITH IDENTITY
5 OPERATION SYMBOLS
6 DERIVATIONS WITH COMPLEX TERMS
7 INVALID ARGUMENTS WITH OPERATION SYMBOLS
8 COUNTER-EXAMPLES WITH INFINITE UNIVERSES
Chapter Six
Definite Descriptions
1 DEFINITE DESCRIPTIONS
2 SYMBOLIZING SENTENCES WITH DEFINITE DESCRIPTIONS
3 DERIVATIONAL RULES FOR DEFINITE DESCRIPTIONS: PROPER DESCRIPTIONS
4 SYMBOLIZING ORDINARY LANGUAGE
5 DERIVATIONAL RULES FOR DEFINITE DESCRIPTIONS: IMPROPER DESCRIPTIONS
6 INVALIDITIES WITH DEFINITE DESCRIPTIONS
7 UNIVERSAL DERIVATIONS
8 COUNTER-EXAMPLES WITH INFINITE UNIVERSES
Introduction
Logic is concerned with arguments, good and bad. With the
docile and the reasonable, arguments are sometimes useful in
settling disputes. With the reasonable, this utility attaches only to
good arguments. It is the logician's business to serve the
reasonable. Therefore, in the realm of arguments, it is the logician
who distinguishes good from bad.
Kalish & Montague 1964 p. 1
1 DEDUCTIVE REASONING
Logic is the study of correct reasoning. It is not a study of how this reasoning originates, or what
its effects are in persuading people; it is rather a study of what it is that makes some reasoning
"correct" as opposed to "incorrect". If you have ever found yourself or someone else making a
mistake in reasoning, then this is an example of someone being taken in by incorrect reasoning,
and you have some idea of what we mean by correct reasoning: it is reasoning that contains no
mistakes, persuasive or otherwise.
It is typical in logic to divide reasoning into two kinds: deductive and inductive, or, roughly,
"airtight" and "merely probable". Here is an example of probable reasoning. You have just been
told that Mary bought a new car, and you say to yourself:
In the past, Mary always bought big cars.
Big cars are usually gas-guzzlers.
So she (probably) now has a gas-guzzler.
Your conclusion, that Mary has a gas-guzzler, is not one that you think of as following logically
from the information that you have; it is merely a probable inference.
Inductive Logic, which is the study of probable reasoning, is not very well understood at present.
There are certain rather special cases that are well developed, such as the application of the
probability calculus to gambling games. But a general study has not met with great success.
This is not a book about probable reasoning, but if you are interested in it, this is the place to
start. This is because most studies of Inductive Logic take for granted that you are already
familiar with Deductive Logic -- the logic of "airtight" reasoning -- which forms the subject matter
of this book. So you have to start here anyway.
Here is an example of deductive reasoning. Suppose that you recall reading that either James
Polk or Eli Whitney was a president of the United States, but you can't remember which one.
Some knowledgeable person tells you that Eli Whitney was never president (he was a famous
inventor). Based on this information you conclude that Polk was a president.
The information that you have, and the conclusion that you draw from this information, are:
Either Polk or Whitney was a president.
Whitney was not a president.
So Polk was a president.
Let us compare this reasoning with the other reasoning given above. They both have one thing in
common: the information that you start with is not known for certain. In the first example, you
have only been told that Mary bought a new car, and this may be a lie or a mistake. Likewise,
you may be misremembering her past preferences for car sizes. The same is true in the second
reasoning: you were only told that Eli Whitney was not president -- by someone else or by a
history book -- and your memory that either Polk or Whitney was a president may also be
inaccurate. In both cases the information that you start with is not known for certain, and so in
this sense your conclusions are only probable. Reasoning is always reasoning from some
claims, called the premises of the reasoning, to some further claim, called the conclusion. If the
premises are not known for certain, then no matter how good the reasoning is, the conclusion will
not be known for certain either. (There are certain special exceptions to this; see the exercises
below.) There is, however, a difference in the nature of the inferences in the two cases. In the
first case, the truth of the premises would make the conclusion probable, but would not guarantee
it; in the second case, if the premises are all true, then the conclusion is guaranteed to be true as
well. Reasoning of this second kind is called deductively valid: reasoning whose conclusion cannot
possibly be false if its premises are all true.
The triangle made of three dots is an abbreviation of the word `therefore', and is a way of
identifying the conclusion of an argument. In order to save on writing, and also to begin
displaying the form of the arguments under discussion, we will start abbreviating simple
sentences by capital letters. For the time being we will abbreviate `Polk was a president' by `P',
and `Whitney was a president' by `W'.
When we talk about "truth" here we do not have anything deep or mysterious in mind. For
example, we say that the sentence 'There is beer in the refrigerator' is true if there is beer in the
refrigerator, and false if there isn't beer in the refrigerator. That's all there is to it.
We have already seen one case of a valid argument which has all of its premises true and its
conclusion true as well:
P or W True
not W True
∴P True
What other possibilities are there? Well, as we noted above, it is possible to have some of the
premises false and the conclusion false too. (This is sometimes referred to as a case of the
"garbage in, garbage out" principle.) Suppose we use `R' to abbreviate Robert E. Lee was a
president. Then this argument does not have all of its premises true, nor is its conclusion true:
R or W False
not W True
∴R False
Yet this argument is just as good, as far as its validity is concerned, as the first one. If its
premises were true, then that would guarantee that its conclusion would be true too. There is no
logically possible situation in which the premises are all true and the conclusion false. This
argument, though it starts with a false premise and ends up with a false conclusion, has exactly
the same logical form as the first one. This sameness of logical form lies at the foundation of
the theory in this book; it is discussed in the following section.
Although false inputs can lead to false outputs, there is no guarantee that this will happen, for you
can reason validly from false information and accidentally end up with a conclusion that is true.
Here is an example of that:
P or not W True
W False
∴P True
In this example, one of the premises is false, but the conclusion happens to be true anyway.
Mistaken assumptions can sometimes lead to a true conclusion by chance.
The one combination that we cannot have is a valid argument which has all true premises and a
false conclusion. This is in keeping with the definition given above: a deductively valid argument
is one for which it is logically impossible for its conclusion to be false if its premises are all true.
We have seen that there are valid arguments of each of these sorts:

PREMISES       all true       not all true       not all true
CONCLUSION     true           false              true
What about invalid arguments? (That is, what about arguments that are not deductively valid?)
What combination of truth-values can the parts of invalid arguments have? The answer is that
they can have any combination of truth-values whatsoever. Here are some examples:
P True
not W True PREMISES ALL TRUE
∴ not R True CONCLUSION TRUE
P or W True
W False PREMISES NOT ALL TRUE
∴P True CONCLUSION TRUE
W or R False
P True PREMISES NOT ALL TRUE
∴R False CONCLUSION FALSE
The moral of the story so far is that if you know that an argument is invalid, that fact alone tells
you nothing at all about the actual truth-values possessed by its parts. And if you know that it is
valid, all that that fact tells you about the actual truth-values of its parts is that it does not have all
of its premises true plus its conclusion false.
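For arguments built from sentence letters like these, validity can be checked mechanically: enumerate
every combination of truth-values for the letters and look for a "counterexample row" in which all
premises are true and the conclusion false. Here is a minimal sketch in Python; the representation of
sentences as functions is our own illustrative device, not part of the text.

    from itertools import product

    # Represent each sentence as a function from an assignment of
    # truth-values to the sentence letters (a dict) to a truth-value.
    def letter(name): return lambda v: v[name]
    def neg(s):       return lambda v: not s(v)
    def disj(s, t):   return lambda v: s(v) or t(v)

    def valid(premises, conclusion, letters):
        """Valid iff no assignment makes every premise true and the
        conclusion false."""
        for row in product([True, False], repeat=len(letters)):
            v = dict(zip(letters, row))
            if all(p(v) for p in premises) and not conclusion(v):
                return False   # found a counterexample row
        return True

    P, W = letter('P'), letter('W')
    print(valid([disj(P, W), neg(W)], P, ['P', 'W']))  # True:  P or W; not W; so P
    print(valid([disj(P, W), W], P, ['P', 'W']))       # False: P or W; W; so P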
However, there is more to be said. Suppose that you want to show that an argument is invalid,
but the argument does not already have all true premises and a false conclusion. How can you
do this? One approach is to appeal directly to the characterization of validity, and describe a
possible situation in which the premises are all true and the conclusion false. For example,
suppose someone has given this (invalid) argument:
Either Roosevelt or Truman (or perhaps both) was a president.
Truman was a president.
∴ Roosevelt was a president.
There is no mistake of fact involved here, but the argument is a bad one, and you would like to
establish this. You could do so as follows. You say:
"Suppose that Truman had been a president, but not Roosevelt. In that situation
the premises would have been true, but the conclusion false."
This is enough to show the reasoning bad, that is, to show the argument invalid.
We can do even more than this, as we will see in the next section.
EXERCISES
This book provides a stock of exercises as an aid to learning. They were written in the belief that
the "hands on" approach to modern logical theory is the best way to master it. You will also be
supplied with answers to many of the exercises. You should attempt every exercise on your own,
and then check your efforts against the answers that are given. If you do not understand one
or more of the exercises, ask for help!
Several of the exercises contain material that supplements the explanations in the body of the
text. None of the exercises presuppose material that is not provided in the text or in the exercise
itself.
1. Decide whether each of the following arguments is valid or invalid. If the argument is invalid
then describe a possible situation in which its premises are all true and its conclusion false.
a. Either Polk or Lee was a president.
Either Lee or Whitney was a president.
∴ Either Polk or Whitney was a president.
b. Lee wasn't a president, and Polk was.
Either Polk or Whitney was a president.
∴ Whitney was a president.
3. An argument which is valid and which also has all of its premises true is called sound. Based
on this definition, which of the following are true, and which false:
a. All valid arguments are sound.
b. All invalid arguments are unsound.
c. All sound arguments have true conclusions.
d. If an argument is sound, and you produce a new argument from it by adding one or more
premises to it, the resulting argument will still be sound.
e. All unsound arguments are invalid.
f. If an argument has a necessarily true conclusion, it is sound.
7. (a) Give an example of a "reversing" argument, that is, one which is guaranteed to have a
false conclusion if its premises are true, and is guaranteed to have a true conclusion if any of its
premises are false. (b) Give an example of an argument that must have a true conclusion no
matter what the truth-values of its premises. Is this argument valid?
3 LOGICAL FORM
If you want to show that an argument is invalid, you can describe a possible situation in which the
premises are all true and the conclusion false. We illustrated this above with the argument:
Either Roosevelt or Truman (or perhaps both) was a president.
Truman was a president.
∴ Roosevelt was a president.
But this direct appeal to possible situations is sometimes difficult to articulate, and judgments of
possibility can differ. Fortunately, there is another technique that is often more useful. You could
challenge the above reasoning by saying:
"That reasoning is no good. If that reasoning were good,
we could prove that McGovern was a president! For we know that:
Either McGovern or Truman was a president,
and we know:
Truman was a president.
So, by your reasoning we should be able to conclude that
McGovern was a president too!"
This challenge, like the first one, also shows that the argument given above is invalid. But
whereas the first type of challenge focuses on how the ORIGINAL argument works in some
POSSIBLE situation, this second challenge is based on how some OTHER argument works in
the ACTUAL situation. What we do in this second technique is to give an argument that is
different than the first, but closely related to it. In the case in question, the new argument is:
Either McGovern or Truman was a president.
Truman was a president.
∴ McGovern was a president.
We know the new argument is invalid because it actually has all true premises and a false
conclusion (we chose it on purpose to be this way). Since the new argument is invalid, so is the
original one.
But why should the original argument be invalid just because this second argument is invalid?
The answer is that, intuitively speaking, they both employ the same reasoning, and it is the
reasoning that is being assessed when we make a judgment about validity. But how can we tell
that they employ the same reasoning? The answer is that they both have the same form. Each
argument is one in which one of the premises is an "or" statement, with the other premise being
one of the parts of the "or" statement and the conclusion being the other part. This sameness of
structure or form indicates a sameness of the reasoning involved.
A key assumption on which all of modern logical theory is based is that goodness of deductive
reasoning is a matter of form. Any argument which has just the same form as the argument we
were just discussing is invalid, no matter whether its subject matter is religion, politics,
mathematics, or baseball. Likewise, any argument which has this form:
P or W
not W
∴P
is valid. Let us say that an argument is formally valid when it has a logical form all of whose
instances are valid. It follows from this definition that if an argument is formally valid, so is any
argument with exactly that form; and if an argument is not formally valid, neither is any argument
with exactly that form. A
central preoccupation of modern logic, then, is the investigation and classification of logical forms.
(That is why this logic is called "formal logic".) This will be our business throughout the chapters
that follow.
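The "same form" technique amounts to uniform substitution: keep the pattern, and replace the letters
by other sentences. A small sketch in Python of that idea (the template representation is our own
illustrative device):

    # An argument form is a list of templates over sentence letters; an
    # instance is obtained by substituting sentences uniformly for letters.
    FORM = ["{A} or {B}", "{B}", "therefore {A}"]   # the invalid form above

    def instantiate(form, substitution):
        return [line.format(**substitution) for line in form]

    # The original argument and the counterexample argument are two
    # instances of one and the same form:
    print(instantiate(FORM, {"A": "Roosevelt was a president",
                             "B": "Truman was a president"}))
    print(instantiate(FORM, {"A": "McGovern was a president",
                             "B": "Truman was a president"}))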
EXERCISES
1. Decide whether each of the following arguments is valid or invalid. If the argument is invalid
then give an argument which has the same form, and which actually has all true premises and a
false conclusion.
a. Either Polk or Lee was a president.
Either Whitney or Lee was a president.
∴ Either Polk or Whitney was a president.
b. Lee wasn't a president, and Polk was.
Either Polk or Whitney was a president.
∴ Whitney was a president.
c. Polk was a president and so was Lee.
Whitney was a president.
∴ Polk was a president and so was Whitney.
d. Either Polk or Whitney was a president.
Lee was not a president.
∴ Lee wasn't a president and Polk was.
c. If you are wondering whether an argument is valid or not, and you fail to find another
argument which has the same form and all true premises and a false conclusion, that
shows the original argument to be valid.
3. Here are some argument forms. For each, say whether every argument with that form is valid.
If it is not valid, give an example of an argument with the given form that has true premises and a
false conclusion.
a. If A then B
A
∴B
b. If A then B
B
∴A
c. not (A and B)
not-B
∴ not A
d. A or B
B
∴A
e. A and not-A
∴B
f. A
∴ B or not-B
g. A or B
not-B or C
∴ A or C
4. Recall that an argument which is valid and which also has all of its premises true is called
sound.
a. If you are wondering whether an argument is sound, and you manage to find another one
with the same form and having all true premises and a false conclusion, does that show
the original argument to be unsound? Why?
b. If you are wondering whether an argument is sound, and you manage to find another one
with the same form and having all true premises and a true conclusion, does that show
the original argument to be sound? Why?
c. If you are wondering whether an argument is sound, and you manage to find another one
with the same form and having all false premises and a false conclusion, does that show
the original argument to be sound? To be unsound? Why?
5. For each of the examples in 3, say whether or not every argument with that form is sound, and
also say whether some argument with that form is sound.
6. {This question is speculative, and does not necessarily have a straightforward answer} Could
there be an argument that is valid but not formally valid? Could there be an argument that is
formally valid but not valid?
4 SYMBOLIC NOTATION
Our investigation of logical forms will take an indirect route, but one that has proved to be
worthwhile. Instead of attempting a direct classification of the logical forms of sentences of
English, we will develop an artificial language that is considerably simpler than English. It will in
some ways be like English without some of the logically irrelevant aspects of English. And it will
lack some of the characteristics that make the use of English confusing when used in
argumentation. For example, the artificial language will lack some of the structural ambiguity of
English. Consider this English sentence:
Mary teaches little girls and boys.
Does this tell us that Mary teaches little girls and little boys, or that she teaches little girls and
regular-size boys? If this sentence occurred in an argument, the validity of the argument might
turn on how the sentence was read. In the artificial language to be developed, structural
ambiguities of this sort will be absent.
The artificial language will be especially designed to make logical form perspicuous. You are
already familiar with this from arithmetic. Consider the partly symbolic sentence:
For any two numbers x and y, x+y = y+x.
It is clear what this says. The same thing can also be said without any symbols:
Given any two numbers, the result of adding them together in one order is the same as
the result of adding them together in the reverse order.
It is apparent that the use of symbols makes the claim clear and vivid. Our logical symbolism
will be like this.
5 IDEALIZATIONS
The data that we have to deal with are incredibly complex, and this is only an introductory text.
So we will idealize from time to time. This is no different from any other art or science. In physics
you usually begin by studying the behavior of bodies falling in a perfectly uniform gravitational
field, or sliding down frictionless planes. There are no perfectly uniform gravitational fields, and
no frictionless planes, yet studying these things gives us clear and simple models that can be
applied to real phenomena as approximations. And then in the advanced courses you can learn
how friction affects the sliding, and how non-uniform fields affect the movement of things in them.
Here are some of the idealizations that we will make in this book: We will look only at arguments
with indicative sentences, not with imperative or interrogative ones. We will ignore any problems due
to vagueness. For example, given a perfect understanding of the situation, you may still be
unsure whether to say that Mary loves John, because of the vagueness of distinguishing between
loving and liking. We will also totally ignore the fact that sentences may change truth-value over
time and with differing situations. If I say today:
I'm feeling great!
this may be true, but the very same sentence may be false tomorrow. And it may be true when I
say it, yet false when someone else utters it. This "context dependence" of truth has aroused a
great deal of interest, and there are many theories about how it works. They all presuppose that
their readers have already learned the material in this book. We will pretend in our investigations
that sentences come with unique truth-values that do not change with context. The effects of
context constitute an advanced study.
We will also assume that each sentence is either true or false. Again, the question of whether,
and which, sentences lack truth-value is interesting, but is not to be pursued at the beginning.
Many other idealizations will become apparent as we proceed.
ANSWERS TO THE EXERCISES

SECTION 2

1. a. INVALID Any possible situation in which Lee was a president but neither of the
others was.
b. INVALID Any possible situation in which Polk was a president but neither of the
others was.
c. VALID
d. INVALID Any possible situation in which Whitney was a president and neither of
the others was.
2. a. True. (Such an argument will always have at least one false premise.)
b. False. Some do; some don't.
c. True.
d. False. Sometimes adding a premise converts an invalid argument into a valid one, and
sometimes it does not. It depends on what you add.
e. True. There can't be a possible situation in which it has all true premises and a false
conclusion because there can't be a possible situation in which it has all true premises.
f. True. There can't be a possible situation in which it has all true premises and a false
conclusion because there can't be a possible situation in which it has a false conclusion.
g. False. It might be valid, or it might be invalid.
4. It has to be valid. For suppose it were not. Then there would be a possible situation in which
A is true and C is false. Since the first argument is valid, B is true in this situation; but then since
the second argument is valid, C is also true in that situation, contradicting our supposition that
there is a situation in which A is true and C is false.
5. It has to be sound. It has to be valid for the same reason as in the previous example. And
since the first argument is sound, A is true. So its premise is true.
6. We know that at least one of them is invalid, but we don't know which. If they were both valid,
the first argument would have to be valid, as in exercise 4. So they aren't both valid. But there
are cases in which the first is valid and the second invalid, and cases in which the first is invalid
and the second valid, and cases in which they are both invalid.
First valid and second invalid:
A Polk was a president
B Polk or Lee was a president
C Lee was a president
First invalid and second valid:
A Polk was a president
B Polk and Lee were presidents
C Lee was a president
Both invalid:
A Polk was a president
B Lee was a president
C Whitney was a president
SECTION 3
1. a. Either McGovern or Nixon was president.
Either Nixon or Goldwater was president.
∴ Either McGovern or Goldwater was president.
b. The original argument will do; it already has all true premises and a false conclusion.
c. VALID.
d. Either Whitney or Polk was a president.
Lee was not a president.
∴ Lee wasn't a president and Whitney was.
2. a. False.
b. False. This does not show that no argument with that form has true premises and a false
conclusion.
c. False. You might not have looked hard enough.
3. a. VALID
b. INVALID If Polk and Lee were both presidents, Polk was a president.
Polk was a president.
∴ Polk and Lee were both presidents.
c. INVALID not (Polk was a president and Lee was a president)
not Lee was a president
∴ not Polk was a president
d. INVALID Lee or Polk was a president.
Polk was a president.
∴ Lee was a president.
e. VALID
f. VALID
g. VALID (This depends on interpreting `or' inclusively; this is discussed in chapter 2
below.)
4. a. Yes. It shows the original argument invalid, and an invalid argument is not sound.
b. No. The original argument could still be invalid, or have a false premise, or both.
Example:
Original argument: "Found" argument:
Lee was a president Nixon was a president.
∴ Whitney was a president ∴ Kennedy was a president.
c. It shows neither.
Examples:
Original unsound argument: "Found" argument:
Lee wasn't a president Nixon wasn't a president.
∴ Whitney wasn't a president ∴ Kennedy wasn't a president.
Original sound argument: "Found" argument:
Lee wasn't a president Nixon wasn't a president.
∴ Lee wasn't a president ∴ Nixon wasn't a president.
5. a. Some arguments with this form are sound: the ones with true premises.
But not all; some of them have false premises.
b. None are sound, since none are valid.
c. None are sound, since none are valid.
d. None are sound, since none are valid.
e. None are sound, since none has a true premise.
f. Some arguments with this form are sound: the ones with true premises.
But not all; some of them have false premises.
g. Some arguments with this form are sound: the ones with true premises.
But not all; some of them have false premises.
6. Many logicians think that there are arguments that are valid, but not formally valid. An
example is:
Herman is a bachelor
∴ Herman is unmarried
The validity of this argument comes from the meaning of the word 'bachelor', and not
from the form of the sentences in the argument.
As we have defined 'formally valid', any argument that is formally valid is automatically
valid.
Chapter One
Sentential Logic with 'if' and 'not'
1 SYMBOLIC NOTATION
In this chapter we begin the study of sentential logic. We start by formulating the basic part of the
symbolic notation mentioned in the Introduction. For purposes of this chapter and the next, our symbolic
sentences will consist entirely of simple sentences, called atomic sentences, together with molecular
sentences made by combining simpler ones with connectives. The simple sentences are capital letters,
which can be thought of as abbreviating sentences of English, as in the Introduction. In this chapter, the
connectives are the negation sign, '~', and the conditional sign, '→'.
The negation sign, '~', is used much as the word 'not' is used in English, to state the opposite of
what a given sentence says. For example, if 'P' abbreviates the sentence 'Polk was a president',
then '~P' abbreviates the sentence 'Polk was not a president'.
The conditional sign, '→', is used much as 'if . . . , then . . .' is used in English. If 'P' abbreviates
the sentence 'Polk was a president' and 'W' abbreviates 'Whitney was a president' then '(P→W)'
abbreviates the sentence 'If Polk was a president, then Whitney was a president'.
We need to be precise about exactly what the symbolic sentences of Chapter 1 are:
Some terminology:
• A symbolic sentence containing no connectives at all is an atomic sentence. In this chapter and
the next, only sentence letters are atomic.
• Any symbolic sentence that contains one or more connectives is called a molecular sentence.
• We call '~□' the negation of '□'.
• We call any symbolic sentence of the form '(□ → ○)' a conditional sentence; we call '□' the
antecedent of the conditional, and '○' the consequent of the conditional.
Examples of symbolic sentences with minimal complexity are:
U
~U
(U→V)
The first is an atomic sentence. The second is the negation of that atomic sentence. The last is a
conditional whose antecedent is the atomic sentence 'U' and whose consequent is the atomic sentence
'V'.
Once a molecular sentence is constructed, it can itself be combined with others to make more complex
molecular sentences:
~(U→V) it is not the case that if U then V
(~V → (U→V)) if it is not the case that V then if U then V
~~(V → (U→V)) it is not the case that it is not the case that if V then if U then V
The formation rules determine when parentheses occur in a symbolic sentence. When adding a negation
sign to a sentence you do not add any parentheses. These are not symbolic sentences because they
contain extra (prohibited) parentheses:
~(U), ~(~U), ~((U→V))
Although '~(U→V)' has a parenthesis immediately following the negation sign, that parenthesis got into the
sentence when constructing '(U→V)', and not because of the later addition of the negation sign.
When combining sentences with the conditional sign, parentheses are required. For example, this is not a
sentence:
U→V→W
There is one exception to the need for parentheses. If a sentence appears all by itself, not as part of a
larger sentence, then its outer parentheses may be omitted. So these sentences are taken informally to
be conditional symbolic sentences:
U→V
~U → V
(U→V) → ~U
Any well-formed sentence can be "parsed" into its constituents. You begin with the sentence itself, and
you indicate below it how it is constructed out of its constituents. First you locate the main connective,
which is the last connective introduced when constructing the sentence. If the sentence is a negation, the
main connective is the negation sign; you draw a vertical line under it and write the part of the sentence to
which the negation sign is applied. If it is a conditional, the main connective is the conditional sign; you
draw branching lines below the main conditional sign and write the antecedent and consequent:
~P               P→Q
 |              /   \
 P             P     Q
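The formation rules above can also be stated as a tiny recursive-descent parser, which doubles as a
well-formedness check for official notation. A minimal sketch in Python; the nested-tuple parse trees
are our own illustrative device:

    # A string parses exactly when it is a sentence in official notation;
    # the top of the returned tree is the main connective.
    def parse(s):
        tree, rest = parse_sentence(s)
        if rest:
            raise ValueError("leftover symbols: " + rest)
        return tree

    def parse_sentence(s):
        if not s:
            raise ValueError("unexpected end of sentence")
        if s[0] == '~':                        # negation adds no parentheses
            inner, rest = parse_sentence(s[1:])
            return ('~', inner), rest
        if s[0] == '(':                        # conditional: '(' A '→' B ')'
            ant, rest = parse_sentence(s[1:])
            if not rest or rest[0] != '→':
                raise ValueError("expected '→'")
            cons, rest = parse_sentence(rest[1:])
            if not rest or rest[0] != ')':
                raise ValueError("expected ')'")
            return ('→', ant, cons), rest[1:]
        if s[0].isupper():                     # a sentence letter
            return s[0], s[1:]
        raise ValueError("unexpected symbol: " + s[0])

    print(parse('~(U→V)'))      # ('~', ('→', 'U', 'V'))
    print(parse('(~V→(U→V))'))  # ('→', ('~', 'V'), ('→', 'U', 'V'))

Note that '~(U)' is rejected with "expected '→'", matching the prohibition above. Informal notation,
where outermost parentheses are dropped, can be accommodated by retrying a failed parse as
parse('(' + s + ')'); the sketch assumes spaces have been stripped first.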
EXERCISES
1. For each of the following state whether it is a sentence in official notation, or a sentence in informal
notation, or not a sentence at all. If it is a sentence, parse it as indicated above.
a. ~~~P
b. ~Q→~R
c. ~(Q~→R)
d. ~(~P)→~R
e. (P→Q) → (R→~Q)
f. P → (Q→R) → Q
g. (P → (Q→R) → Q)
h. (~S→R) → ((~R→S) → ~(~S→R))
i. P → (Q→P)
The conditional sign: The conditional sign is meant to capture some part of the logical import of 'if . . . ,
then . . .' in English. But it is not completely clear under what circumstances an 'if . . . , then . . .' claim in English is
true. It seems clear that any English sentence of the form 'If P then Q' is false when 'P' is true but 'Q' is
false. If you say 'If the Angels win there will be a thunderstorm', then if the Angels do win and if there is no
thunderstorm, what you said is false. In other cases things are not so clear. Consider these conditional
sentences uttered in normal circumstances:
If it rains, the game will be called off.
If the cheerleaders are late, the game will be called off.
Now suppose that it rains, and the cheerleaders are late, and the game is called off. Are the sentences
above true or false? Most people would be inclined to say that the first is true. But the second is less
obvious. After all, the game was not called off because the cheerleaders were late. So there is something
funny about the second sentence. If it is false, it will be impossible to capture the logical import of
conditionals by means of any truth functional connective. For the truth of the first sentence above requires
that some conditionals be true when both their parts are true, and the second would require that some
conditionals be false when both their parts are true.
However, you might hold that the second sentence above is true. Granted, the game was not called off
because the cheerleaders were late, but so what? The second sentence doesn't say anything at all about
why the game was called off. It only says that it will be called off if the cheerleaders are late; and they
were late, and the game was called off, so it is true. If so, perhaps conditionals are truth functional.
There is no universal agreement about how conditionals work in natural language. The position taken in
this text is that 'if . . . , then . . .' is sometimes used to express what is called the "material conditional". This
is the use of 'if . . . , then . . .' where a conditional sentence is false in case the antecedent is true and the
consequent false, and it is true in every other case. This use is truth functional. It is described by means
of this truth table:
□ ○ □→○
T T T
T F F
F T T
F F T
The conditional is used in this way by mathematicians, and by others. We will assume in doing exercises
and examples that the logical import of 'if . . . , then . . .' is intended to coincide with our symbolic '→'. There
may be other uses of 'if . . . , then . . .' that convey more than '→', but we will not address them in this text.
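Because '→' is truth functional, its table can be generated by simply looping over the four cases. A
one-loop sketch in Python:

    # The material conditional is false only for a true antecedent
    # together with a false consequent.
    print(" P      Q      P→Q")
    for p in (True, False):
        for q in (True, False):
            arrow = (not p) or q
            print(f"{str(p):6} {str(q):6} {arrow}")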
The word 'if' in English has many synonyms. In at least some contexts these are all interchangeable:
if If Maria sings, Xavier will leave
provided that Provided that Maria sings, Xavier will leave
assuming that Assuming that Maria sings, Xavier will leave
given that Given that Maria sings, Xavier will leave
in case In case Maria sings, Xavier will leave
on the condition that On the condition that Maria sings, Xavier will leave
Using 'S' for 'Maria sings' and 'X' for 'Xavier will leave', these can all be symbolized as
S→X
'If' clauses in English may also occur at the end of a sentence instead of at the beginning. So these also
may be symbolized as 'S→X':
Xavier will leave if Maria sings
Xavier will leave provided that Maria sings
Xavier will leave assuming that Maria sings
Xavier will leave given that Maria sings
Xavier will leave in case Maria sings
Xavier will leave on the condition that Maria sings
In either use, the word 'if' immediately precedes the antecedent of the conditional.
Many of these "synonyms" of 'if' can be used to say more than what is said with a simple use of the word
'if'. For example, a person who says 'assuming that' may want to convey that s/he is indeed making a
certain assumption, and not just saying 'if'. But in other contexts no assuming is indicated. A physicist
who says 'Assuming that there are planets with orbits outside the orbit of Pluto, we will need to send space
probes to investigate them' may simply be responding to the question 'What if there are planets beyond
Pluto?', and not doing any assuming at all. In doing the exercises we will take for granted that the
locutions identified above are being used in the most minimal sense of 'if', which we take to be that of the
connective '→'.
Only if: The word 'only' can be added to the word 'if', to make 'only if'. The 'only' has the effect of
reversing antecedent and consequent. As a result, whereas 'if', when used alone, immediately precedes
the antecedent of a conditional, 'only if' immediately precedes the consequent. So we have these
equivalences:
If P, Q P→Q
Only if P, Q Q→P
P if Q Q→P
P only if Q P→Q
Some will find it more natural to represent 'P only if Q' by 'If not Q then not P', or '~Q → ~P'. It will turn out
that this is logically equivalent to 'P → Q'. We will generally use the latter form because it's simpler.
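Since everything here is truth functional, that claimed equivalence can be verified by checking all four
combinations of truth-values. A quick sketch in Python:

    # Check that '~Q → ~P' and 'P → Q' agree in every case, which is why
    # either may be used to symbolize 'P only if Q'.
    for P_val in (True, False):
        for Q_val in (True, False):
            direct = (not P_val) or Q_val                       # P → Q
            contrapositive = (not (not Q_val)) or (not P_val)   # ~Q → ~P
            assert direct == contrapositive
    print("equivalent in all four cases")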
When 'only if' comes first, there are grammatical changes in the last clause, which converts into its
interrogative word order:
The game will be called off only if it rains = Only if it rains will the game be called off
'Only' may also precede any of the synonyms of 'if', so these all may be symbolized as 'X → S':
Xavier will leave only if Maria sings
Xavier will leave only provided that Maria sings
Xavier will leave only assuming that Maria sings
Xavier will leave only given that Maria sings
Xavier will leave only in case Maria sings
Xavier will leave only on the condition that Maria sings
Only if Maria sings will Xavier leave
Only provided that Maria sings will Xavier leave
Only assuming that Maria sings will Xavier leave
Only given that Maria sings will Xavier leave
Only in case Maria sings will Xavier leave
Only on the condition that Maria sings will Xavier leave
EXERCISES
For these exercises assume that 'S' abbreviates 'Susan will be late' and 'R' abbreviates 'It will rain'.
1. For each of the following sentences say which symbolic sentence is equivalent to it.
a. Only if it rains will Susan be late
S→R
R→S
b. Susan will be late provided that it rains
S→R
R→S
c. Susan won't be late
~S
~~S
d. Susan will be late only if it rains
S→R
R→S
e. Given that it rains, Susan will be late
S→R
R→S
2. Symbolize each of the following:
a. Susan will be late only provided that it rains
b. Only on condition that it rains will Susan be late
c. Susan will be late only in case it rains
d. Susan will be late only if it rains
e. It is not the case that Susan will be late
Complex sentences of English generally translate into complex sentences of the logical notation. Here it
is important to be clear about the grouping of clauses in the English sentence. Consider the sentence:
If Roberta doesn't call, Susan will be distraught
This is a conditional whose antecedent is a negation. Using 'P' for 'Roberta calls' and 'Q' for 'Susan will be
distraught', this may be symbolized:
~P → Q
It is not the negation of a conditional:
~(P→Q)
To make the negation of a conditional, you need to say something like:
It is not the case that if Roberta calls, Susan will be distraught
which is symbolized as:
~(P → Q)
There are a few fundamental principles that govern symbolizations of English sentences in the logical
notation.
SOURCES OF '~'
The locution 'fail to' always yields a negation sign that applies to the symbolization of the
smallest sentence that 'fail to' is part of.
Likewise for the word 'not'.
The expression 'it is not the case that' applies to a sentence immediately following it.
Notice that in the sentence 'It is not the case that Willa will leave if Sam does' there are two grammatical
sentences that immediately follow 'It is not the case that', namely, 'Willa will leave' and 'Willa will leave if
Sam does'. So there are two ways to symbolize the sentence:
S → ~W
~(S→W)
The sentence is in fact ambiguous.
SOURCES OF '→'
If: The word 'if' always gives rise to a conditional, '□→○'.
Wherever 'if' occurs (not as part of 'only if'), the antecedent of the conditional is the
symbolization of a sentence immediately following 'if'.
The consequent of the conditional is either the symbolization of a sentence immediately
preceding 'if' (with no comma in between) -- as in '○ if □' -- or it is the symbolization of a
sentence immediately following the sentence that is symbolized as the antecedent -- as in
'if □ then ○'.
Then: If 'then' occurs it must be paired with a preceding 'if'.
The antecedent of the conditional introduced by 'if' is the symbolization of the sentence exactly
between 'if' and 'then'.
Its consequent is the symbolization of a sentence immediately following 'then'.
Only if: The expression 'only if' always gives rise to a conditional, '□→○'.
The consequent of '□→○' is the symbolization of a sentence immediately following 'only if' --
as in '□ only if ○' -- or as in 'only if ○, □'.
The antecedent of '□→○' is the symbolization of a sentence immediately preceding 'only if'
(with no comma in between) -- '□ only if ○' -- or of a sentence immediately following the
consequent -- as in 'only if ○, □'. In the latter case, that sentence is grammatically changed
(to its interrogative word order).
Illustration: These principles determine that the sentence: 'Pat won't call only if it is not the case that the
quilt is dirty' is symbolized:
~P → ~Q
The (contracted) 'not' in 'Pat won't call' yields a negation that applies directly to 'P'. The 'it is not the case
that' yields a negation that applies directly to 'Q', since 'the quilt is dirty' is the only sentence immediately
following 'it is not the case that'. The only sentence immediately to the left of the 'only if' is 'Pat won't call',
so that is the antecedent of the conditional, and the only sentence immediately to the right of the 'only if' is
'it is not the case that the quilt is dirty', so that is the consequent.
Illustration: In the sentence 'If Wilma leaves then Xavier stays if Yolanda sings' the first 'if . . . , then . . .'
exactly encloses 'Wilma leaves', so W is the antecedent of the first conditional. There are two sentences
immediately following 'then'; they are the whole 'Xavier stays if Yolanda sings' and just 'Xavier stays'. So
the sentence must have the form:
W → (Xavier stays if Yolanda sings)
or
(W → X) if Yolanda sings
The second 'if' comes between its consequent and antecedent. It must give rise to a conditional that has
'Y' as its antecedent, since the only sentence following the 'if' is 'Yolanda sings'. The consequent of that
conditional can be the symbolization of either just 'Xavier stays', or 'If Wilma leaves then Xavier stays',
since each of these immediately precedes the 'if'. The first of these fits with the first partial symbolization
above, giving:
W → (Y→X)
and the second fits with:
Y → (W→X).
Both of these symbolizations are possible, which agrees with the intuition that the original English
sentence is ambiguous. (Some people find the first reading more natural than the second, but the second
is a possible reading under some circumstances.)
Illustration: In the sentence 'If Wilma leaves then Xavier stays only if Yolanda sings' the first 'if' is exactly
as in the previous case, so the sentence has the form:
W → (Xavier stays only if Yolanda sings)
or
(W → X) only if Yolanda sings
The 'only if' comes between its antecedent and consequent. It must give rise to a conditional that has 'Y'
as its consequent, since the only sentence following the 'only if' is 'Yolanda sings'. The antecedent of that
conditional can be the symbolization of either just 'Xavier stays', or 'If Wilma leaves then Xavier stays',
since each of these immediately precedes the 'only if'. The first of these fits with the first partial
symbolization above, giving:
W → (X→Y)
and the second fits with:
(W→X) → Y
Again, the sentence is ambiguous.
The principles above also apply when 'if' is replaced by one of its synonyms, such as 'given that'.
So 'If Wilma leaves then Xavier stays provided that Yolanda sings' has the same symbolization options as
'If Wilma leaves then Xavier stays if Yolanda sings':
W → (Y→X)
and
Y → (W→X).
Commas: We have seen that the fundamental principles governing words that yield negations and
conditionals can permit a significant amount of ambiguity. A common way to eliminate such ambiguity
from sentences is to use commas to indicate how parts of the symbolization are to be grouped. Commas
are used for a wide variety of purposes, so the presence of a comma may be irrelevant to the
symbolization. But sometimes they are used to indicate that sentences should be grouped together, that
is, combined into a single sentence. When this happens, the comma appears right after the sentence that
results from the grouping. Or, a comma may be used to indicate that sentences to the right should be
grouped together.
COMMAS
A comma indicates that the symbolizations of sentences to its left should be combined into a
single sentence, or that sentences to its right should be combined into a single sentence.
EXERCISES
For these questions please use the following scheme of abbreviation:
V Veronica will leave
W William will leave
Y Yolanda will leave
1. For each of the following say which of the proposed translations are correct.
a. If Veronica doesn’t leave William won’t either
~(V→W)
~V→~W
V → ~~W
b. William will leave if Yolanda does, provided that Veronica doesn’t
(W→Y) → ~V
V → (W→Y)
~V → (Y→W)
c. If Yolanda doesn’t leave, then Veronica will leave only if William doesn’t
~Y → (~W → V)
~Y → (V → ~W)
~W → (~Y→V)
d. If Yolanda doesn’t leave then Veronica will leave, given that William doesn’t
~Y → (~W → V)
~Y → (V → ~W)
~W → (~Y→V)
4 RULES
A "rule" is a particular valid form of argument which may be used in extended reasoning. Certain rules
involving negations and conditionals have been recognized for centuries, and they have traditional names.
The basic rules used in this chapter are:
RULES

Repetition (r):           □          ∴ □

Modus ponens (mp):        □→○    □   ∴ ○

Modus tollens (mt):       □→○    ~○  ∴ ~□

Double negation (dn):     □  ∴ ~~□         ~~□  ∴ □
Repetition is the most trivial rule; it indicates that if you have any sentence you may validly infer it from
itself. Although trivial, this rule will have an important use later when we construct a method of showing
that an argument is valid.
Modus ponens indicates that if you have any conditional sentence along with its antecedent you may
infer its consequent. For example, this valid argument is an instance of modus ponens:
If Polk was a president, so was Whitney P→W
Polk was a president P
∴ Whitney was a president ∴ W
This rule may be justified by noting that a conditional with a true antecedent and false consequent is false.
Modus Tollens indicates that if you have any conditional sentence along with the negation of its
consequent, you may infer the negation of its antecedent. For example, this valid argument is an instance
of modus tollens:
If Polk was a president, so was Whitney P→W
Whitney wasn't a president ~W
∴ Polk wasn't a president ∴ ~P
This rule may be justified in a similar way to that used in justifying modus ponens.
Double negation indicates that from any sentence you may infer the result of putting two negation signs
on the front, or vice versa. For example, both of these valid arguments are instances of double negation:
Polk was a president P
∴ It is not the case that Polk wasn't a president ∴ ~~P
It is not the case that Polk wasn't a president ~~P
∴ Polk was a president ∴ P
It should be obvious upon reflection that any argument whose conclusion follows from its premises by a
single application of one of these rules is formally valid. That is, there cannot be a situation in which it has
true premises and a false conclusion.
These rules apply to anything that fits their pattern, even if it is complex. For example, this is an instance
of modus ponens in which the antecedent of the conditional is itself a conditional:

(Q→R) → ~S
Q→R
∴ ~S
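Each rule can be pictured as a simple pattern-matching operation on parsed sentences. A minimal
sketch in Python, reusing the nested-tuple representation from the parser sketch in section 1, with
('~', A) for a negation and ('→', A, B) for a conditional; this machinery is ours, purely for illustration:

    def mp(cond, ant):
        """Modus ponens: from A→B and A, infer B."""
        assert cond[0] == '→' and cond[1] == ant
        return cond[2]

    def mt(cond, neg_cons):
        """Modus tollens: from A→B and ~B, infer ~A."""
        assert cond[0] == '→' and neg_cons == ('~', cond[2])
        return ('~', cond[1])

    def dn_add(s):                 # from A infer ~~A
        return ('~', ('~', s))

    def dn_drop(s):                # from ~~A infer A
        assert s[0] == '~' and s[1][0] == '~'
        return s[1][1]

    # P→W together with P yields W; P→W together with ~W yields ~P:
    print(mp(('→', 'P', 'W'), 'P'))          # W
    print(mt(('→', 'P', 'W'), ('~', 'W')))   # ('~', 'P')

The assertions enforce that the rule applies only when the pattern genuinely fits, no matter how
complex the parts are.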
EXERCISES
1. For each of the following arguments, say whether it is an instance of modus ponens, or modus tollens,
or double negation, or none of the above.
a. P→~Q
   Q
   ∴ ~P

b. ~P → Q
   ∴ ~(P → Q)

c. ~~(P→Q)
   ∴ P→Q

d. ~P → ~Q
   ~P
   ∴ ~Q

e. ~P → ~Q
   ~~Q
   ∴ ~~P

f. P→Q
   ~R
   ∴ ~P
5 DIRECT DERIVATIONS
Complex reasoning often consists of stringing together simple inferences so as to show the validity of a
complex argument. Here is an example. We are given the following premises and conclusion:
If Polk was a president then if Whitney was a president so was Trump.
Polk wasn't a president only if Trump was a president.
Trump wasn't a president.
∴ Whitney wasn't a president.
In symbols, the argument is:

P→(W→T)
~P→T
~T
∴ ~W

The reasoning may be laid out in numbered lines as follows (line 1 is reserved for a special line to be
introduced shortly):

2. ~T pr
3. ~P→T pr
4. ~~P 2 3 mt
5. P 4 dn
6. P→(W→T) pr
7. W→T 5 6 mp
8. ~T pr
9. ~W 7 8 mt

Lines 2-4 indicate that the first part of the reasoning appeals to two of the premises of the argument, and it
draws a conclusion from them by modus tollens. Line 5 indicates that the reasoning goes from line 4 to 5
by double negation. Lines 6 and 7 indicate that line 5 together with the first premise lead to the sentence
on line 7 by modus ponens. Finally, lines 8 and 9 indicate that line 7 together with the third premise lead
to the conclusion of the argument.
This layout of premises and inferences constitutes most of the ingredients of what we will call a derivation.
Every line consists of a line number followed by a sentence followed by a justification. The sentence on
each line either (i) occurs as a premise, and the line is justified by writing "pr", or (ii) follows from previous
lines by a rule, and the line is justified by writing the number(s) of the line(s) from which it follows, along
with a short name of the rule. The short names of the rules that we have so far are “r”, "mp", "mt", and
"dn".
The particular approach taken here is to see a derivation as carrying out a task. Each task is to show that
a sentence follows from certain things. Our derivations begin with a special line stating the task; that is,
stating what is to be shown. In the sequence of steps above it would come first, and would be of this form:
1. Show ~W
A "show" line may be introduced at any time, and it does not need a justification, because it only states
what it is we intend to derive. All other lines need justifications.
Suppose a derivation is to be constructed, guided by the reasoning given above. Following the 'show' line
we repeat two of the premises, justifying them with the notation 'pr':
1. Show ~W
2. ~T pr
3. ~P→T pr
From these two lines we infer a third by modus tollens:
1. Show ~W
2. ~T pr
3. ~P→T pr
4. ~~P 2 3 mt
Continuing as before, we eventually derive the sentence '~W' that we are trying to show. At that point
we write "dd" (for "direct derivation"), draw a line through the word "Show", and draw a box around
lines 2-9:

1. Show ~W                        Cancel the "Show"
2. ~T pr
3. ~P→T pr
4. ~~P 2 3 mt
5. P 4 dn                         Box the lines
6. P→(W→T) pr
7. W→T 5 6 mp
8. ~T pr
9. ~W 7 8 mt dd                   Write "dd"

The cancellation of the show line indicates that the task was successfully completed, and the boxing
encloses the lines used in completing that task.
It is also permissible to wait and write the "dd" on a later line. Such a line contains no sentence itself; its
justification consists of the number of the line where the target sentence occurs, followed by "dd". Here is
the same derivation with the dd justification on a later line:
1. Show ~W
2. ~T pr
3. ~P→T pr
4. ~~P 2 3 mt
5. P 4 dn
6. P→(W→T) pr
7. W→T 5 6 mp
8. ~T pr
9. ~W 7 8 mt
10. 9 dd Empty line with "dd"
It is often a matter of taste which technique to use for indicating the completion of a direct derivation.
Here is another illustration of a direct derivation, used to show this argument valid:
Q→~S
V→X
~V→S
~X
∴ ~Q
The derivation begins with a line indicating that the task is to show the conclusion of the argument:
1. Show ~Q
The next few lines give the reasoning steps:
2. V→X pr
3. ~X pr
4. ~V 2 3 mt
5. ~V→S pr
6. S 4 5 mp
7. ~~S 6 dn
8. Q→~S pr
9. ~Q 7 8 mt
On line 9 we have completed the task. So we write "dd" and box and cancel:
1. Show ~Q
2. V→X pr
3. ~X pr
4. ~V 2 3 mt
5. ~V→S pr
6. S 4 5 mp
7. ~~S 6 dn
8. Q→~S pr
9. ~Q 7 8 mt dd
(Line 7 is necessary before using modus tollens with line 8. This form of inference is indeed valid:
□→~○
○
∴ ~□
however, it is not itself an instance of modus tollens. It is instead an inference that is easily justified using
double negation along with modus tollens.)
We indent all of the lines immediately following a show line; this is a device for keeping track (by
indentation) of where the task that is initiated by the "show" is being carried out. The indentation also
reserves a space for the box that will be drawn if the derivation is successful.
In getting precise about how to construct a direct derivation, it will help to specify what previously occurring
things can be appealed to when applying a rule. We will say that a previous line is available from a given
line just in case it is an earlier line that is not an uncancelled show line and is not already in a box:
In a derivation from a set P of premises, a line is available from a given line just in
case it is a member of P or it is an earlier line that is not an uncancelled show line
and is not already in a box.
Whether a line is available or not depends on your perspective. A show line is not available from the line
immediately below it -- because it is not yet cancelled. But once it is cancelled, it is available from all lines
below the line from which it was cancelled. And a line may be available from a given line, but once it is
boxed, it is not available from any line outside the box.
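The availability definition lends itself to a direct restatement as a predicate on records of lines. A
minimal sketch in Python, with a record format of our own devising (indices are 0-based, so line 1 of
a derivation is record 0):

    # Each record notes whether its line is a show line, whether that show
    # line has been cancelled, and whether the line has been boxed.
    def available(lines, i, j):
        """Is line i available from line j?  (Premises enter by the
        separate 'pr' justification, so they are not modeled here.)"""
        earlier   = i < j
        boxed     = lines[i]['boxed']
        open_show = lines[i]['show'] and not lines[i]['cancelled']
        return earlier and not boxed and not open_show

    lines = [
        {'show': True,  'cancelled': False, 'boxed': False},  # 1. Show ~W
        {'show': False, 'cancelled': False, 'boxed': False},  # 2. ~T   pr
    ]
    print(available(lines, 0, 1))  # False: the show line is not yet cancelled
    print(available(lines, 1, 2))  # True: a later line may cite line 2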
A direct derivation from a set of sentences P consists of a sequence of lines (including justifications
when appropriate) that is built up, step by step, where each step is in accordance with these provisions:
• A show line consists of the word "Show" followed by a sentence. The first step of producing a
derivation must be to introduce a show line. A show line also may be introduced at any later step.
Show lines are not given a justification.
• At any step, any sentence from the set of sentences, P, may be introduced, justified with the
notation "pr".
• At any step a line may be introduced if it follows by a rule from previous available lines in the
derivation; it is justified by citing the numbers of those previous lines and the name of the rule.
• If a line is introduced whose sentence is the same as the sentence in the closest previous
uncancelled show line, one may, as the next step, write "dd" at the end of that line, draw a line
through the word "Show", and draw a box around all the lines below the show line, including the
current line.
• (Alternatively) At any step, if any previous available line contains a sentence that is the same as
that in the closest previous uncancelled show line, one may introduce a line with no sentence on
it, justifying it by citing the number of the earlier line followed by "dd"; one then draws a line
through the word "Show", and draws a box around all the lines below that show line, including the
current line.
These instructions show how to construct a derivation in a step by step fashion. Any number of steps
results in a derivation, but not necessarily one that completes any of the tasks set out by its show lines.
For example, when constructing a derivation above, at a certain stage we had reached this far in the
process:
1. Show ~W
2. ~T pr
3. ~P→T pr
4. ~~P 2 3 mt
This sequence of lines satisfies the conditions for being a derivation as defined above. But there is a
sense in which it is not yet finished. For this purpose we define a "complete" derivation:
A derivation is complete if every show line is cancelled and every line that is not a show
line is boxed.
The point of doing a derivation is often to show that a certain argument is formally valid. When a
derivation shows that an argument is formally valid, we say that it "validates" the argument:

A complete derivation from the premises of an argument validates the argument if its
first show line is "Show" followed by the conclusion of the argument.
EXERCISES
1. Check through each line of the following direct derivations to determine whether it can be constructed
by means of the provisions for direct derivations given above, where the set P is taken to be the premises
of the displayed arguments. (When assessing a given line, assume that all previous lines are correct.)
Argument: P → (Q→~R)
~P → ~Q
Q
∴ ~R
1. Show ~R
2. Q pr
3. ~~Q 2 dn
4. ~P → ~Q pr
5. ~~P 3 4 mt
6. P 6 dn
7. P → (Q→~R) pr
8. Q→~R 6 7 mp
9. ~R 2 8 mp dd
Argument: P → (R→~Q)
~P → ~Q
Q
∴ ~R
1. Show ~R
2. Q pr
3. ~P→~Q 2 pr
4. P 2 3 mt
5. R→~Q 4 mp
6. ~~Q 2 dn
7. ~~R 5 6 mt
8. R 7 dn
9. ~Q 5 8 mp dd
Argument: ~O
S → (W→~O)
O→S
W
∴ ~S
1. Show ~S
2. W pr
3. ~S pr
4. O→S pr
5. ~O 3 4 mt
6. W pr
7. ~(W→~O) 5 6 mt
8. S → (W→~O) pr
9. ~S 7 8 mt
10. 9 dd
2. Construct correct derivations to validate each of the following arguments.

W → ~(V→~Y)
X → (V→~Y)
V→Y
(V→Y) → X
∴ ~W
(W→Z) → (Z→W)
(Z→W) → ~X
P→X
~~P
∴ ~(W→Z)
6 CONDITIONAL DERIVATIONS
In this section we learn about one of the most powerful and useful procedures for constructing proofs in a
natural way. The procedure is called conditional derivation. It is meant to reflect a natural reasoning
process that involves hypothetical inference. Suppose that you wish to show that the following argument
is valid:
If Robert drives, Sam won't drive.
If Sam doesn't drive, Teresa won't go.
Willa will go only if Teresa does.
∴ If Robert drives, Willa won't go.
If we try to reason as above using our rules mp, mt, and dn, we will not succeed; none of them apply to the
premises we are given. What you would probably do on your own is to reason somewhat as follows:
ASSUME that Robert drives
Well, if he drives, Sam won't (given); so Sam won't drive
But if Sam doesn't drive, Teresa won't go (given), so Teresa won't go.
But Willa will go only if Teresa does (given), so Willa won't go.
So, SUMMING UP, if Robert drives, Willa won't go.
The middle three steps look familiar; they are inferences from premises and previously stated sentences,
and they are all justifiable by rules that we have. But the first and last steps are new. What does it mean
to "assume", as we have done in the first step, and what is this "summing up" in the last step? What role
do these have as legitimate parts of a piece of reasoning?
Here is what goes on in "conditional" reasoning. Our goal is to show that a certain conditional sentence
follows from certain premises. (In the example above, the conditional sentence is 'If Robert drives, Willa
won't go'.) We then "assume" the antecedent of the conditional. If we can use this to derive the
consequent of the conditional, we conclude that this reasoning has shown the conditional itself to follow
from the given premises. An example:
R→~S
~S→~T
W→T
∴ R→~W
A derivation using the conditional derivation technique begins with a line specifying the task, which is to
show the conclusion. This is followed by an assumption of the antecedent of the conditional to be shown:
1. Show R→~W
2. R ass cd assumption for conditional derivation
(the goal is now to derive the consequent: ~W)
The reasoning in the center of the derivation proceeds normally:
3. R→~S pr
4. ~S 2 3 mp
5. ~S→~T pr
6. ~T 4 5 mp
7. W→T pr
8. ~W 6 7 mt
Now that we have derived the consequent of the conditional on line 8, we cite "cd", and we box and
cancel:
1. Show R→~W
2. R ass cd Cancel the "Show"
3. R→~S pr
4. ~S 2 3 mp
5. ~S→~T pr Box the lines
6. ~T 4 5 mp
7. W→T pr
8. ~W 6 7 mt cd Write "cd"
Lines 2-8 show that given the premises, we may derive '~W' from 'R'. Our conditional derivation technique
says that this amounts to showing that those premises validate 'R→~W', so we may box and cancel.
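It is worth pausing over why this is legitimate. Semantically, the premises validate 'R→~W' exactly when the premises together with 'R' validate '~W': assuming the antecedent loses no generality, because every truth assignment is still considered. For readers who like to check such claims mechanically, here is a minimal Python sketch (the helper names imp and entails are ours, purely illustrative; this is no part of the official derivation system) that tests both claims over all sixteen truth assignments:
from itertools import product

def imp(x, y):
    """Truth table of the conditional: false only when x is true and y is false."""
    return (not x) or y

def entails(premises, conclusion, letters):
    """True when every assignment making all premises true makes the conclusion true."""
    for values in product([True, False], repeat=len(letters)):
        a = dict(zip(letters, values))
        if all(p(a) for p in premises) and not conclusion(a):
            return False
    return True

prems = [lambda a: imp(a['R'], not a['S']),      # R -> ~S
         lambda a: imp(not a['S'], not a['T']),  # ~S -> ~T
         lambda a: imp(a['W'], a['T'])]          # W -> T

# the premises validate 'R -> ~W' ...
print(entails(prems, lambda a: imp(a['R'], not a['W']), 'RSTW'))          # True
# ... exactly when the premises plus 'R' validate '~W'
print(entails(prems + [lambda a: a['R']], lambda a: not a['W'], 'RSTW'))  # True
Both checks print True, which is what the conditional derivation technique leads us to expect.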
Here is another example used to show that the following very short argument is valid:
S
∴ (S→R)→R
1. Show (S→R)→R
2. S→R ass cd <the goal is now to derive the consequent: R>
3. S pr
4. R 2 3 mp
We have assumed the antecedent of the conditional on the show line, and we have now succeeded in
deriving the consequent of that conditional. So we may use the technique of conditional derivation:
1. Show (S→R)→R
2. S→R ass cd
3. S pr
4. R 2 3 mp cd
EXERCISES
1. For each of the following derivations, determine which lines are correct and which incorrect. (In
assessing a line, assume that previous lines are correct.)
a. P → (~Q→R)
~R
∴ P→Q
1. Show P → Q
2. P ass cd
3. P → (~Q→R) pr
4. ~Q → R 2 3 mp
5. ~R pr
6. ~~Q 4 5 mt
7. Q 6 dn cd
b. P → (Q→~R)
R
∴ P → ~Q
1. Show P → ~Q
2. P ass cd
3. ~Q 1 2 mp
4. P → (Q→~R) pr
5. Q → ~R 2 4 mp
6. R pr
7. ~Q 5 6 mt cd
c. ~S → (Q→R)
R → ~(Q→R)
~P → R
∴ P → ~S
1. Show P → ~S
2. ~S ass cd
3. Q→R 2 mp
4. ~~(Q→R) 3 dn
5. R → ~(Q→R) pr
6. ~R 4 5 mt
7. ~P → R pr
8. ~~P 6 7 mt
9. P 8 dn cd
2. Construct correct derivations for each of the following arguments using conditional derivations.
a. P → (Q → (R→S))
~Q → ~R
R
∴ P→S
b. Q → ~(R→S)
P → (R→S)
~Q → R
∴ P→S
c. U → (U→V)
~R → ~(U→V)
R → ~S
∴ U → ~S
3. Symbolize the following arguments using the sentence letters given, and then give derivations to
validate them.
a. If Seymour likes papayas he'll have them for tea. He won't have them for tea if we don't have any.
If we didn't shop yesterday, we don't have any papayas. So if Seymour likes papayas, we
shopped yesterday. (P: Seymour likes papayas; T: Seymour will have papayas for tea; X: We
have some papayas; S: We shopped yesterday.)
b. If today is Thursday, then Saturday is two days from now. If the party is on Saturday, then if
Saturday is two days from now, then so is the party. I can't go to the party if it's two days from
now. The party is on Saturday. So if today is Thursday, I can't go to the party. (T: Today is
Thursday; S: Saturday is two days from now; P: The party is two days from now; Y: The party is
on Saturday; X: I can go to the party.)
c. If Samantha is at home, she won't cause any trouble. If she isn't at home, she can't be reached
by telephone. If she can't be reached by telephone, it's too late to tell her about the party. If she
comes to the party she will cause some trouble. So if it's not too late to tell her about the party,
she won't come. (S: Samantha is at home; T: Samantha will cause trouble; R: Samantha can be
reached by telephone; X: It's too late to tell Samantha about the party; Y: Samantha will come to
the party.)
7 INDIRECT DERIVATIONS
There is a third technique for doing derivations, called indirect derivation. It is often used in cases where
direct derivation and conditional derivation do not obviously apply. An example is an attempt to show that
this inference is valid:
Polk was a president
Whitney wasn't a president
∴ It is not the case that if Polk was a president, so was Whitney
The conclusion is not a conditional, so a straightforward application of conditional derivation does not
seem possible. Nor is it clear how to derive the conclusion using mp, mt, or dn. To validate this argument
you might reason as follows:
We want to show that it is not the case that if Polk was a president, so was Whitney. Well,
assume the opposite: assume that it is the case that if Polk was a president then so was Whitney.
Then, since we are given that Polk was a president, so was Whitney. But we are given that
Whitney wasn't. So we are led to absurd conclusions: Whitney was a president and Whitney was
not a president. So the assumption we made, which led to these inferences, must not be true
(given the premises of the argument).
The reasoning is called indirect because in order to show something, you assume the opposite and derive
contradictory sentences from it (along with the premises). (Sentences are contradictory when one is the
negation of the other.) If you succeed in doing this, you have shown that the negation of what you are
trying to derive isn't true; it can’t be true because it entails contradictory sentences. So what you are trying
to derive must itself be true.
The technique of indirect derivation has two parts. First, there is a new kind of assumption: immediately
following a show line you may assume the opposite of the sentence on the show line. (The opposite of the
sentence is its negation, or its "unnegation" if it is already a negation.) Then when you have two
sentences, one of which is the negation of the other, after the last one derived you add the line number of
the other and write "id" for "indirect derivation"; then you box and cancel. (Alternatively, you may write a
later line with no sentence on it, citing the line numbers of both of the contradictory sentences, write "id";
and then box and cancel.)
1. Show ~(P→W)
2. P→W ass id assumption for indirect derivation
3. P pr
4. W 2 3 mp
5. ~W pr
Having reached line 5 we may add to it the line number 4 and "id"; then box and cancel:
1. Show ~(P→W)
2. P→W ass id Cancel the "Show"
3. P pr Box the lines
4. W 2 3 mp
5. ~W pr 4 id Write "id"
The "4 id" at the end of line 5 indicates that the sentence on line 4 contradicts that on the current line, line
5, and thus the assumption (on line 2) which led to them must be false (given the premises of the
argument).
THINKING UP DERIVATIONS: Suppose that you have an argument and you are not sure whether or not
it is valid. You can show it not to be valid by producing what is usually called a "counter-example" -- a
logically possible situation in which its premises are all true and its conclusion false. If you get a counter-
example, the argument is invalid. Suppose that you aren't able to find a counter-example. That might be
because the argument is valid, or it might be because you haven't been lucky enough or clever enough to
find a counter-example. So failing to find a counter-example, by itself, shows nothing. However,
sometimes when you try to find a counter-example, and you fail, this is because any attempt to make the
premises true and conclusion false leads you to assign opposite truth values to some sentence. If this
happens to you, your failure can be used as a guide to producing a derivation that validates the argument.
Typically, it is a guide to producing an indirect derivation.
Here is an example. You are given the argument:
P → (T→Z)
T
Z → ~P
∴ ~P
To look for a counter-example, you try to make the premises true and the conclusion false. Making '~P'
false means making 'P' true. With 'P' true, the first premise is true only if 'T→Z' is true; and since the
second premise requires 'T' to be true, 'Z' must be true as well. But then the third premise requires '~P'
to be true -- Oops; we have already made 'P' true. Every attempt ends this way.
Here is a derivation based on that reasoning. The assumption that 'P' is true is just like an assumption for
purposes of indirect derivation:
purposes of indirect derivation:
1. Show ~P
2. P ass id
Now the next few steps of our attempt to get a counter-example are paralleled by familiar steps in a
derivation:
1. Show ~P
2. P ass id
3. P → (T→Z) pr
4. T→Z 2 3 mp
5. T pr
6. Z 4 5 mp
7. Z → ~P pr
8. ~P 6 7 mp
The "Oops" in the failed counter-example search parallels the fact that we now have contradictories on
lines 2 and 8. So we may box and cancel:
1. Show ~P
2. P ass id
3. P → (T→Z) pr
4. T→Z 2 3 mp
5. T pr
6. Z 4 5 mp
7. Z → ~P pr
8. ~P 6 7 mp 2 id
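The failed search for a counter-example can itself be made mechanical. Here is a minimal Python sketch (purely illustrative; the variable names are ours) that tries all eight truth assignments for this argument and confirms that none makes the premises true and the conclusion false:
from itertools import product

def imp(x, y):
    """Truth table of the conditional."""
    return (not x) or y

failures = []
for P, T, Z in product([True, False], repeat=3):
    premises_true = imp(P, imp(T, Z)) and T and imp(Z, not P)
    conclusion_false = P            # '~P' is false exactly when 'P' is true
    if premises_true and conclusion_false:
        failures.append((P, T, Z))

print(failures)   # [] -- every attempt meets the "Oops", so no counter-example exists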
EXERCISES
1. For each of the following derivations, determine which lines are correct and which incorrect. (In
assessing a line, assume that previous lines are correct.)
a. R → (S → T)
S
~T
∴ ~R
1. Show ~R
2. R ass id
3. R → (S → T) pr
4. S→T 2 3 mp
5. S pr
6. T 4 5 mp
7. ~T pr 6 id
b. ~S → P
R
S → ~R
∴ P
1. Show P
2. R pr
3. ~S → ~R pr
4. ~S 2 3 mt
5. ~P ass id
6. ~S → P pr
7. ~~S 5 6 mt 5 id
c. U → (V→~W)
X → (U→V)
(V → ~W) → X
∴ U→V
1. Show U → V
2. U → ~V ass id
3. U → (V→~W) pr
4. ~V 2 3 mt
5. X → (U→V) pr
6. ~X 2 5 mt
7. ~(U→V) 5 6 mt 2 id
2. Construct correct derivations for each of the following arguments using indirect derivations.
a. ~Q → R
S → ~R
~S → Q
∴ Q
b. (P→Q) → R
S → (P→Q)
~S → R
∴ R
c. ~P → (R→S)
(R→S) → T
~T
Q → (R→S)
∴ ~(P → Q)
8 SUBDERIVATIONS
Reasoning can be intricate. One way in which this happens is when derivations occur within derivations.
Consider the following argument in symbolic form:
~(R→Q) → P
P → (~Q→Q)
~Q
∴ ~R
1. Show ~R
It is not clear how to directly derive ~R using our rules mp, mt, and dn. A conditional derivation is not
applicable because ~R is not a conditional. One could assume R for purposes of doing an indirect
derivation, but it is not clear how to proceed from there. An alternative approach is to use a derivation
within the main derivation. Here is one way to proceed:
Try to derive the negation of ‘~Q→Q’ from the third premise, and then use modus tollens on the
second, and then on the first, premise to get R→Q, which then leads to the desired conclusion
using mt and the third premise.
The first part of this strategy -- deriving ‘~(~Q→Q)’ from the third premise -- requires a derivation of its
own. You can do this as an indirect derivation, leaving you with:
1. Show ~R
2. Show ~(~Q→Q)
3. ~Q→Q ass id
4. ~Q pr
5. Q 3 4 mp 4 id
You now may use line 2 just as you would use any other line; once the "Show" is cancelled, the line is no
longer something you are trying to derive, it is something you have derived. (A cancelled 'show' means
"shown".) You proceed:
1. Show ~R
2. Show ~(~Q→Q)
3. ~Q→Q ass id
4. ~Q pr
5. Q 3 4 mp 4 id
6. P → (~Q→Q) pr
7. ~P 2 6 mt
8. ~(R→Q) → P pr
9. ~~(R→Q) 7 8 mt
10. R→Q 9 dn
11. ~Q pr
12. ~R 10 11 mt
Line 12 contains '~R', the very sentence we set out to show, so we may box and cancel, writing "dd":
1. Show ~R
2. Show ~(~Q→Q)
3. ~Q→Q ass id
4. ~Q pr
5. Q 3 4 mp 4 id
6. P → (~Q→Q) pr
7. ~P 2 6 mt
8. ~(R→Q) → P pr
9. ~~(R→Q) 7 8 mt
10. R→Q 9 dn
11. ~Q pr
12. ~R 10 11 mt dd
Subderivations may also occur inside conditional derivations. Here is an example whose premises are
'Q→~P', '(P→~Q) → (R→S)', and 'T→R', and whose conclusion is 'T→S'. After assuming 'T', a conditional
subderivation is used to obtain 'P→~Q':
1. Show T→S
2. T ass cd
3. Show P→~Q
4. P ass cd
5. ~~P 4 dn
6. Q→~P pr
7. ~Q 5 6 mt cd
8. (P→~Q) → (R→S) pr
9. R→S 3 8 mp
10. T→R pr
11. R 2 10 mp
12. S 9 11 mp
Line 12 completes the main conditional derivation, so we box and cancel, and we are done:
1. Show T→S
2. T ass cd
3. Show P→~Q
4. P ass cd
5. ~~P 4 dn
6. Q→~P pr
7. ~Q 5 6 mt cd
8. (P→~Q) → (R→S) pr
9. R→S 3 8 mp
10. T→R pr
11. R 2 10 mp
12. S 9 11 mp cd
Now that we have derivations within derivations, the availability of previous lines for the purpose of
applying rules can change from line to line; a line that is not available at one point can become available
later, and one that is available may become unavailable. Examples:
At line 4 above, line 3 is not available, because from the point of view of line 4, line 3 is an
uncancelled show line. But the 'show' on line 3 is cancelled at line 7, so from the point of view of
line 8, line 3 is available.
On the other hand, at line 6, line 5 is available; but line 5 is no longer available at line 8, because it
has been boxed at line 7.
We can now state explicitly what may appear in a derivation which may contain subderivations:
DERIVATIONS
A derivation from a set of sentences P consists of a sequence of lines that is built up in order, step by
step, where each step is in accordance with these provisions:
• Show line: A show line consists of the word "Show" followed by a symbolic sentence. A show
line may be introduced at any step. Show lines are not given a justification.
• Premise: At any step, any symbolic sentence from the set P may be introduced, justified with the
notation "pr".
• Rule: At any step, a line may be introduced if it follows by a rule from sentences on previous
available lines; it is justified by citing the numbers of those previous lines and the name of the rule.
• Direct derivation: When a line (which is not a show line) is introduced whose sentence is the
same as the sentence on the closest previous uncancelled show line, one may, as the next step,
write "dd" following the justification for that line, draw a line through the word "Show", and draw a
box around all the lines below the show line, including the current line.
• Assumption for conditional derivation: When a show line with a conditional sentence is
introduced, as the next step one may introduce an immediately following line with the antecedent
of the conditional on it; the justification is "ass cd".
• Conditional derivation: When a line (which is not a show line) is introduced whose sentence is
the same as the consequent of the conditional sentence on the closest previous uncancelled
show line, one may, as the next step, write "cd" at the end of that line, draw a line through the
word "Show", and draw a box around all the lines below the show line, including the current line.
• Assumption for indirect derivation: When a show line is introduced, as the next step one may
introduce an immediately following line with the [un]negation of the sentence on the show line; the
justification is "ass id".
• Indirect derivation: When a sentence is introduced on a line which is not a show line, if there is a
previous available line containing the [un]negation of that sentence, and if there is no uncancelled
show line between the two sentences, as the next step you may write the line number of the first
sentence followed by "id" at the end of the line with the second sentence. Then you cancel the
closest previous "show", and box all sentences below that show line, including the current line.
Except for steps that involve boxing and canceling, every step introduces a line. When writing out a
derivation, every line that is introduced is written directly below previously introduced lines.
Optional variant: When boxing and canceling with direct or conditional derivation, the "dd" or "cd"
justification may be written on a later line which contains no sentence at all, and which is followed by the
number of the line that satisfies the conditions for direct or conditional derivation. With indirect derivation,
the "id" justification may be written on a later line which contains no sentence at all, and which is followed
by the numbers of the two lines containing contradictory sentences. In both cases, the lines cited must be
available from the later line.
EXERCISES
1. For each of the following derivations, determine which lines are correct and which incorrect. (In
assessing a line, assume that previous lines are correct.)
Tip: When a box occurs in a correctly formed derivation, it is put there by the command ('dd', or 'cd' or 'id')
that appears at the end of the last line within the box. When a "show" is cancelled, it is cancelled by the
same command that puts the box immediately below the "show".
a. P → (Q→R)
~Q → S
∴ P → (~S→R)
1. Show P→(~S→R)
2. P ass cd
3. Show ~S → R
4. ~S ass cd
5. ~Q → S pr
6. ~~Q 4 5 mt
7. Q 6 dn
8. P → (Q→R) pr
9. Q→R 2 8 mp
10. R 7 9 mp cd
11. 3 cd
b. R→Q
Q→P
∴ R→P
1. Show R → P
2. R ass cd
3. R→Q pr
4. Q 2 3 mp
5. Show P
6. ~P ass id
7. Q→P pr
8. ~Q 6 7 mt 4 id error on this line!
9. 5 cd
c. P→Q
(R→Q) → S
(U→S) → ~P
∴ ~P
1. Show ~P
2. P ass id
3. Show R → Q
4. R ass cd
5. P→Q pr
6. Q 2 6 mp cd
7. Show U→ S
8. U ass cd
9. (R→Q) → S pr
10. S 3 9 mp cd
11. (U→S) → ~P pr
12. ~P 7 11 mp
13. P 2 r 12 id
9 SHORTCUTS
Writing long derivations can be tedious. Here are two shortcuts.
Citing premises directly: Instead of entering a premise on a line of its own with the justification "pr", you
may cite a premise directly when applying a rule, writing "pr1" for the first premise, "pr2" for the second,
and so on. This is equivalent to just assuming that the premises all come with line numbers: pr1, pr2, pr3, . . ., and
citing those line numbers when we use a rule of inference. (Our directions above for constructing
derivations already include this option.)
For example, here is a derivation we gave earlier:
Premises: P→(W→T)
~P→T
~T
Conclusion: ∴ ~W
1. Show ~W
2. ~T pr
3. ~P→T pr
4. ~~P 2 3 mt
5. P 4 dn
6. P→(W→T) pr
7. W→T 5 6 mp
8. ~T pr
9. ~W 7 8 mt dd
Using the shortcut, we can essentially skip lines 2, 3, 6, 8, to get this shortened derivation which has
analogues of lines 1, 4, 5, 7, 9 in the original derivation:
1. Show ~W
2. ~~P pr2 pr3 mt
3. P 2 dn
4. W→T 3 pr1 mp
5. ~W 4 pr3 mt dd
Mixed derivations: Our derivation rules are already formulated in a way that lets you use any one of dd,
cd, id, in cases in which you expect to use another of them. For example, suppose you are trying to show
'P→Q' by cd, assuming 'P'. But you derive 'P→Q' itself instead of 'Q'. Then you can use dd to box and cancel.
The fact that you have assumed 'P' for purposes of producing a conditional derivation does not interfere
with this use of dd. Given these lines:
1. Show □→○
2. □ ass cd
3. ……..
7. ……..
8. □→○
you can box and cancel:
1. Show □→○
2. □ ass cd
3. ……..
7. ……..
8. □→○ dd
This is a "mixed" derivation: an assumption is made for constructing a conditional derivation, and then 'dd'
is used instead of 'cd' to complete it. That's OK because this is just a shortcut. Whenever you are in the
position described above, you could instead add a step to the end of the derivation and then conclude the
derivation as a conditional derivation. Just add step 9:
1. Show □→○
2. □ ass cd
3. ……..
4. ……..
7. ……..
8. □→○
9. ○ 2 8 mp
Then box and cancel with cd:
7. ……..
8. □→○
9. ○ 2 8 mp cd
Similarly, if you are trying to do an indirect derivation and you end up deriving the sentence on the show
line, you may use dd. That is, if you have:
1. Show □
2. ~□ ass id
3. ……..
4. ……..
7. ……..
8. □
you may box and cancel with dd:
1. Show □
2. ~□ ass id
3. ……..
4. ……..
7. ……..
8. □ dd
Here the shortcut is obvious: you are already in a position to use id, since you already have the
contradictory sentences that you need. Instead of using dd, you can cite the line number of the other
contradictory sentence and use id:
1. Show □
2. ~□ ass id
3. ……..
4. ……..
7. ……..
8. □ 2 id
Similarly you can use cd when you are set up for a direct derivation of a conditional, or when you are trying
to derive a conditional using id; and you can use id when you have derived contradictories even if you are
set up for a direct or conditional derivation.
In allowing for mixed derivations, we are not actually changing anything. Our rules already allow for them.
So this is a summary of things we can already do with the rules as stated:
Mixed derivations
You may use dd, cd, and id to complete a derivation by boxing and canceling
whenever they apply, whether or not an assumption has been made, and
regardless of the type of assumption if any.
EXERCISES
1. Each of the following derivations is a mixed derivation. In each case produce another derivation which
is not mixed.
a. P→R
Q → ~R
~Q → Q
∴ P→Q
1. Show P → Q
2. P ass cd
3. P→R pr
4. R 2 3 mp
5. ~~R 4 dn
6. Q → ~R pr
7. ~Q 5 6 mt
8. ~Q → Q pr
9. Q 7 8 mp 7 id
b. Q→U
Q → ~U
R→Q
R
∴ P
1. Show P
2. R pr
3. R→Q pr
4. Q 2 3 mp
5. Q→U pr
6. U 4 5 mp
7. Q → ~U pr
8. ~U 4 7 mp 6 id
c. U → (V→W)
X→U
~X → W
∴ V→W
1. Show V → W
2. ~(V → W) ass id
3. U → (V→W) pr
4. ~U 2 3 mt
5. X→U pr
6. ~X 4 5 mt
7. ~X → W pr
8. W 6 7 mp cd
2. Do the derivations from 1a-c above using premise line numbers instead of rule pr.
3. Earlier we stipulated that a conditional sentence is false when its antecedent is true and its consequent
false, and true in all other cases. This was not arbitrary. Given the rules and derivation procedures that
we have adopted, these choices are forced on us. For, using our rules and procedures, we can produce
derivations to show each of the following arguments to be valid:
P           P            ~P          ~P
Q           ~Q           Q           ~Q
∴ P→Q       ∴ ~(P→Q)     ∴ P→Q       ∴ P→Q
The first of these tells us that when the antecedent and consequent of a conditional are both true, so is the
conditional. The second tells us that when the antecedent of a conditional is true and the consequent
false, the conditional is false. The third and fourth tell us that in either case in which the antecedent is
false, the conditional is true.
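In fact, the four validities settle the table uniquely. Here is a minimal Python sketch (purely illustrative; the encoding of candidate tables as four-tuples is ours) that tries all sixteen candidate truth tables for '→' and keeps only those that make all four arguments valid:
from itertools import product

rows = [(True, True), (True, False), (False, True), (False, False)]

forced = []
for outputs in product([True, False], repeat=4):
    table = dict(zip(rows, outputs))      # a candidate truth table for '->'
    ok = (table[(True, True)]             # P, ~~Q   therefore P->Q
          and not table[(True, False)]    # P, ~Q    therefore ~(P->Q)
          and table[(False, True)]        # ~P, Q    therefore P->Q
          and table[(False, False)])      # ~P, ~Q   therefore P->Q
    if ok:
        forced.append(outputs)

print(forced)   # [(True, False, True, True)] -- exactly the stipulated table
Only the stipulated table survives, which is the sense in which the choices are forced on us.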
Produce short derivations for each of those four arguments.
10 STRATEGY HINTS
1. Try to reason out the argument for yourself. If you can do that, then write down the steps you went
through in your own reasoning. These steps will often be an outline of a good derivation. This idea of
reasoning things out and then turning the steps into a derivation has been illustrated above when
introducing direct, conditional, and indirect derivations. This is by far the best approach to thinking up a
derivation.
2. Begin with a sketch of an outline of a derivation, and then fill in the details. For example, in
thinking up a derivation for this argument:
P → (Q→R)
Q
∴ P→R
you might begin by saying: "I'll do a conditional derivation: I'll assume P and then use this together with
the premises to derive R. Then I'll box and cancel by cd." This outline gives you this much of a derivation:
1. Show P→ R
2. P ass cd
:::::
:::::
:::::
13. R cd . . . . then box and cancel
All you need to do then is to fill in lines 3 to 12. (I'm just guessing that it will take exactly ten more steps
to finish the derivation. If it takes more or fewer, change the '13' to the appropriate number.)
3. Write down obvious consequences: Write down obvious consequences of premises or of sentences
that have already been derived. If you write down the simple consequences of things that you already
have, then you have lots of resources right in front of you. Of course, you can do this in your head instead
of on paper, and sometimes that is sufficient. But if things are not completely clear to you, writing down
obvious consequences can be useful.
For example, consider this argument:
P → (Q→R)
S→Q
Q→P
S
∴ R
Let's say that you have written down the show line:
1. Show R
but you are momentarily stuck. Write down some obvious consequences of the premises:
2. Q pr4 pr2 mp
3. P pr3 2 mp
4. Q→R pr1 3 mp
At this point, if you look over what is available, it will be obvious that you can get the desired conclusion in
one additional step:
5. R 2 4 mp
4. If you are trying to derive a conditional, use conditional derivation. This is almost always the
easiest way to derive a conditional. This has been amply illustrated above.
5. If you have a conditional, try to derive its antecedent (and then use modus ponens), or try to
derive the negation of its consequent (and then use modus tollens). Here is an illustration:
P → (Q → R)
S→Q
~P → ~S
S
∴ R
The only premise in which R occurs is the first one, which is a conditional. You could apply modus ponens
to this conditional if you could prove P. Our strategy rule suggests that you try to derive P. That is not
difficult to do:
1. Show R
2. ~~S pr4 dn
3. ~~P 2 pr3 mt
4. P 3 dn
This immediately gives us:
5. Q→R 4 pr1 mp
It is then easy to complete the derivation:
6. Q pr2 pr4 mp
7. R 5 6 mp
and you are ready to box and cancel.
6. Try indirect derivation. When you reach a place where none of the other strategies clearly apply,
assume the negation of what you are trying to derive and try to derive a contradiction. This too has been
amply illustrated above.
7. When doing an indirect derivation, try to derive the negation of a premise or the negation of
something that you already have derived. This is especially useful if you already have the negation of
a conditional, for you can try to derive the conditional by using conditional derivation, and then you will
have both the conditional and its negation, and you can box and cancel with id. An example. You have:
R
(Q → S) → ~R
∴ ~S
Now begin a derivation, and follow strategy rule 3: write out obvious consequences of what you have:
1. Show ~S
2. S ass id
3. ~~R pr1 dn
4. ~(Q→S) 3 pr2 mt
The moves so far are pretty straightforward. But it may not be clear what to do next. Strategy rule 7
suggests that you try to derive the conditional 'Q→S', in order to contradict '~(Q→S)'. You do this with a
conditional subderivation:
5. Show Q → S
6. Q ass cd
7. S 2 r cd (this step is obvious once you notice it)
Having derived the conditional on line 5 and its negation on line 4, the indirect derivation is just about
complete. The complete derivation is:
1. Show ~S
2. S ass id
3. ~~R pr1 dn
4. ~(Q→S) 3 pr2 mt
5. Show Q → S
6. Q ass cd
7. S 2 r cd
8. 4 5 id
EXERCISES
Use the strategy rules above to find derivations that validate the following arguments.
1. S
(R→S) → W
∴ W
2. P → (S→R)
P → (W→S)
W→P
∴ W→R
3. (P→Q) → S
S→T
~T → Q
∴ T
11 THEOREMS
A truth of logic (a sentence that is logically true) is a sentence that is true in any logically possible situation.
It must be true no matter what. Because of this, a truth of logic is like the conclusion of a valid argument
which has no premises. If such an argument is valid, it does not have all true premises and a false
conclusion in any logically possible situation. When there aren’t any premises, this is equivalent to saying
that it does not have a false conclusion in any logically possible situation. That is, its conclusion is true in
every logically possible situation. It is a truth of logic.
Since derivations show arguments valid, if a derivation is used to show an argument with no premises to
be valid, that amounts to showing that the conclusion is logically true. There is a special word, ‘theorem’,
for any sentence that is shown by a technique like a derivation when no premises at all are used. Such
sentences are the topic of this section.
It is customary to indicate a theorem by placing a "therefore" sign in front of it, as if it were an argument
with its premises missing. So writing "∴□" indicates that □ is a theorem.
In simple cases, theorems are obviously trivial statements. Even when they are complex, they are still
trivial in the sense that they say nothing beyond what is logically true. In this book we will list several
theorems, giving them the names "T1", "T2", and so on. Here are some, in increasing order of
complexity. Some of them have common names; these are indicated to the right.
T1 P→P
1. Show P→P
2. P ass cd
3. P 2 r cd
T2 Q → (P→Q)
1. Show Q → (P→Q)
2. Q ass cd
3. Show P→Q
4. P ass cd
5. Q 2 r cd
6. 3 cd
T3 P → ((P→Q) → Q)
1. Show P → ((P→Q) → Q)
2. P ass cd
3. Show (P→Q) → Q
4. P→Q ass cd
5. Q 2 4 mp cd
6. 3 cd
12 USING THEOREMS AS RULES
Theorems: Any instance of any previously derived theorem may be entered on any
line of a derivation. As justification, write the name of the theorem. (E.g. ‘T13’.)
An instance of a theorem is what you get by considering the theorem as a pattern, and filling in the pattern
uniformly with sentences. For example, T1 is ∴P→P. This gives us the pattern ‘□→□’. Anything obtained
by filling in that pattern, putting the same sentence in for each occurrence of □, can be written on any line of
a derivation. For example, you can write:
21. (S→W) → (S→W) T1
More complicated theorems are more useful. For example, T4 is:
(P→Q) → ((Q→R) → (P→R))
This gives us the pattern:
(□→○) → ((○→△) → (□→△))
Putting ~R in for □, (U→V) for ○, and W for △ we have:
(~R→(U→V)) → (((U→V)→W) → (~R→W))
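Producing an instance is a purely mechanical matter of uniform substitution. The following Python sketch is a rough illustration (the function name instance is ours, and it works only on the assumption that the sentences being substituted in contain no placeholders themselves):
def instance(pattern, substitution):
    """Uniformly replace each placeholder in the pattern with its assigned sentence.
    Assumes the substituted sentences contain no placeholders of their own."""
    for placeholder, sentence in substitution.items():
        pattern = pattern.replace(placeholder, sentence)
    return pattern

t4 = '(□→○) → ((○→△) → (□→△))'
print(instance(t4, {'□': '~R', '○': '(U→V)', '△': 'W'}))
# prints: (~R→(U→V)) → (((U→V)→W) → (~R→W))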
Here is an example of a use of this theorem. Suppose you wish to do a derivation for this argument:
R→~S
~S→~T
∴ R→~T
A very short derivation could be given using theorem 4:
1. Show R→~T
2. (R→~S) → ((~S→~T) → (R→~T)) T4
3. (~S→~T) → (R→~T) 2 pr1 mp
4. R→~T 3 pr2 mp dd
EXERCISES
1. Produce short derivations for these arguments using instances of the theorems listed above.
a. X → ~(Y → Z)
∴ (Y → Z) → ~X
b. R → (~P→S)
R → ~P
∴ R→S
c. ~(R → (S→T))
R→P
P → (Q → (S→T))
∴ ~Q
d. Q→R
R→S
∴ Q→S
Answers to the Exercises -- CHAPTER 1
SECTION 1
1. a Sentence, official notation
~~~P
  |
 ~~P
  |
  ~P
  |
   P
~Q→~R
   /\
 ~Q   ~R
  |    |
  Q    R
(P→Q) → (R→~Q)
       /\
 (P→Q)    (R→~Q)
   /\        /\
  P   Q     R   ~Q
                 |
                 Q
SECTION 2
SECTION 3
1. a If Veronica doesn’t leave William won’t either
If Veronica doesn’t leave William won’t leave
If Veronica doesn’t leave then William won’t leave
~V→~W
c If Yolanda doesn’t leave, then Veronica will leave only if William doesn’t
~Y → (V → ~W)
d If Yolanda doesn’t leave then Veronica will leave, given that William doesn’t
If Yolanda doesn’t leave then Veronica will leave, if William doesn’t
If William doesn’t [leave], then (if Yolanda doesn’t leave then Veronica will leave)
~W → (~Y→V)
h William will leave only if Veronica leaves, only provided that Yolanda will leave
(William will leave only if Veronica leaves) only if Yolanda will leave
(W → V) → Y
SECTION 4
1. a None of the above; it might look like a modus tollens inference, but the second premise, Q, would
have to first be changed to ~~Q by applying double negation; so while the argument is valid, it is
not a one-step application of modus tollens.
b None of the above
c Double negation
d Modus ponens
e Modus tollens
f None of the above
g Modus ponens; the consequent of the conditional does not need to be an atomic sentence; it can
be molecular as well.
h None of the above; it may look like a modus tollens inference, but the second premise is not
actually the negation of the consequent of the first premise.
i Double negation and none of the above are both good answers; the conclusion can be inferred by
double negation from the first premise, but since the second premise is not involved in that
inference, 'none of the above' is also a reasonable answer.
2. In all cases we can validly infer any sentence which results from putting '~~' in front of either
premise by the rule double negation. Such results are not enumerated below.
a ~X may be inferred by modus ponens.
b X may be inferred by double negation; ~~W may be inferred by modus tollens.
c Nothing additional
d (R→X) may be inferred by modus ponens.
e Nothing additional; Modus tollens cannot be applied because the second premise is not actually
the negation of the consequent of the first premise.
f (W→X) may be inferred from the first premise by double negation; this must be done before you
apply modus tollens with the second premise, so you can't apply modus tollens in one step to get
~W.
g Nothing additional; if you apply double negation to the second premise you can then apply modus
tollens as a second step to get ~W.
h Nothing additional
i (W→X) follows by double negation.
SECTION 5
1. Only errors are listed.
First derivation
Line 6 -- line 6 is not available at line 6; derivation can be corrected by writing "5 dn".
Second derivation
Line 3 -- when justifying writing a premise, no line citation is given
Line 4 -- the sentence on line 2 is not the negation of the consequent of the sentence on line 3; in this
case we would need to first apply dn to line 2 as an intermediate step. Then we could apply mt
with line 3, which would result in ~~P. P could then be inferred on the following line by dn.
Line 5 -- two lines must be cited with mp; the sentence inferred does follow from line 4 together with the
first premise, but the first premise must be cited somehow.
Line 7 -- "5 6 mt" would result in ~R rather than ~~R.
Line 9 -- "5 8 mp" is OK, but the derivation is not done; we set out to show ~R, but line 9 displays ~Q, so
we cannot conclude the derivation at this point and so it is incorrect to write "dd" to mark the
conclusion of the derivation of ~R.
Third derivation
Line 3 -- "~S" is not one of the premises.
Line 7 -- neither line 5 nor line 6 is a conditional, so mt cannot possibly apply to that pair of lines.
2. In each case the derivation displayed does not represent the only possible derivation; many alternate,
equally correct derivations can be given.
P
Q → ~P
R→Q
∴ ~R
1. Show ~R
2. P pr
3. ~~P 2 dn
4. Q → ~P pr
5. ~Q 3 4 mt
6. R→Q pr
7. ~R 5 6 mt
8. 7 dd
(In this derivation and those below, the "dd" could occur at the end of the previous line.)
W → ~(V→~Y)
X → (V→~Y)
V→Y
(V→Y) → X
∴ ~W
1. Show ~W
2. V→Y pr
3. (V→Y) → X pr
4. X 2 3 mp
5. X → (V→~Y) pr
6. V→~Y 4 5 mp
7. ~~(V→~Y) 6 dn
8. W → ~(V→~Y) pr
9. ~W 7 8 mt
10. 9 dd
(W→Z) → (Z→W)
(Z→W) → ~X
P→X
~~P
∴ ~(W→Z)
1. Show ~(W→Z)
2. ~~P pr
3. P 2 dn
4. P→X pr
5. X 3 4 mp
6. ~~X 5 dn
7. (Z→W) → ~X pr
8. ~(Z→W) 6 7 mt
9. (W→Z) → (Z→W) pr
10. ~(W→Z) 8 9 mt
11. 10 dd
SECTION 6
1. Only errors are listed.
Derivation a
All correct
Derivation b
Line 3 -- Line 1 is not available at line 3 because when line 3 is written, line 1 is still an un-cancelled show
line.
Line 7 -- No problem with the use of cd to box and cancel, but mt cannot be applied to lines 5 and 6
because line 6 does not contain the negation of the consequent of line 5; you would have to add a
line and apply double negation to line 6 first.
Derivation c
Line 2 -- You can only assume the antecedent of the conditional to be shown.
Line 3 -- While the sentence on line 3 does logically follow from line 2 and premise 1, you can't apply mp
to line 2 alone.
Line 9 -- The application of dn to line 8 is OK, but you can't end a conditional derivation on a line that does
not contain the consequent of the conditional you set out to show.
2. In each case the derivation displayed does not represent the only possible derivation; many alternate,
equally correct derivations can be given.
a. P → (Q → (R→S))
~Q → ~R
R
∴ P→S
1. Show P → S
2. P ass cd
3. P → (Q → (R→S)) pr
4. Q → (R→S) 2 3 mp
5. R pr
6. ~~R 5 dn
7. ~Q → ~R pr
8. ~~Q 6 7 mt
9. Q 8 dn
10. R→S 4 9 mp
11. S 5 10 mp
12. 11 cd
b. Q → ~(R→S)
P → (R→S)
~Q → R
∴ P→S
1. Show P → S
2. P ass cd
3. P → (R→S) pr
4. R→S 2 3 mp
5. ~~(R→S) 4 dn
6. Q → ~(R→S) pr
7. ~Q 5 6 mt
8. ~Q → R pr
9. R 7 8 mp
10. S 4 9 mp
11. 10 cd
c. U → (U→V)
~R → ~(U→V)
R → ~S
∴ U → ~S
1. Show U → ~S
2. U ass cd
3. U → (U→V) pr
4. U→V 2 3 mp
5. ~~(U→V) 4 dn
6. ~R → ~(U→V) pr
7. ~~R 5 6 mt
8. R 7 dn
9. R → ~S pr
10. ~S 8 9 mp
11. 10 cd
3. a P→T
~X → ~T
~S → ~X
∴ P→S
1. Show P → S
2. P ass cd
3. P→T pr
4. T 2 3 mp
5. ~~T 4 dn
6. ~X → ~T pr
7. ~~X 5 6 mt
8. ~S → ~X pr
9. ~~S 7 8 mt
10. S 9 dn
11. 10 cd
b T→S
Y → (S→P)
P → ~X
Y
∴ T → ~X
1. Show T → ~X
2. T ass cd
3. T→S pr
4. S 2 3 mp
5. Y → (S→P) pr
6. Y pr
7. S→P 5 6 mp
8. P 4 7 mp
9. P → ~X pr
10. ~X 8 9 mp cd
c S → ~T
~S → ~R
~R → X
Y→T
∴ ~X → ~Y
1. Show ~X → ~Y
2. ~X ass cd
3. ~R → X pr
4. ~~R 2 3 mt
5. ~S → ~R pr
6. ~~S 4 5 mt
7. S 6 dn
8. S → ~T pr
9. ~T 7 8 mp
10. Y→T pr
11. ~Y 9 10 mt cd
SECTION 7
1. Only errors are listed.
Derivation a
All correct
Derivation b
Line 3 -- The sentence on line 3 is not a premise.
Line 4 -- Line 2 is not the negation of the consequent of line 3 so mt doesn't apply. You would first have to
apply dn to line 2. Even in that case the result would be ~~S rather than ~S.
Line 5 -- "ass id" may only appear on the line immediately following a show line.
Line 7 -- The mt inference is OK, but 5 and 7 do not directly contradict so id is used incorrectly. The
derivation could be concluded on line 7 with "4 id" since 4 and 7 contradict directly.
Derivation c
Line 2 -- The sentence on the line is not the negation (or the un-negation, for that matter) of the show line.
Line 4 -- There is no way to apply mt with lines 2 and 3.
Line 6 -- 2 is not the negation of the consequent of 5 (though it should have been).
Line 7 -- There is no way that you can apply mt with lines 5 and 6; 2 and 7 don't contradict directly, so it is
premature to conclude with id.
2. In each case the derivation displayed does not represent the only possible derivation; many alternate,
equally correct derivations can be given.
a. ~Q → R
S → ~R
~S → Q
∴ Q
1. Show Q
2. ~Q ass id
3. ~Q → R pr
4. R 2 3 mp
5. ~~R 4 dn
6. S → ~R pr
7. ~S 5 6 mt
8. ~S → Q pr
9. Q 7 8 mp
10. 2 9 id
b. (P→Q) → R
S → (P→Q)
~S → R
∴ R
1. Show R
2. ~R ass id
3. (P→Q) → R pr
4. ~(P→Q) 2 3 mt
5. S → (P→Q) pr
6. ~S 4 5 mt
7. ~S → R pr
8. R 6 7 mp
9. 2 8 id
c. ~P → (R→S)
(R→S) → T
~T
Q → (R→S)
∴ ~(P → Q)
1. Show ~(P → Q)
2. P→Q ass id
3. ~T pr
4. (R→S) → T pr
5. ~(R→S) 3 4 mt
6. Q → (R→S) pr
7. ~Q 5 6 mt
8. ~P 2 7 mt
9. ~P → (R→S) pr
10. R→S 8 9 mp
11. 5 10 id
SECTION 8
1. Only errors are listed.
Derivation a
All correct
Derivation b
Line 8 -- At this point in the derivation, line 5 is still an un-cancelled show line so line 4 can't be cited to
conclude the sub-derivation; you could, however, use the rule r (repetition) to repeat line 4 within
the sub-derivation and then use the repeated line to conclude the sub-derivation with id.
Derivation c
Line 6 -- Line 6 is not available on line 6. The problem would be resolved if we cited 5 instead of 6.
Line 13 -- Strictly speaking there is no error here, but it was unnecessary to repeat line 2 in order to apply
id; we could have just cited "2 12 id" because line 2 is not separated from line 13 by any un-
cancelled show lines (only by cancelled ones).
2. In each case the derivation displayed does not represent the only possible derivation; many alternate,
equally correct derivations can be given.
a. P → (Q→R)
S→Q
∴ S → (P→R)
1. Show S → (P→R)
2. S ass cd
3. Show P→R
4. P ass cd
5. P → (Q→R) pr
6. Q→R 4 5 mp
7. S→Q pr
8. Q 2 7 mp
9. R 6 8 mp
10. 9 cd
11. 3 cd
b. (P→Q) → Q
P→R
Q → ~Q
∴ ~(R → Q)
1. Show ~(R → Q)
2. R→Q ass id
3. Show P → Q
4. P ass cd
5. P→R pr
6. R 4 5 mp
7. Q 2 6 mp cd
8. (P→Q) → Q pr
9. Q 3 8 mp
10. Q → ~Q pr
11. ~Q 9 10 mp 9 id
c. (U → V) → (W→X)
U→Z
~V → ~Z
X→Z
∴ W→Z
1. Show W → Z
2. W ass cd
3. Show U → V
4. U ass cd
5. U→Z pr
6. Z 4 5 mp
7. ~~Z 6 dn
8. ~V → ~Z pr
9. ~~V 7 8 mt
10. V 9 dn cd
11. (U → V) → (W→X) pr
12. W→X 3 11 mp
13. X 2 12 mp
14. X→Z pr
15. Z 13 14 mp cd
SECTION 9
1. a P→R
Q → ~R
~Q → Q
∴ P→Q
1. Show P → Q
2. P ass cd
3. P→R pr
4. R 2 3 mp
5. ~~R 4 dn
6. Q → ~R pr
7. ~Q 5 6 mt
8. ~Q → Q pr
9. Q 7 8 mp cd
The only change was to conclude with cd instead of id.
b Q→U
Q → ~U
R→Q
R
∴ P
1. Show P
2. ~P ass id
3. R pr
4. R→Q pr
5. Q 3 4 mp
6. Q→U pr
7. U 5 6 mp
8. Q → ~U pr
9. ~U 5 8 mp 7 id
c U → (V→W)
X→U
~X → W
∴ V→W
1. Show V → W
2. ~(V→W) ass id
3. U → (V→W) pr
4. ~U 2 3 mt
5. X→U pr
6. ~X 4 5 mt
7. ~X → W pr
8. W 6 7 mp
9. Show V → W
10. V ass cd
11. W 8 r cd
12. 2 9 id
Only added lines 9-12.
2. a P→R
Q → ~R
~Q → Q
∴ P→Q
1. Show P → Q
2. P ass cd
3. R 2 pr1 mp
4. ~~R 3 dn
5. ~Q 4 pr2 mt
6. Q 5 pr3 mp cd
b Q→U
Q → ~U
R→Q
R
∴ P
1. Show P
2. ~P ass id
3. Q pr4 pr3 mp
4. U 3 pr1 mp
5. ~U 3 pr2 mp 4 id
c U → (V→W)
X→U
~X → W
∴ V→W
1. Show V → W
2. ~(V→W) ass id
3. ~U 2 pr1 mt
4. ~X 3 pr2 mt
5. W 4 pr3 mp cd
3
P
Q
∴ P→Q
1. Show P → Q
2. Q pr2 cd
P
~Q
∴ ~(P→Q)
1. Show ~(P→Q)
2. P→Q ass id
3. Q pr1 2 mp
4. ~Q pr2 3 id
~P
Q
∴ P→Q
1. Show P → Q
2. Q pr2 cd
~P
~Q
∴ P→Q
1. Show P → Q
2. P ass cd
3. ~P pr1 2 id
SECTION 10
1. S
(R→S) → W
∴ W
1. Show W
2. Show R → S
3. S pr1 cd
4. W 2 pr2 mp dd
2. P → (S→R)
P → (W→S)
W→P
∴ W→R
1. Show W → R
2. W ass cd
3. P 2 pr3 mp
4. S→R 3 pr1 mp
5. W→S 3 pr2 mp
6. S 2 5 mp
7. R 4 6 mp cd
3. (P→Q) → S
S→T
~T → Q
∴ T
1. Show T
2. ~T ass id
3. Q 2 pr3 mp
4. Show P→Q
5. Q 3 r cd
6. S 4 pr1 mp
7. T 6 pr2 mp 2 id
SECTION 12
1. a X → ~(Y → Z)
∴ (Y → Z) → ~X
1. Show (Y → Z) → ~X
2. (X → ~(Y → Z)) → ((Y → Z) → ~X) T14
3. (Y → Z) → ~X 2 pr1 mp dd
b R → (~P→S)
R → ~P
∴ R→S
1. Show R → S
2. (R → (~P→S)) → ((R→~P) → (R→S)) T6
3. (R→~P) → (R→S) 2 pr1 mp
4. R→S 3 pr2 mp dd
c ~(R → (S→T))
R→P
P → (Q → (S→T))
∴ ~Q
1. Show ~Q
2. Q ass id
3. ~(R → (S→T)) → R T21
4. R 3 pr1 mp
5. P 4 pr2 mp
6. Q → (S→T) 5 pr3 mp
7. S→T 2 6 mp
8. ~(R → (S→T)) → ~(S→T) T22
9. ~(S→T) 8 pr1 mp 7 id
d Q→R
R→S
∴ Q→S
1. Show Q → S
2. (Q→R) → ((R→S) → (Q→S)) T4
3. (R→S) → (Q→S) 2 pr1 mp
4. Q→S 3 pr2 mp dd
Chapter Two
Sentential Logic with 'and', 'or', if-and-only-if'
1 SYMBOLIC NOTATION
In this chapter we expand our formal notation by adding three two-place connectives, corresponding
roughly to the English words 'and', 'or' and 'if and only if':
∧ and
∨ or
↔ if and only if
Conjunction: The first of these, '∧', is the conjunction sign; it has the same logical import as 'and'. It
goes between two sentences to form a complex sentence which is true if both of the parts (called
'conjuncts') are true, and is otherwise false:
□ ○ (□∧○)
T T T
T F F
F T F
F F F
Disjunction: The disjunction sign, '∨', makes a sentence that is true in every case except when its parts
(its disjuncts) are both false. This corresponds to one use (the "inclusive" use) of 'or' in English:
□ ○ (□ ∨ ○)
T T T
T F T
F T T
F F F
Biconditional: The biconditional sign, '↔', states that both of the parts making it up (its constituents)
are the same in truth value. It works like this:
□ ○ (□ ↔ ○)
T T T
T F F
F T F
F F T
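Since each of these connectives is completely determined by its truth table, the tables above can be generated mechanically. Here is a minimal Python sketch (the names are ours, purely illustrative):
from itertools import product

connectives = [
    ('∧', lambda p, q: p and q),   # conjunction
    ('∨', lambda p, q: p or q),    # disjunction (inclusive)
    ('↔', lambda p, q: p == q),    # biconditional: same truth value
]

for sign, f in connectives:
    print('□ ○ (□' + sign + '○)')
    for p, q in product([True, False], repeat=2):
        row = ['T' if v else 'F' for v in (p, q, f(p, q))]
        print(' '.join(row))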
Each of these new connectives behaves syntactically just like the conditional sign, '→': you make a bigger
sentence out of two sentences plus a pair of parentheses:
(□∧○)
(□ ∨ ○)
(□ ↔ ○)
Our expanded definition of a sentence in official notation is now:
Sentences in official notation: Any sentence letter is a sentence in official notation; and if □ and ○
are sentences in official notation, so are ~□, (□→○), (□∧○), (□∨○), and (□↔○).
As before, we allow ourselves informally to omit the outer parentheses when the sentence occurs alone
on a line. It is also customary (and convenient) to omit parentheses around conjunctions or disjunctions
when they are combined with a conditional or biconditional sign. The sentence:
P∧Q → R
is to be considered to be an informally worded conditional whose antecedent is a conjunction:
(P∧Q) → R
If we want to make a conjunction whose second conjunct is a conditional, we must use parentheses
around the parts of the conditional:
P ∧ (Q→R)
Likewise, this sentence:
P ↔ Q∨R
is an informally written biconditional whose second constituent is a disjunction:
P ↔ (Q∨R)
If we wish to write a disjunction whose first disjunct is a biconditional, we need to use parentheses around
the biconditional:
(P↔Q) ∨ R.
Finally, we may use two or more conjunction signs or disjunction signs (but not a mix of conjunctions with
disjunctions) as abbreviations for what you get by restoring the parentheses by grouping the left parts
together, so that: 'P ∧ Q ∧ R' is an abbreviation for '(P∧Q) ∧ R'.
Informal Conventions
Outermost parentheses may be omitted.
Conjunction signs or disjunction signs may be used with conditional signs or
biconditional signs with the understanding that this is short for a conditional or
biconditional which has a conjunction or disjunction as a part. For example:
P∨Q → R is informal notation for (P∨Q) → R
P ↔ Q∧R is informal notation for P ↔ (Q∧R)
Repeated conjuncts or disjuncts without parentheses are short for the result of putting
parentheses around the part to the left of the last conjunction or disjunction sign. For
example:
P∨Q∨R is informal notation for (P∨Q) ∨ R
P∧Q∧R is informal notation for (P∧Q) ∧ R
Sentences with the new connectives may be parsed as we did in the previous chapter:
P∧Q → R                     P ↔ Q∨R
    /\                         /\
 P∧Q    R                    P    Q∨R
  /\                               /\
 P    Q                           Q    R
Determining Truth Values Using such parsings, there is a mechanical way to determine whether any
given sentence is true or false if you know the truth values of the sentence letters making it up. First,
make a parse tree as above by taking the sentences on any given line and writing their immediate parts
below them. A parse tree for '(P∧Q) → (P∨R)' is:
(P ∧ Q) → (P ∨ R)
         /\
 (P ∧ Q)    (P ∨ R)
    /\         /\
   P    Q     P    R
Then write the truth values of the sentence letters below them. For example, if P and Q are both true but
R false, you would have:
(P ∧ Q) → (P ∨ R)
         /\
 (P ∧ Q)    (P ∨ R)
    /\         /\
   P    Q     P    R
   T    T     T    F
Then go up the parse tree, placing a truth value under the major connective of each sentence based on
the truth values of its parts given below. For example, the truth value under '(P ∧ Q)' would be 'T' because
it is a conjunction, and both of its parts are T:
(P ∧ Q) → (P ∨ R)
         /\
 (P ∧ Q)    (P ∨ R)
    T
    /\         /\
   P    Q     P    R
   T    T     T    F
Filling in the remaining parts gives you a truth value for the whole sentence at the top:
( (P ∧ Q) → (P ∨ R) )
          T
         /\
 (P ∧ Q)    (P ∨ R)
    T          T
    /\         /\
   P    Q     P    R
   T    T     T    F
Sometimes not all of the parse tree needs to be filled out; this happens when partial information below a
sentence is sufficient to decide its truth value. In the example just given it is not necessary to figure out
the truth value of '(P ∧ Q)', since the conditional on the top line is determined to be true based on the
information that '(P ∨ R)' is true. So the following parse tree is sufficient to show that the main sentence is
true if the sentence letters have the indicated truth values:
(P ∧ Q) → (P ∨ R)
         T
         /\
 (P ∧ Q)    (P ∨ R)
               T
    /\         /\
   P    Q     P    R
              T    F
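This mechanical procedure is naturally recursive: the truth value of a sentence is computed from the truth values of its immediate parts, exactly as in the parse tree. Here is a minimal Python sketch (the nested-tuple representation of parse trees is our own illustrative device):
def value(tree, assignment):
    """Compute a sentence's truth value from its parse tree.
    A sentence letter is a string; a complex sentence is a tuple
    whose first member is its major connective."""
    if isinstance(tree, str):
        return assignment[tree]
    if tree[0] == '~':
        return not value(tree[1], assignment)
    op, left, right = tree
    l = value(left, assignment)
    r = value(right, assignment)
    if op == '→':
        return (not l) or r
    if op == '∧':
        return l and r
    if op == '∨':
        return l or r
    if op == '↔':
        return l == r

# The parse tree of '(P∧Q) → (P∨R)', evaluated with P, Q true and R false:
tree = ('→', ('∧', 'P', 'Q'), ('∨', 'P', 'R'))
print(value(tree, {'P': True, 'Q': True, 'R': False}))   # True
This sketch always evaluates both immediate parts; the shortcut described above, where partial information is enough to settle a truth value, corresponds to stopping the recursion early.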
EXERCISES
1. For each of the following state whether it is a sentence in official notation, or a sentence in informal
notation, or not a sentence at all. If it is a sentence, parse it as indicated above.
a. P↔Q→R
b. ~Q↔~R
c. ~(Q↔R)
d. P∧Q∨R
e. (P→Q) ∨ (R→~Q)
f. P ↔ (Q∧R) → Q
g. P∧Q → (Q→R∨Q)
h. P ↔ (P↔Q∧R)
i. P ∨ (Q→P)
2. If 'P' and 'Q' are both true and 'R' is false, what are the truth values of the official or informal sentences
in 1? (Use the parses that you give in 1 to guide the determination of truth values.)
2 ENGLISH EQUIVALENTS OF THE CONNECTIVES
Conjunctions: The conjunction sign '∧' corresponds to the English 'and' used between whole sentences.
In certain cases, use of a relative pronoun is logically equivalent to a use of '∧': the sentence 'Maria, who
was late, greeted the vice-consul' is equivalent to 'Maria was late ∧ Maria greeted the vice-consul'.
Disjunctions: The English word 'or' can be taken in two ways: inclusively or exclusively. If you are asked
to contribute food or money, you will probably take this as saying that you may contribute either or both;
the invitation is inclusive. But if a menu says that you may have soup or salad the normal interpretation is
that you may have either, but not both; the offer is exclusive. The difference in logical import appears in
the first row here:
□ ○ (□ inclusive-or ○) (□ exclusive-or ○)
T T T F
T F T T
F T T T
F F F F
Since the English 'or' can be read either inclusively or exclusively, we need a convention for how to
interpret it when it is used in exercises. Our convention will be that 'or' is always meant inclusively when it
is used in problems and examples in this text. That is, it coincides in logical import with our disjunction
sign '∨'.
A common synonym of 'or' is 'unless'. The sentence 'Wilma will leave unless there is food' is false if there
is no food but Wilma doesn’t leave; otherwise it is true, just like 'or' when read inclusively.
Disjunctions: □ ∨ ○
□ or ○
either □ or ○
□ unless ○
Biconditionals: We will see below that a biconditional sign is equivalent to two conditionals made from
its constituents. The sentence '(□ ↔ ○)' is equivalent to:
(□ → ○) ∧ (○ → □)
This can be read in English as '○ if □, and ○ only if □'; thus it is often pronounced 'if and only if'. The
English phrases 'just in case' and 'exactly in case' are sometimes used to state the equivalence of two
claims; the biconditional can be used to symbolize them:
The game will be called off just in case it rains: Q↔R
The game will be played exactly in case it is sunny: P↔S
Biconditionals: □ ↔ ○
□ if and only if ○
□ exactly on condition that ○
□ just in case ○
EXERCISES
1. For each of the following sentences say which symbolic sentence it is equivalent to.
a. It will rain, but the game will be played anyway.
R∧P
R→P
R↔P
b. Willa drove or got a ride
W∨R
W↔R
c. Robert, who didn't get a ride, was tardy
~R → T
~R ∧ T
d. It rained; the sell-a-thon was called off
R↔S
R∧S
e. The quilting bee will be called off just in case it rains
Q∧R
Q↔R
Q→R
R→Q
2. What are the truth values of the sentences in 1 when all of the simple sentences are false?
3 COMPLEX SENTENCES
Complex sentences of English generally translate into complex sentences of the logical notation. As
usual, it is important to be clear about the grouping of clauses in the English sentence.
The following sentence is a simple conjunction:
Polk and Quincy were presidents P∧Q
This is a complex sentence, with at least two different but equivalent symbolizations.
Neither Polk nor Quincy was president.
One symbolization is the negation of 'Either Polk or Quincy was president'; in this symbolization 'neither'
means 'not either': ~(P ∨ Q). An equivalent symbolization is a conjunction of negations; 'neither P nor Q'
is equivalent to 'not P and not Q': ~P ∧ ~Q.
As in chapter 1, these principles do not eliminate all ambiguity. The sentence 'Wilma will leave and Steve
will stay or Tom will dance' is ambiguous between these two symbolizations:
W ∧ (S∨T)
(W∧S) ∨ T
The use of 'either' will sometimes disambiguate; the only symbolization of 'Wilma will leave and either
Steve will stay or Tom will dance' is:
W ∧ (S∨T)
This is because 'either' and 'or' exactly enclose 'Steve will stay', and so 'S' must be a disjunct. But it is not
a disjunct in '(W∧S) ∨ T'.
Commas play their usual role of grouping items on each side. The sentence 'Wilma will leave and Steve
will stay, or Tom will dance' has only the symbolization:
(W∧S) ∨ T
Conjunction and disjunction signs inside of sentences: Sometimes 'and' and 'or' occur within
sentences, as in:
Wilma sang and danced
Tom or Sam left
In such cases you need to fill in a missing part to get a sentence that we already know how to symbolize.
Sometimes 'and' or 'or' occurs inside a simple sentence, where only the subject is
conjoined or disjoined, and there is a single predicate, or only the predicate is conjoined
or disjoined, and there is a single subject. If you fill in a copy of the shared part, you will
get a synonymous sentence that we already know how to symbolize.
There may also be a 'not' after the compound subject, or before a compound predicate. If the negation is
after a compound subject, it forms part of the predicate, and it is filled in with that predicate:
Wilma or Veronica didn't sing Wilma [didn't sing] or Veronica didn't sing.
If the negation is before a compound predicate, it yields a negation sign that applies to the whole
compound:
Wilma didn't sing or dance ~ (Wilma sang or danced)
Wilma didn't sing and dance ~ (Wilma sang and danced)
Compounds within simple sentences affect how sentences are grouped after symbolization:
When connectives occur inside otherwise simple sentences, the symbolizations of the
sentences form a unit.
For example, the sentence 'Ruth tap-dances or sings and she plays the clarinet' must be grouped like this:
(T ∨ S) ∧ P
This is because the disjunction with 'T' and 'S' must be a unit. In 'Ruth tap-dances or she sings and plays
the clarinet' the opposite happens; you must have:
T ∨ (S ∧ P)
because the conjunction with 'S' and 'P' must form a unit.
Synonyms of 'and', 'or', and 'if and only if' are subject to the conditions described above.
Robert will attend if Sally does, but she won't attend if neither Tom nor Wilma attend.
Robert will attend if Sally does [attend], but she won't attend if neither Tom [attends] nor Wilma
attends.
R if S, but not S if neither T nor W
(S→R) ∧ (~(T∨W) → ~S)
Neither Sally nor Robert will run, but if either Tom or Quincy run, Veronica will win.
Neither S nor R, but if either T or Q, V
~(S∨R) ∧ (T∨Q → V).
Given that Sally and Robert won't both run, Tom will run exactly if Q does.
Given that not both S and R, T exactly if Q.
~(S∧R) → (T↔Q)
A variety of English expressions that we have not mentioned affect how a sentence is to be symbolized.
Examples:
Quincy will whistle if Reggie sings without Susan singing or Susan sings without Reggie, but he
won't whistle if they both sing
((R∧~S) ∨ (S∧~R) → Q) ∧ (R∧S → ~Q)
Here 'without' means "and not".
If Sally runs, Rob will run, in which case Theodore will leave
(S→R) ∧ (R→T)
Here 'in which case' means "if Rob runs".
If a symbolization of a sentence is a correct one, then it and the English sentence being symbolized must
agree in truth value no matter what truth values the simple sentences have. If they agree for every
assignment of truth values, then the symbolization is correct. If not, it is incorrect. (To tell whether an
English sentence is true or false given a specification of truth values for its simple parts you must rely on
your understanding of English. To tell whether a symbolic sentence is true or false given the truth values
of its sentential letters, you parse it and figure out its truth value as in section 1.)
EXERCISES
1. If 'P' is true and both 'Q' and 'R' are false, what are the truth values of the following? (In answering, give
a parse tree for the sentence.)
a. ~(P ∨ (Q ∧ R))
b. ~P ∨ (Q ∧ R)
c. ~(P ∨ R) ↔ ~P ∨ R
d. ~Q ∧ (P ∨ (Q↔R))
e. P → (~Q ↔ (~R → Q))
For questions 2 and 3, use this translation scheme:
V Veronica will leave
W William will leave
Y Yolanda will leave
2. For each of the following say which of the proposed translations is correct.
a. Veronica won't leave if and only if William won’t leave
~(V ↔ ~W)
~V ↔ ~W
V ↔ ~~W
b. William and Veronica will both leave if Yolanda does, provided that Veronica doesn’t
Y∧~V → W∧V
(Y→W∧V) → ~V
~V → (Y→W∧V)
c. Unless Yolanda leaves, Veronica or William will leave
Y ∨ (W ∨ V)
Y → (W ∨ V)
Y↔W∧V
d. Either Yolanda leaves and Veronica doesn't, or Veronica leaves and William doesn’t
(Y ↔ ~V) ∨ (V ↔ ~W)
(Y ∧ ~V) ∨ (V ∧ ~W)
Y ∧ ~V ↔ V ∧ ~W
4. What are the truth values of 3a-d if Veronica leaves but neither William nor Yolanda leaves?
For question 5 use this translation scheme:
R Sally will run
W Sally will win
Q Sally will quit
5. For each of the following produce a correct symbolization
a. Sally will run and win unless she quits
b. Sally will win exactly in case she runs without quitting
c. Sally, who will run, will win if she doesn't quit
d. Sally will run and quit, but she will win anyway
4 RULES
Each new connective comes with two new rules. As earlier, it should be obvious from the truth-table
descriptions of each connective that instances of these rules are formally valid arguments.
Conjunction rules:
Rule s (simplification)                  Rule adj (adjunction)
  □∧○            □∧○                       □
∴ □       or   ∴ ○                         ○
                                         ∴ □∧○
Disjunction rules:
Rule add (addition)                      Rule mtp (modus tollendo ponens)
  □              □                         □∨○            □∨○
∴ □∨○     or   ∴ ○∨□                       ~○       or    ~□
                                         ∴ □            ∴ ○
Biconditional rules:
Rule bc (biconditional-to-conditional)   Rule cb (conditionals-to-biconditional)
  □↔○            □↔○                       □→○
∴ □→○     or   ∴ ○→□                       ○→□
                                         ∴ □↔○
Simplification indicates that if you have a conjunction, you may infer either conjunct. For example, both
of these valid arguments are instances of rule s:
Polk was a president and so was Whitney P∧W
∴ Polk was a president ∴ P by rule s
∴ Whitney was a president ∴ W by rule s
Adjunction indicates that if you have any two sentences, you may infer their conjunction, in either order.
For example, these valid arguments are instances of rule adj:
Polk was a president P
Whitney was a president W
∴ Polk was a president and so was Whitney ∴ P∧W by rule adj
∴Whitney was a president and so was Polk ∴ W∧P by rule adj
This derivation illustrates how the conjunction rules are used:
P∧Q
∴ Q∧P
1. Show Q ∧ P
2. Q pr1 s
3. P pr1 s
4. Q∧P 2 3 adj dd
Addition indicates that from any sentence you may infer its disjunction with any other sentence.
Polk was a president P
∴ Polk was a president or Whitney was ∴ P∨W by rule add
∴ Whitney was a president or Polk was ∴ W∨P by rule add
Rule add lets you add any disjunct, no matter how irrelevant. So from 'Cynthia left' you may infer 'Cynthia
left ∨ Fido barked'. This is legitimate because '∨' is used inclusively, and all that you need for a disjunction
to be true is that either disjunct be true. So if 'Cynthia left' is true, 'Cynthia left ∨ Fido barked' must be true
too.
Modus tollendo ponens indicates that from a disjunction and the negation of one of its disjuncts you may
infer the other disjunct.
Polk was a president or Whitney was P∨W
Whitney wasn't a president ~W
∴ Polk was a president ∴ P by rule mtp
Polk was a president or Whitney was P∨W
Polk wasn't a president ~P
∴ Whitney was a president ∴ W by rule mtp
Note that the following is not an instance of modus tollendo ponens:
Whitney was a president or Truman was W∨T
Truman was a president T
∴ Whitney wasn't a president ∴ ~W
For mtp you need the negation of a disjunct. In the case given, if 'T' and 'W' were both true, then the
argument would have true premises and a false conclusion.
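In fact we can check that case directly. Here is a quick check in Python (our illustration, not part of the text's system): with 'T' and 'W' both true, the premises come out true and the conclusion false.

W, T = True, True                    # the case where 'T' and 'W' are both true
premises = [W or T, T]               # 'W ∨ T' and 'T'
conclusion = not W                   # '~W'
print(all(premises), conclusion)     # prints: True False -- true premises, false conclusion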
Here is a derivation illustrating the disjunction rules. It is a derivation for this argument:
P
R ∨ ~P
∴ R∨S
1. Show R ∨ S
2. ~~P pr1 dn
3. R 2 pr2 mtp
4. R∨S 3 add dd
Biconditional-to-conditional indicates that from a biconditional you may infer either of the corresponding
conditionals:
Polk was a president if and only if Whitney was P↔W
∴ If Polk was a president, so was Whitney ∴ P→W by rule bc
∴ If Whitney was a president, so was Polk ∴ W→P by rule bc
Conditionals-to-biconditional indicates that from two conditionals where the antecedent of one is the
consequent of the other, and vice versa, you may infer a biconditional containing the parts of the
conditionals:
If Polk was a president, so was Whitney P→W
If Whitney was a president, so was Polk W→P
∴ Polk was a president if and only if Whitney was ∴ P↔W by rule cb
1. Show Q
2. P pr1 s
3. P∨Q 2 add
4. ~R 3 pr2 mp
5. Q 4 pr3 mtp dd
R ↔ ~P
~Q ↔ R
∴ P↔Q
1. Show P ↔ Q
2. Show P → Q
3. P ass cd
4. ~~P 3 dn
5. R → ~P pr1 bc
6. ~R 4 5 mt
7. ~Q → R pr2 bc
8. ~~Q 6 7 mt
9. Q 8 dn cd
10. Show Q → P
11. Q ass cd
12. ~~Q 11 dn
13. R → ~Q pr2 bc
14. ~R 12 13 mt
15. ~P → R pr1 bc
16. ~~P 14 15 mt
17. P 16 dn cd
18. P↔Q 2 10 cb dd
EXERCISES
1. For each of the following arguments, say which rule it is an instance of (or say "none").
a. P ∨ ~Q
Q
∴ P

b. ~P ∧ Q
∴ ~P

c. ~~(P→Q)
∴ P→Q

d. ~P∨Q
~Q
∴ ~P

e. ~P → ~Q
~Q → ~P
∴ ~Q ↔ ~P

f. P∨Q
~R
∴ P

g. ~~P ↔ R
∴ R → ~~P

h. Q
∴ ~P ∨ Q

i. P∨Q
∴ Q
2. Given the sentences below, say what can be inferred in one step by s, mtp, bc, cb using all of the
premises.
a. ~W → ~X
~X → ~W
∴ ?

b. ~W ∨ ~X
~~X
∴ ?

c. W→X
~W
∴ ?

d. ~W ∧ ~X
∴ ?

e. W ↔ ~X
∴ ?

f. W∨X
∴ ?
5 SOME DERIVATIONS USING RULES S, ADJ, CB

If you want to derive a conjunction '□ ∧ ○', derive each conjunct and combine them using rule adj. If you
want to derive a biconditional '□ ↔ ○', derive the two conditionals '□ → ○' and '○ → □' and combine
them using rule cb. And whenever you have a conjunction available, rule s will give you its conjuncts.
These strategy hints will be put to use below, as we extend our list of theorems from Chapter 1.
Theorem 24 is the commutative law for conjunction; it says that turning the conjuncts of a sentence
around produces a logically equivalent sentence:
T24 P∧Q ↔ Q∧P "commutative law for conjunction"
This is easy to derive if you follow the last two strategy hints. You will be deriving a biconditional, so you
will try to derive both conditionals: P∧Q → Q∧P and Q∧P → P∧Q and then combine them using rule cb.
While deriving each conditional you will derive a conjunction by deriving its conjuncts and then using rule
adj. Rule s is used whenever you want to get one of the conjuncts of an existing conjunction alone.
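Here, for illustration, is one way the derivation of T24 might go, following those hints (a sketch of ours;
the line numbering is not taken from the text):

1. Show P∧Q ↔ Q∧P
2. Show P∧Q → Q∧P
3. P∧Q ass cd
4. Q 3 s
5. P 3 s
6. Q∧P 4 5 adj cd
7. Show Q∧P → P∧Q
8. Q∧P ass cd
9. Q 8 s
10. P 8 s
11. P∧Q 10 9 adj cd
12. P∧Q ↔ Q∧P 2 7 cb dd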
The next theorem is the associative law for conjunction; it says that regrouping successive conjuncts
produces a logically equivalent sentence:

T25 (P∧Q)∧R ↔ P∧(Q∧R) "associative law for conjunction"

The strategy here is the same as that above: to derive the biconditional you derive the corresponding
conditionals, and use rule cb. In deriving the conditionals you derive conjunctions using rule adj. Again,
rule s is used to simplify conjunctions that you already have.
Notice that T26 and T4 from the previous chapter are both called "hypothetical syllogism":
T26 (P→Q) ∧ (Q→R) → (P→R) "hypothetical syllogism"
T4 (P→Q) → ((Q→R) → (P→R)) "hypothetical syllogism"
These two theorems are closely related. They are related to one another as the following two patterns:
□∧○→△
□ → (○ → △)
where each theorem has 'P→Q' in place of □, 'Q→R' in place of ○, and 'P→R' in place of △. Our next
theorem says that these two patterns are equivalent.
The derivation of this theorem is also relatively straightforward: derive two conditionals and put them
together by rule cb. Each conditional itself has conditionals as parts, so the derivation calls for two
conditional subderivations (one of which itself contains another conditional subderivation).
This derivation is complex. It may be useful to see how we might think up how to construct it. First, our
main strategy is to derive a biconditional by deriving two conditionals. So our plan predicts that the
derivation will have this overall structure:
1. Show (P∧Q → R) ↔ (P→ (Q→R))
2. Show (P∧Q → R) → (P→ (Q→R))
   ...
12. Show (P→ (Q→R)) → (P∧Q → R)
   ...
21. (P∧Q → R) ↔ (P→ (Q→R))        2 12 cb dd
Line 2 will require a conditional derivation, and so will line 12. So the completed derivation will take this
form:
1. Show (P∧Q → R) ↔ (P→ (Q→R))
2. Show (P∧Q → R) → (P→ (Q→R))
3.  P∧Q → R                       ass cd
    ...
    P → (Q→R)                     xxx cd
12. Show (P→ (Q→R)) → (P∧Q → R)
13. P → (Q→R)                     ass cd
    ...
    P∧Q → R                       xxx cd
21. (P∧Q → R) ↔ (P→ (Q→R))        2 12 cb dd
Lines 3-11 and 14-20 are taken up with completing the subderivations. Each of these itself uses a
conditional subderivation, giving the following structure:
1. Show (P∧Q → R) ↔ (P→ (Q→R))
2. Show (P∧Q → R) → (P→ (Q→R))
3.  P∧Q → R                       ass cd
4.  Show P → (Q→R)
    ...
10.                               xxx cd
11.                               4 cd
12. Show (P→ (Q→R)) → (P∧Q → R)
13. P → (Q→R)                     ass cd
14. Show P∧Q → R
    ...
19.                               xxx cd
20.                               14 cd
21. (P∧Q → R) ↔ (P→ (Q→R))        2 12 cb dd
The rest of the work is filling in the remaining subderivations. It is often useful to develop a derivation as
we did here, by first sketching its overall structure and then fleshing it out with details afterwards.
EXERCISES
1. Produce derivations for theorems T28-T30, T33, T36-37, which are included among the theorems
stated here:
T28 (P∧Q → R) ↔ (P∧~R → ~Q)
T29 (P→ Q∧R) ↔ (P→Q) ∧(P→R) "distribution of → over ∧"
T30 (P→Q) → (R∧P → R∧Q)
T31 (P→Q) → (P∧R → Q∧R)
T32 (P→R) ∧(Q→S) → (P∧Q → R∧S) "Leibniz's praeclarum theorema"
T33 (P→Q) ∧(~P→Q) → Q "separation of cases; constructive dilemma"
T34 (P→Q) ∧(P→~Q) → ~P "reductio ad absurdum"
T35 (~P→R) ∧(Q→R) ↔ ((P→Q) → R)
T36 ~(P ∧ ~P) "non-contradiction"
T37 (P→Q) ↔ ~(P ∧ ~Q)
6 ABBREVIATING DERIVATIONS
It is useful in writing derivations to be able to combine two or more steps into one. For example, here is a
derivation in which double negation is used twice:
P
~Q → ~P
Q→R
∴ R
1. Show R
2. ~~P pr1 dn
3. ~~Q 2 pr2 mt
4. Q 3 dn
5. R 4 pr3 mp dd
One can shorten this derivation by two steps by combining the double negations with other rules, like this:
1. Show R
2. ~~Q pr1 dn pr2 mt
3. R 2 dn pr3 mp dd
Here is a second example, a derivation for this argument:

P∧Q
R → ~Q
S∨~R → T
∴ T∧P

1. Show T ∧ P
2. ~R          pr1 s dn pr2 mt
3. T           2 add pr3 mp
4. T∧P         pr1 s 3 adj dd      simplify pr1 to get P and then adjoin this with the
                                   sentence on line 3 to get T∧P.
Abbreviations of this sort may always be interpreted by the following "decoding procedure", starting at the
left and moving right:
A line number or premise number gives you a sentence -- the sentence on that line.
Rule r also gives you a sentence -- the sentence on the line cited.
A sentence followed by 'dn', 's', 'add' or 'bc' gives you the result of applying that rule to that
sentence. (The old sentence is no longer available for further use.)
Two sentences followed by 'mp', 'mt', 'adj', 'cb' give you the result of applying that rule to them.
If you can apply this decoding and end up with the sentence on the line which has the abbreviations at its
end, the line is correct. If you can't, the line is not correct. (There is sometimes more than one way to
apply a rule to a sentence, so there may be many ways to use the decoding process. If at least one way
of using it ends you up with the sentence on the line, the abbreviation is correct; otherwise it is incorrect.)
Applied to the abbreviations on lines 2, 3 and 4 above, the decoding looks like this. We work from the
left. First, the leftmost 'pr1' is replaced by the first premise:
2. pr1 s dn pr2 mt
2. P∧Q s dn pr2 mt
2. Q dn pr2 mt
2. ~~Q pr2 mt
2. ~~Q R → ~Q mt
Finally, rule mt acts on '~~Q' and 'R → ~Q' to give you '~R', which is the sentence that actually appears on
line 2:
2. ~R
3. ~R add pr3 mp
3. S ∨ ~R pr3 mp
3. S ∨ ~R S∨~R → T mp
3. T
4. pr1 s 3 adj
4. P∧Q s 3 adj
4. P 3 adj
4. P T adj
4. T∧P
A long string of abbreviations can be difficult to decode, so we will confine ourselves to simple cases.
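Because the decoding procedure is completely mechanical, it can even be sketched as a small program.
The following Python sketch is ours (it is not part of the text or of any logic software): sentences are
nested tuples, the tokens of an abbreviation are processed from left to right over a stack of sentences,
and the fact that "there is sometimes more than one way to apply a rule" is handled by carrying a set of
possible stacks. Since rule add can tack on any disjunct whatsoever, candidates for the added disjunct
are drawn, for finiteness, from the subformulas of the target sentence.

# Formulas as nested tuples: ('atom','P'), ('not',A), ('and',A,B),
# ('or',A,B), ('if',A,B), ('iff',A,B).

def subformulas(f):
    out = {f}
    for part in f[1:]:
        if isinstance(part, tuple):
            out |= subformulas(part)
    return out

def unary(rule, a, pool):
    # All sentences obtainable from the sentence a by a one-premise rule.
    out = set()
    if rule == 'dn':
        out.add(('not', ('not', a)))                 # tack on a double negation
        if a[0] == 'not' and a[1][0] == 'not':
            out.add(a[1][1])                         # or strip one off
    elif rule == 's' and a[0] == 'and':
        out |= {a[1], a[2]}                          # either conjunct
    elif rule == 'bc' and a[0] == 'iff':
        out |= {('if', a[1], a[2]), ('if', a[2], a[1])}
    elif rule == 'add':
        for b in pool:                               # added disjunct, from a finite pool
            out |= {('or', a, b), ('or', b, a)}
    return out

def binary(rule, a, b):
    # All sentences obtainable from the sentences a and b by a two-premise rule.
    out = set()
    for x, y in ((a, b), (b, a)):                    # try both orders
        if rule == 'mp' and y[0] == 'if' and y[1] == x:
            out.add(y[2])
        if rule == 'mt' and y[0] == 'if' and x == ('not', y[2]):
            out.add(('not', y[1]))
        if rule == 'adj':
            out.add(('and', x, y))
        if (rule == 'cb' and x[0] == 'if' and y[0] == 'if'
                and x[1] == y[2] and x[2] == y[1]):
            out.add(('iff', x[1], x[2]))
        if rule == 'mtp' and y[0] == 'or' and x == ('not', y[1]):
            out.add(y[2])
        if rule == 'mtp' and y[0] == 'or' and x == ('not', y[2]):
            out.add(y[1])
    return out

def decodes_to(tokens, cited, target):
    # True if some way of reading the abbreviation yields exactly the target.
    # cited maps citation tokens like 'pr1' or '3' to their sentences.
    pool = subformulas(target)
    stacks = {()}                                    # the set of possible stacks
    for t in tokens:
        new = set()
        for st in stacks:
            if t in cited:                           # a line or premise number
                new.add(st + (cited[t],))
            elif t in ('dn', 's', 'bc', 'add') and st:
                for r in unary(t, st[-1], pool):
                    new.add(st[:-1] + (r,))
            elif t in ('mp', 'mt', 'adj', 'cb', 'mtp') and len(st) >= 2:
                for r in binary(t, st[-2], st[-1]):
                    new.add(st[:-2] + (r,))
        stacks = new
    return (target,) in stacks

# Line 2 of the example above: '~R    pr1 s dn pr2 mt'
P, Q, R = ('atom', 'P'), ('atom', 'Q'), ('atom', 'R')
cited = {'pr1': ('and', P, Q), 'pr2': ('if', R, ('not', Q))}
print(decodes_to(['pr1', 's', 'dn', 'pr2', 'mt'], cited, ('not', R)))   # True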
EXERCISES
1. Use the method of abbreviating derivations to produce shortened derivations for T38, T40-43.
7 USING THEOREMS AS RULES

A theorem that has already been derived may be used to justify a rule in later derivations. For example,
T13 ("transposition") is (P→Q) → (~Q→~P). This validates the rule:
□→○
∴ ~○ → ~□
We name such a rule by writing 'R' in front of the name of the theorem being used. An example of a use
of a theorem as a rule is:
8. S → T
9. ~T → ~S 8 RT13
Here are two arguments, and derivations, that use some theorems from Chapter 1 as rules.
~(Q∧~R) → P
P→Q
R → ~P
∴ ~(Q → R)
1. Show ~(Q → R)
2. Q→R ass id
3. (Q→R) → (P→R) pr2 RT4 T4 is (P→Q) → ((Q→R) → (P→R))
4. P→R 2 3 mp
5. (R→~P) → (P→~P) 4 RT4
6. P → ~P pr3 5 mp
7. ~P 6 RT20 T20 is (P→~P) → ~P
8. ~~(Q∧~R) pr1 7 mt
9. Q ∧ ~R 8 dn
10. Q 9 s
11. R 2 10 mp
12. ~R 9 s 11 id
S→T
T → (Q → P)
S→Q
∴ S→P
1. Show S → P
2. S → (Q→P) pr1 pr2 RT4 T4 is (P→Q) → ((Q→R) → (P→R))
3. (S→Q) → (S→P) 2 RT6 T6 is (P→(Q→R)) → ((P→Q) → (P→R))
4. S→P pr3 3 mp dd
Theorems can be used to make rules in two more ways. One way applies when a theorem is a
biconditional. Since a biconditional is logically equivalent to two conditionals, it makes sense to use the
theorem as if it were two conditionals.
A final additional way to use theorems as rules is possible when a theorem is a conditional whose
antecedent is a conjunction. This gives a rule which has multiple premises.
These options combine, so that if one side of a biconditional is a conjunction, it validates a rule with
multiple premises. For example, T38 (below) is 'P∧Q ↔ ~(P→~Q)', and one of the rules that it validates
is:
RT38 □
○
∴ ~(□→~○)
EXERCISES
1. For each of the following derivations, determine which lines are correct and which incorrect. (In
assessing a line, assume that previous lines are correct.)
∴ ((U→V) → S) → (~S → U)
1. Show ((U→V) → S) → (~S → U)
2. (U→V) → S ass cd
3. Show ~S → U
4. ~S ass cd
5. ~S → ~(U→V) 2 RT13 T13 is (P→Q) → (~Q → ~P)
6. ~(U → V) 4 5 mp
7. U 6 RT21 cd T21 is ~(P→Q) → P
8. 3 cd
2. Construct a derivation for T45, and then use RT45 to derive T46.
3. Construct derivations for T47 and T48, and construct a derivation for T49 using RT47 and RT48.
5. Derive T53.
8 DERIVED RULES
We now have over fifty theorems that can be used as rules, with more to come. There are too many of
these to remember easily. It is customary to isolate a small number of rules based on the theorems and
give them special names, and use these rules frequently in derivations. In this section we look at five of
these.
Rule nc
~(□→○) □ ∧ ~○
∴ □ ∧ ~○ ∴ ~(□→○)
This rule is often useful when you are trying to derive a conditional when a conditional derivation isn't
working for you. Instead of assuming the antecedent of the conditional in order to use cd, assume the
negation of the conditional for an indirect derivation. Then turn this negated conditional into a conjunction
of the antecedent with the negation of its consequent. This gives you a lot to work with in continuing the
derivation. As an example, suppose you are trying to validate this argument:
R→Q
R∨S
S→R
∴ P→Q
You begin the derivation:
1. Show P → Q
You may now assume 'P' for purposes of doing a conditional derivation. But 'P' does not occur among the
premises, and you may not see how to proceed. Instead of trying a conditional derivation, begin an
indirect derivation:
1. Show P → Q
2. ~(P→Q) ass id
1. Show P → Q
2. ~(P→Q) ass id
3. P ∧ ~Q 2 nc
It is now fairly easy to simplify off '~Q', and use it to derive a contradiction:
1. Show P → Q
2. ~(P→Q) ass id
3. P ∧ ~Q 2 nc
4. ~Q 3 s
5. ~R 4 pr1 mt
6. S 5 pr2 mtp
7. R 6 pr3 mp      (lines 5 and 7 are contradictories)
So you can finish the indirect derivation:
1. Show P → Q
2. ~(P→Q) ass id
3. P ∧ ~Q 2 nc
4. ~Q 3 s
5. ~R 4 pr1 mt
6. S 5 pr2 mtp
7. R 6 pr3 mp 5 id
Rule cdj (conditional as disjunction): This rule is constituted by T45 and T46, which together assert
the equivalence of a conditional with a disjunction whose left disjunct is the negation (or unnegation) of the
antecedent of the conditional and whose right disjunct is the consequent. Rule cdj has four cases:
Rule cdj
□→○ ~□ ∨ ○ ~□ → ○ □∨○
∴ ~□ ∨ ○ ∴ □→○ ∴ □∨○ ∴ ~□ → ○
This rule can be useful when attempting to derive a disjunction. Instead of deriving the disjunction directly,
derive the conditional whose antecedent is the (un)negation of the left disjunct and whose consequent is
the other disjunct. This can usually be done using a conditional derivation. Then turn the result of the
conditional derivation into the disjunction you are after using derived rule cdj.
Here is a derivation of T54 using cdj together with T53, which was derived in the exercises for the last
section. The overall structure of the derivation is to derive the biconditional by using two conditional
derivations to get the corresponding conditionals, and then use rule cb.
Rule sc (separation of cases): This is a combination of RT33 and RT49. It validates these inferences:
Rule sc
□∨○ □→△
□→△ ~□ → △
○→△ ∴△
∴ △
The first form of rule sc (on the left) says that if you are given that at least one of two cases hold (the first
premise), and if each of them imply something (the second and third premises), then you can conclude
that thing.
The second form of rule sc (on the right) applies when one of the two cases is the negation of the other.
Then their disjunction (P ∨ ~P) is logically true, and needn't be stated as an additional premise. (See
below for illustration.)
Rule sc is especially useful when other attempts to produce a derivation have failed. For example, if you
have a disjunction on an available line, then see if you can do two conditional derivations, each starting
with one of the disjuncts, and each reasoning to the desired conclusion. If you can do this, the first form of
sc applies. As an example, suppose you are given this argument:
V∨W
W → ~X
~U → X
∴ U∨V
It may not be apparent how to proceed. So consider separation of cases. You have available a
disjunction, 'V ∨ W', which is the first premise. If you can derive both V → U∨V and W → U∨V, the rule sc
will give you the desired conclusion:
1. Show U ∨ V
2. Show V → U∨V
3. V ass cd
4. U∨V 3 add cd
5. Show W → U∨V
6. W ass cd
7. ~X 6 pr2 mp
8. U 7 pr3 mt dn
9. U∨V 8 add cd
10. U∨V pr1 2 5 sc dd
When you don't have a disjunction to work with, you may be able to use the second form of sc. Suppose
you have this argument:
R∧S → Q
R→S
∴ R→Q
In applying the second form of sc, you need to choose something which will serve as the antecedent for a
conditional whose consequent is the desired conclusion, and whose negation will also serve as the
antecedent for a conditional whose consequent is the desired conclusion. What should you choose?
Often there is more than one choice that will work. In the case we are given, 'R' will work for this purpose.
That is, you will indeed be able to derive both of these:
R → (R→Q)
~R → (R→Q)
The second form of rule sc will then give you the desired conclusion:
1. Show R → Q
2. Show R → (R→Q) derive 'R → (R→Q)'
3. R ass cd
4. S 3 pr2 mp
5. Q 3 4 adj pr1 mp cd
6. Show ~R → (R → Q) derive '~R → (R→Q)'
7. ~R ass cd
8. Show R → Q
9. R ass cd
10. ~R 7 r id
11. 8 cd
12. R→Q 2 6 sc dd apply the second form of sc
Rule dm (DeMorgan's): This is a very useful rule. It lets you replace negations of conjunctions with
modified disjunctions, and vice versa. It consists of any application of the rules based on theorems T63-
T66:
T63 P ∧ Q ↔ ~(~P∨~Q)
T64 P ∨ Q ↔ ~(~P∧~Q)
T65 ~(P∧Q) ↔ ~P ∨ ~Q
T66 ~(P∨Q) ↔ ~P ∧ ~Q
So it allows any of the following inferences:
Rule dm
□∧○ □∨○ ~(□∧○) ~(□∨○)
∴ ~(~□∨~○) ∴ ~(~□∧~○) ∴ ~□ ∨ ~○ ∴ ~□ ∧ ~○
~(~□∨~○) ~(~□∧~○) ~□ ∨ ~○ ~□ ∧ ~○
∴ □∧○ ∴ □∨○ ∴ ~(□∧○) ∴ ~(□∨○)
It may be easiest to remember these forms by remembering T63 and T64 in this form:

A negation of a conjunction is equivalent to the disjunction of the negations of its parts.
A negation of a disjunction is equivalent to the conjunction of the negations of its parts.
DeMorgan's rule can be handy when you are trying to derive a disjunction. To use it, you assume the
negation of the disjunction for an indirect derivation. Rule dm lets you turn that negation into a
conjunction, and then you have both conjuncts to use in deriving a contradiction. Example:
P→U
P∨Q
Q→V
∴ U∨V
1. Show U ∨ V
2. ~(U∨V) ass id
3. ~U ∧ ~V 2 dm
4. ~P 3 s pr1 mt
5. Q 4 pr2 mtp
6. V 5 pr3 mp
7. ~V 3 s 6 id
Rule nb (negation of biconditional): This rule is based on the theorem '~(P↔Q) ↔ (P↔~Q)'. It allows
the following inferences:

Rule nb
~(□↔○)          □ ↔ ~○
∴ □ ↔ ~○        ∴ ~(□↔○)
The first form is handy if you have the negation of a biconditional. The rule lets you infer a biconditional,
which simplifies into two conditionals, which can be very useful. Here is an example:
~(P↔Q)
~Q
∴ P
1. Show P
2. ~(P ↔ Q) pr
3. P ↔ ~Q 2 nb
4. ~Q → P 3 bc
5. P 4 pr2 mp dd
The second form is handy if you want to derive the negation of a biconditional. Just derive the related
biconditional, say by using conditional derivations to derive the associated conditionals. Example:
P → (R↔Q)
R → ~Q
S→Q
~R → S
∴ ~P
1. Show ~P
2. Show ~Q → R
3. ~Q ass cd
4. ~S 3 pr3 mt
5. ~~R 4 pr4 mt
6. R 5 dn cd
7. R ↔ ~Q pr2 2 cb
8. ~(R↔Q) 7 nb
9. ~P 8 pr1 mt dd
EXERCISES
1. For each of the following derivations, determine which lines are correct and which incorrect. (In
assessing a line, assume that previous lines are correct.)
a. (U→S) → Q
P∨R → S
~(T→Q)
∴ ~P
1. Show ~P
2. T ∧ ~Q pr3 nc
3. ~(U→S) 2 s pr1 mt
4. U ∧ ~S 3 nc
5. ~(P∨R) 4 s pr2 mt
6. ~P ∧ ~R 5 dm
7. ~P 6 s dd
b. ~X ∨ W
~(V ↔ W)
~(W ↔X) ∨ V
∴ ~W
1. Show ~W
2. Show W → X
3. W ass cd
4. X 3 pr1 mtp
5. Show X → W
6. X ass cd
7. X→W pr1 cdj dd
8. W↔X 2 5 bc
9. V 8 dn pr3 mtp
10. V ↔ ~W pr2 nb
11. ~W 9 10 mp dd
c. (X →U) → (Y→Z)
~(Y ∨ ~Z)
∴ ~U
1. Show ~U
2. ~Y ∧ Z pr2 dm
3. ~(Y → Z) 2 nc
4. ~(X → U) 3 4 mt
5. X ∧ ~U 4 nc
6. ~U 5 s dd
2. Construct correct derivations for each of the following arguments using derived rules when convenient.
a. U∧V → X <use dm>
~V → Y
X∨Y → Z
∴ ~Z → ~U
9 OFFICIAL CONDITIONS FOR DERIVATIONS

UNABBREVIATED DERIVATIONS
A derivation from a set of sentences P consists of a sequence of lines that is built up in order, step by
step, where each step is in accordance with these provisions:
• Show line: A show line consists of the word "Show" followed by a symbolic sentence. A show
line may be introduced at any step. Show lines are not given a justification.
• Premise: At any step, any symbolic sentence from the set P may be introduced, justified with the
notation "pr".
• Theorem: At any step, an instance of a previously proved theorem may be entered with the name
of the theorem given as justification. (e.g. "T32")
• Rule: At any step, a line may be introduced if it follows by a rule from sentences on previous
available lines; it is justified by citing the numbers of those previous lines and the name of the rule.
This includes the following basic rules:
r mp mt dn s
adj add mtp bc cb
It also includes rules based on previously derived theorems, where the name of a rule based on a
theorem is "R" followed by the name of the theorem; e.g. "RT32". If the appropriate enabling
theorems have been derived, these rules are also available for use:
nc cdj sc dm nb
• Direct derivation: When a line (which is not a show line) is introduced whose sentence is the
same as the sentence on the closest previous uncancelled show line, one may, as the next step,
write "dd" following the justification for that line, draw a line through the word "Show", and draw a
box around all the lines below the show line, including the current line.
• Assumption for conditional derivation: When a show line with a conditional sentence is
introduced, as the next step one may introduce an immediately following line with the antecedent
of the conditional on it; the justification is "ass cd".
• Conditional derivation: When a line (which is not a show line) is introduced whose sentence is
the same as the consequent of the conditional sentence on the closest previous uncancelled
show line, one may, as the next step, write "cd" at the end of that line, draw a line through the
word "Show", and draw a box around all the lines below the show line, including the current line.
• Assumption for indirect derivation: When a show line is introduced, as the next step one may
introduce an immediately following line with the [un]negation of the sentence on the show line; the
justification is "ass id".
• Indirect derivation: When a sentence is introduced on a line which is not a show line, if there is a
previous available line containing the [un]negation of that sentence, and if there is no uncancelled
show line between the two sentences, as the next step you may write the line number of the first
sentence followed by "id" at the end of the line with the second sentence. Then you cancel the
closest previous "show", and box all sentences below that show line, including the current line.
Except for steps that involve boxing and canceling, every step introduces a line. When writing out a
derivation, every line that is introduced is written directly below previously introduced lines.
Optional variant: When boxing and canceling with direct or conditional derivation, the "dd" or "cd"
justification may be written on a later line which contains no sentence at all, and which is followed by the
number of the line that completes the derivation. With indirect derivation, the "id" justification may be
written on a later line which contains no sentence at all, and which is followed by the numbers of the two
lines containing contradictory sentences. In all cases, the lines cited must be available from the later line.
Now that we have connectives in addition to the negation and conditional signs, we can give some general
hints for doing derivations containing them. These have all been illustrated above, and they will simply be
stated here for convenience. First are strategies that are often useful for deriving certain forms of
sentences.
If you want to derive a Conjunction □ ∧ ○
Derive each conjunct (perhaps by id) and adjoin them
If you want to derive a Disjunction □ ∨ ○
Derive either disjunct and use add.
Assume '~(□ ∨ ○)' for id, and use dm.
Derive '~□ → ○', perhaps by cd, and use cdj
If you want to derive a Biconditional □ ↔ ○
Derive each conditional and use cb.
If you want to derive a Negation of a conjunction ~(□ ∧ ○)
Use id.
If you want to derive a Negation of a disjunction ~(□ ∨ ○)
Derive '~□ ∧ ~○' and use dm.
Perhaps assume '□ ∨ ○' for id, and try to derive both '□ → P∧~P' and '○→P∧~P'. Then use sc
(applied to the assumed '□ ∨ ○' and the conditionals) to derive 'P∧~P'.
If you want to derive a Negation of a biconditional ~(□ ↔ ○)
Derive '□ ↔ ~○' and use nb.
Then there are situations in which you have available a certain form of sentence, and want to know how to
make use of it.
EXERCISES
c. P ∨ (Q∧S)
R∨Q
S ∨ ~P
Q → ~S
∴ R
10 TRUTH TABLES AND TAUTOLOGIES

Consider the sentence 'P ∨ ~P'. In any situation in which 'P' is true, it is true; and in any situation in which
'P' is false, '~P' is true, so it is true as well. This pattern of reasoning can be summed up using a truth
table. The table begins with listing the two options for the truth value of 'P' in a class of situations:
P ~P
situations in which 'P' is true T
situations in which 'P' is false F
That information determines the truth value of '~P' in each class:
P ~P
situations in which 'P' is true T F
situations in which 'P' is false F T
and that information determines the truth value of 'P ∨ ~P' in each class:
P ~P P ∨ ~P
situations in which 'P' is true T F T
situations in which 'P' is false F T T
This is an example in which no matter what truth value the simple parts of the sentence have, the
sentence itself is true. Such a sentence is called a "tautology":

A tautology is a sentence which is true no matter what truth values its simple parts have.
The truth table just given shows that 'P ∨ ~P' is a tautology, because it shows that 'P ∨ ~P' is true no
matter how truth values are assigned to its atomic parts.
This use of truth tables can be applied to a sentence of any degree of complexity. Here is an example
showing that 'P∧Q → Q' is a tautology. We begin by listing all of the possible combinations of truth values
that 'P' and 'Q' might have. There are four of these: the sentences are both true, the first is true and the
second false, the first is false and the second true, or they are both false:
P Q P∧Q P∧Q → Q
T T
T F
F T
F F
This assignment of truth values to 'P' and to 'Q' determines the truth values of 'P ∧ Q' and of 'P∧Q → Q';
for 'P∧Q → Q':
P Q P∧Q P∧Q → Q
T T T T
T F F T
F T F T
F F F T
In this table there are all T's under 'P∧Q → Q', showing that it is a tautology.
To handle sentences of arbitrary numbers of sentence letters, we need to have a systematic way of
representing all of the possible combinations of truth values that the sentence letters can receive. One
way of doing this is to list all of the atomic parts of the sentence on the top of the table. Then, underneath
the rightmost letter, write alternations of T and F:
P Q R
T
F
T
F
T
F
T
F
Under the next letter to its left write alternations of TT and FF:
P Q R
T T
T F
F T
F F
T T
T F
F T
F F
and under the next letter to its left write alternations of TTTT and FFFF:
P Q R
T T T
T T F
T F T
T F F
F T T
F T F
F F T
F F F
Do this until the leftmost letter has gone through one whole set of alternations. If there is one sentence
letter, only two rows are required. If there are two, the table will contain four rows. If three, then eight.
And so on. There are always 2ⁿ rows in the table when there are n sentence letters.
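This recipe for listing rows is completely mechanical, and it is easy to have a program generate them.
For instance (a sketch of ours, in Python), the standard library's itertools.product produces exactly this
pattern when 'T' is listed before 'F':

from itertools import product

letters = ['P', 'Q', 'R']
for row in product('TF', repeat=len(letters)):
    print(' '.join(row))   # TTT, TTF, TFT, TFF, FTT, FTF, FFT, FFF: the 2**3 = 8 rows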
Next, write the sentence to be tested, and underneath it write in each row the truth value that it has when
its parts have the truth values appearing on that row. An example is:
P Q R P∧Q → P∧R
T T T T
T T F F not a tautology
T F T T
T F F T
F T T T
F T F T
F F T T
F F F T
Since there is a row (the second row) in which 'P∧Q → P∧R' does not have a T, that sentence is not a
tautology.
If there is a T in every row, it is a tautology, as in this case:
P Q R P∧Q → P∨R
T T T T
T T F T
T F T T
T F F T
F T T T
F T F T
F F T T
F F F T
This method is completely mechanical and it always yields an answer in a finite amount of time.
If a sentence is a tautology, then you need to fill in every one of its rows to show that every one is T. But if
a sentence is not a tautology, you need only find one row in which the sentence comes out F. For
example, the following partial truth table shows that 'P∨Q ↔ R∨Q' is not a tautology:
P Q R P∨Q ↔ R∨Q
T T T
T T F
T F T
T F F F
F T T
F T F
F F T
F F F
If a sentence is not a tautology and if you can identify what assignment of truth values to the simple parts
will show this, you needn't set up a whole truth table. Just give a single row:
P Q R P∨Q ↔ R∨Q
T F F F
and state that this assignment of truth values to sentence letters makes the sentence false.
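Since the whole test is mechanical, it is easy to program. In the following Python sketch (ours, not part of
any logic software), a sentence is represented as a function from an assignment of truth values to a truth
value, and the search stops at the first falsifying row, just like the shortcut above:

from itertools import product

def falsifying_row(sentence, letters):
    # Return an assignment making the sentence false, or None if it is a tautology.
    for values in product([True, False], repeat=len(letters)):
        row = dict(zip(letters, values))
        if not sentence(row):
            return row
    return None

# 'P∨Q ↔ R∨Q', with '↔' rendered as ==
s = lambda v: (v['P'] or v['Q']) == (v['R'] or v['Q'])
print(falsifying_row(s, ['P', 'Q', 'R']))
# {'P': True, 'Q': False, 'R': False} -- the same row displayed above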
It turns out that every theorem of the first two chapters of this text is a tautology. This is because the rules
and techniques in these chapters only allow derivations of tautologies when no premises are used. It is
also a fact that any tautology can be derived by the rules we have. So we have two ways to show
tautologies: theorems and truth tables, and we have one way to show non-tautologies: truth tables.
EXERCISES
1. Use truth tables or truth value assignments to determine whether each of these is a tautology.
a. (R↔S) ∨ (R↔~S)
b. R ↔ (S↔R)
c. R ∨ (S∧T) → R ∧ (S∨T)
d. ~U → (U→~V)
e. (~R↔R) → S
f. (S∧T) ∨ (S∧~T) ∨ ~S
11 TAUTOLOGICAL VALIDITY
It is easy to show by doing a derivation that this argument is valid:
P∧P
∴ P
It is also possible to show that the argument is valid using a technique like that of truth tables. Just show
that there is no logically possible situation in which the premise is true and the conclusion false. This can
be done as follows:
All logically possible situations can be divided into two classes. In one class of situations, 'P' is
true; no situation of this sort can be one in which the argument has true premises and a false
conclusion, because in any of these situations the conclusion, 'P', is true. In all other situations,
'P' is false. But then so is 'P∧P'. So none of these are situations in which the argument has true
premises and a false conclusion. So it is valid.
Generalizing, we can say that an argument is valid whenever the premises "tautologically imply" the
conclusion. This relation is called Tautological Validity. It is defined as follows:

An argument is tautologically valid if and only if there is no assignment of truth values to its
sentence letters which makes all of its premises true and its conclusion false.
There is a mechanical way to test an argument to see if it is tautologically valid. Just create a truth table in
which all of the premises and conclusion appear at the top of some column. If there is no row in which all
of the premises have T's under them and the conclusion has an F, then the argument is tautologically
valid. If there is such a row, then the argument is not tautologically valid; instead, we say that it is
tautologically invalid.
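This test, too, is mechanical, and the truth-table sketch from the last section extends directly (again, a
sketch of ours): represent the premises as a list of functions and the conclusion as one more function,
and search for a row on which the premises are all true and the conclusion false.

from itertools import product

def counterexample_row(premises, conclusion, letters):
    # Return a row making all premises true and the conclusion false, or None.
    for values in product([True, False], repeat=len(letters)):
        row = dict(zip(letters, values))
        if all(p(row) for p in premises) and not conclusion(row):
            return row
    return None   # no such row: the argument is tautologically valid

print(counterexample_row([lambda v: v['P'] and v['P']],   # premise 'P∧P'
                         lambda v: v['P'],                # conclusion 'P'
                         ['P']))
# None -- 'P∧P ∴ P' is tautologically valid, as argued informally above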
Suppose that we are wondering whether this argument:
P → ~Q
R ↔ P∧Q
Q∨R
∴ Q∧R
is tautologically valid. Here is a truth table to test this:

P Q R   P → ~Q   R ↔ P∧Q   Q∨R   Q∧R
T T T   F        T         T     T
T T F   F        F         T     F
T F T   T        F         T     F
T F F   T        T         F     F
F T T   T        F         T     F
F T F   T        T         T     F
F F T   T        F         T     F
F F F   T        T         F     F
There is in fact a row (the sixth) in which the premises are all true and the conclusion is false. So the
argument is not tautologically valid.
Using the same technique, we can show that this argument:
P → ~Q
R ↔ P∧Q
Q∨R
∴ P↔R
is tautologically valid. The truth table is:
P Q R P → ~Q R ↔ P∧Q Q∨R P↔R
T T T F T T T
T T F F F T F
T F T T F T T
T F F T T F F
F T T T F T F
F T F T T T T
F F T T F T F
F F F T T F T
Here, there is no row in which the premises are all true and the conclusion 'P ↔ R' is false. So that
argument is tautologically valid.
Although we haven’t shown this here, if there is a derivation using the rules and techniques of Chapters 1
and 2 showing that an argument is valid, then the argument is indeed tautologically valid. And vice versa:
if an argument is tautologically valid, then there is a derivation to show that the argument is valid using the
techniques of chapters 1 and 2.
So we have two ways to show that an argument is tautologically valid: it can be done with a truth table or
with a derivation. We have only one way to show that some argument is not tautologically valid: find a way
to assign truth values to the sentential letters so that all the premises are true and the conclusion false.
EXERCISES
For each of the following arguments, either show that it is tautologically valid, or show that it is
tautologically invalid.
a. U∧V → X
~V → U
X∨V → U
∴ V → ~U
b. (X→Y) → Z
~Z
∴ ~Y
c. ~(P ↔ Q)
R∨P
~Q → R
∴ R
d. S∨T
W∨S
~T ∨ ~S
∴ ~S
e. W→U
~W → V
∴ U∨V
f. P ↔ ~Q
Q → R∨P
R → ~Q ∨ ~P
∴ Q∨R
g. P ∨ (Q∧S)
S∨Q
S ∨ ~P
∴ S
h. P ∧ (Q∨S)
S∨Q
S∨P
∴ S
Conjunction rules:

Rule s (simplification)
□∧○          □∧○
∴ □    or    ∴ ○

Rule adj (adjunction)
□
○
∴ □∧○

Disjunction rules:

Rule add (addition)
□            □
∴ □∨○   or   ∴ ○∨□

Rule mtp (modus tollendo ponens)
□∨○          □∨○
~○      or   ~□
∴ □          ∴ ○

Biconditional rules:

Rule bc (biconditional-to-conditional)
□↔○          □↔○
∴ □→○   or   ∴ ○→□

Rule cb (conditionals-to-biconditional)
□→○
○→□
∴ □↔○
DERIVED RULES (May be used if the theorems on which they are based have been
derived.)
Rule nc
~(□→○)          □ ∧ ~○
∴ □ ∧ ~○        ∴ ~(□→○)

Rule cdj
□→○         ~□ ∨ ○      ~□ → ○      □∨○
∴ ~□ ∨ ○    ∴ □→○       ∴ □∨○       ∴ ~□ → ○

Rule sc
□∨○             □→△
□→△             ~□ → △
○→△             ∴ △
∴ △

Rule dm
□∧○             □∨○             ~(□∧○)          ~(□∨○)
∴ ~(~□∨~○)      ∴ ~(~□∧~○)      ∴ ~□ ∨ ~○       ∴ ~□ ∧ ~○

~(~□∨~○)        ~(~□∧~○)        ~□ ∨ ~○         ~□ ∧ ~○
∴ □∧○           ∴ □∨○           ∴ ~(□∧○)        ∴ ~(□∨○)

Rule nb
~(□↔○)          □ ↔ ~○
∴ □ ↔ ~○        ∴ ~(□↔○)
UNABBREVIATED DERIVATIONS
A derivation from a set of sentences P consists of a sequence of lines that is built up in order, step by
step, where each step is in accordance with these provisions:
• Show line: A show line consists of the word "Show" followed by a symbolic sentence. A show
line may be introduced at any step. Show lines are not given a justification.
• Premise: At any step, any symbolic sentence from the set P may be introduced, justified with the
notation "pr".
• Theorem: At any step, an instance of a previously proved theorem may be entered with the name
of the theorem given as justification. (e.g. "T32")
• Rule: At any step, a line may be introduced if it follows by a rule from sentences on previous
available lines; it is justified by citing the numbers of those previous lines and the name of the rule.
This includes the following basic rules:
r mp mt dn s
adj add mtp bc cb
It also includes rules based on previously derived theorems, where the name of a rule based on a
theorem is "R" followed by the name of the theorem; e.g. "RT32". If the appropriate enabling
theorems have been derived, these rules are also available for use:
nc cdj sc dm nb
• Direct derivation: When a line (which is not a show line) is introduced whose sentence is the
same as the sentence on the closest previous uncancelled show line, one may, as the next step,
write "dd" following the justification for that line, draw a line through the word "Show", and draw a
box around all the lines below the show line, including the current line.
• Assumption for conditional derivation: When a show line with a conditional sentence is
introduced, as the next step one may introduce an immediately following line with the antecedent
of the conditional on it; the justification is "ass cd".
• Conditional derivation: When a line (which is not a show line) is introduced whose sentence is
the same as the consequent of the conditional sentence on the closest previous uncancelled
show line, one may, as the next step, write "cd" at the end of that line, draw a line through the
word "Show", and draw a box around all the lines below the show line, including the current line.
• Assumption for indirect derivation: When a show line is introduced, as the next step one may
introduce an immediately following line with the [un]negation of the sentence on the show line; the
justification is "ass id".
• Indirect derivation: When a sentence is introduced on a line which is not a show line, if there is a
previous available line containing the [un]negation of that sentence, and if there is no uncancelled
show line between the two sentences, as the next step you may write the line number of the first
sentence followed by "id" at the end of the line with the second sentence. Then you cancel the
closest previous "show", and box all sentences below that show line, including the current line.
Except for steps that involve boxing and canceling, every step introduces a line. When writing out a
derivation, every line that is introduced is written directly below previously introduced lines.
Optional variant: When boxing and canceling with direct or conditional derivation, the "dd" or "cd"
justification may be written on a later line which contains no sentence at all, and which is followed by the
number of the line that satisfies the conditions for direct or conditional derivation. With indirect derivation,
the "id" justification may be written on a later line which contains no sentence at all, and which is followed
by the numbers of the two lines containing contradictory sentences. In all cases, the lines cited must be
available from the later line.
STRATEGY HINTS
Begin with a sketch of an outline of a derivation, and then fill in the details.
Negation of conjunction ~(□ ∧ ○):
Use dm to turn this into '~□ ∨ ~○', and then try to derive either '□' or '○' to use mtp.

Negation of disjunction ~(□ ∨ ○):
Use dm to turn this into '~□ ∧ ~○'; then simplify and use the conjuncts singly.

Negation of conditional ~(□ → ○):
Use nc to derive '□ ∧ ~○', then simplify and use the conjuncts singly.

Negation of biconditional ~(□ ↔ ○):
Use nb to turn this into '□ ↔ ~○', and use bc to get the corresponding conditionals.
SECTION 1
Exercises 1 and 2 answered together:
a. Not a sentence
b. Informal notation
~Q↔~R
F
/\
~Q ~R
F T
| |
Q R
T F
c. Official notation
~(Q↔R)
T
|
Q↔R
F
/\
Q R
T F
d. Not a sentence
e. Informal notation
(P→Q) ∨ (R→~Q)
T
/\
P→Q R→~Q
T
/\ /\
P Q R ~Q
F |
Q
f. Not a sentence
g. Informal notation
P∧Q → (Q→R∨Q)
T
/\
P∧Q Q→R∨Q
/\ T
P Q /\
Q R∨Q
T
/\
R Q
T
h. Informal notation
P ↔ (P↔Q∧R)
F
/\
P P↔Q∧R
T F
/\
P Q∧R
T F
/\
Q R
F
i. Informal notation
P ∨ (Q→P)
T
/\
P Q→P
T /\
Q P
SECTION 2
1. a. R∧P
b. W∨R
c. ~R ∧ T
d. R∧S
e. Q↔R
SECTION 3
1. a. ~(P ∨ (Q ∧ R))
F
|
P ∨ (Q ∧ R)
T
/\
P Q∧R
T /\
Q R
b. ~P ∨ (Q ∧ R)
F
/\
~P Q∧R
F F
| /\
P Q R
T F F
c. ~(P ∨ R) ↔ ~P ∨ R
T
/\
~(P ∨ R) ~P ∨ R
F F
| /\
P∨R ~P R
T F F
/\ |
P R P
T F T
d. ~Q ∧ (P ∨ (Q↔R))
T
/\
~Q P ∨ (Q↔R)
T T
| /\
Q P Q↔R
F T T
/\
Q R
F F
3. a. Only if Veronica doesn't leave will William leave, or Veronica and William and Yolanda will all leave.
(Only if Veronica doesn't leave will William leave) ∨ (Veronica and William and Yolanda will leave)
(William will leave → Veronica doesn't leave) ∨ (V ∧ W ∧ Y)
(W → ~V) ∨ (V ∧ W ∧ Y)
b. If neither William nor Veronica leaves, Yolanda won't either
If neither William [leaves] nor Veronica leaves, [then] Yolanda won't [leave]
~(W ∨ V) → ~Y
c. If William will leave if Veronica leaves, then he will surely leave if Yolanda leaves
If (William will leave if Veronica leaves) then ([William] will leave if Yolanda leaves)
(V → W) → (Y → W)
4. " Veronica leaves but neither William nor Yolanda leaves" corresponds to the truth-value
assignment: V --- true; W --- false; Y --- false. We use parse trees to compute the truth values of the
complex sentences.
a. (W → ~V) ∨ (V ∧ W ∧ Y)
T
/\
W → ~V V ∧ W ∧ Y
T F
/\ /\
W ~V V∧W Y
F F F F
| /\
V V W
T T F
b. ~(W ∨ V) → ~Y
T
/\
~(W ∨ V) ~Y
F T
| |
W∨V Y
T F
/\
W V
F T
c. (V → W) → (Y → W)
T
/\
V→W Y→W
F T
/\ /\
V W Y W
T F F F
d. ~(W ∨ V ∨ Y)
F
|
W∨V∨Y
T
/\
W∨V Y
T F
/\
W V
F T
SECTION 4
1. a. None; if we had ~~Q instead of Q it would be an instance of MTP.
b. Simplification
c. Double Negation
d. MTP
e. CB
f. None.
g. BC
h. Addition
i. None
2. a. ~W ↔ ~X by CB; also ~X ↔ ~W by CB
b. ~W by MTP
c. Nothing
d. ~W by S; also ~X by S
e. W → ~X by BC; also ~X → W by BC
f. Nothing
SECTION 7
1. a. All fine
b. In line 8, the sentence that can be inferred from 7 by RT39 is W → ~S.
2, 3, 4, 5: Derivations of numbered theorems not given
SECTION 8
1. a. All fine
b. Line 4: MTP does not apply;
Line 8: BC (biconditional to conditional) does not apply; we could use CB;
Line 11: MP does not apply to biconditionals; you have to split the biconditional into conditionals
first using BC.
c. Line 2: the result of applying DM to pr2 is ~Y ∧ ~~Z rather than ~Y ∧ Z.
Line 3: NC doesn't apply; NC would generate line 3 if line 2 were Y ∧ ~Z.
Line 4: Line 4 is not available at line 4; it may not be cited to justify itself. The sentence could be
justified instead by 'pr1 3 mt'.
2. a.
1. Show ~Z → ~U
2. ~Z ass cd
3. ~(X ∨ Y) pr3 2 mt
4. ~X ∧ ~Y 3 dm
5. ~X 4 s
6. ~Y 4 s
7. ~(U ∧ V) pr1 5 mt
8. ~U ∨ ~V 7 dm
9. ~~V 6 pr2 mt
10. ~U 8 9 mtp cd
b.
1. Show ~V
2. ~(X→Y) pr1 pr2 mt
3. X ∧ ~Y 2 nc
4. ~Y 3 s
5. ~V 4 pr3 mt dd
c. P∨Q
Q→S
U ∨ ~S
P∨S → R
R→U
∴ U
1. Show U
2. Show P → U
3. P ass cd
4. P∨S 3 add
5. R 4 pr4 mp
6. U 5 pr5 mp cd
7. Show Q → U
8. Q ass cd
9. S 8 pr2 mp
10. ~~S 9 dn
11. U 10 pr3 mtp cd
12. U pr1 2 7 sc
13. 12 dd
SECTION 9
c. P ∨ (Q∧S)
R∨Q
S ∨ ~P
Q → ~S
∴ R
1. Show R
2. ~R ass id
3. Q 2 pr2 mtp
4. ~S 3 pr4 mp
5. ~Q ∨ ~S 4 add
6. ~(Q∧S) 5 dm
7. P 6 pr1 mtp
8. ~~P 7 dn
9. S 8 pr3 mtp
10. 4 9 id
SECTION 10
d. ~U → (U→~V); tautology
U V ~U → (U→~V)
T T T
T F T
F T T
F F T
e. (~R↔R) → S; tautology
R S (~R↔R) → S
T T T
T F T
F T T
F F T
SECTION 11
a. U∧V → X NO
~V → U
X∨V → U
∴ V → ~U
U V X U∧V → X ~V → U X∨V → U V → ~U
T T T T T T F
b. (X→Y) → Z YES
~Z
∴ ~Y
X Y Z (X→Y) → Z ~Z ~Y
T T T T F F
T T F F T F
T F T T F T
T F F T T T
F T T T F F
F T F F T F
F F T T F T
F F F F T T
c. ~(P ↔ Q) YES
R∨P
~Q → R
∴ R
P Q R ~(P ↔ Q) R∨P ~Q → R R
T T T F T T T
T T F F T T F
T F T T T T T
T F F T T F F
F T T T T T T
F T F T F T F
F F T F T T T
F F F F F F F
d. S∨T NO
W∨S
~T ∨ ~S
∴ ~S
S T W S∨T W∨S ~T ∨ ~S ~S
T F T T T T F
e. W→U YES
~W → V
∴ U∨V
U V W W→U ~W → V U∨V
T T T T T T
T T F T T T
T F T T T T
T F F T F T
F T T F T T
F T F T T T
F F T F T F
F F F T F F
f. P ↔ ~Q NO
Q → R∨P
R → ~Q ∨ ~P
∴ Q∨R
P Q R P ↔ ~Q Q → R∨P R → ~Q ∨ ~P Q∨R
T F F T T T F
g. P ∨ (Q∧S) YES
S∨Q
S ∨ ~P
∴ S
P Q S P ∨ (Q∧S) S∨Q S ∨ ~P S
T T T T T T T
T T F T T F F
T F T T T T T
T F F T F F F
F T T T T T T
F T F F T T F
F F T F T T T
F F F F F T F
h. P ∧ (Q∨S) NO
S∨Q
S∨P
∴ S
P Q S P ∧ (Q∨S) S∨Q S∨P S
T T F T T T F
Chapter Three
Name letters, Predicates, Variables and Quantifiers
1 NAME LETTERS AND PREDICATES
In chapters 1 and 2 we studied logical relations that depend only on the sentential connectives: '~', '→', '∧',
'∨', '↔'. The atomic sentences -- those that contain no connectives -- were symbolized by sentential
letters, and we paid no attention to any internal structure that they might have. It is now time to study that
structure. The Predicate Calculus is a system of logic that studies the ways in which sentences are
constructed out of name letters, predicates, variables, and quantifiers, as well as connectives. We have
already studied connectives; in this section we introduce name letters, predicates, variables, and
quantifiers.
In our logical symbolism, name letters are written as the small letters: a, b, c, d, e, f, g, h (and with
subscripts, such as ‘c3’). Any small letter between 'a' and 'h' can be used as a name letter. Name letters
in the logical symbolism correspond to names of English:
Carlos, Agatha, Dr. Samuelson, Ms. Bernstein, Madame Curie, David Rockefeller, San
Diego, Germany, UCLA, General Electric, Microsoft, Google, Macy's, The Los Angeles
Times, I-405, Memorial Day, the FBI, ...
Any one of these may be symbolized by means of a name letter:
h Henry
c California
g General Electric
The simplest way to make a sentence containing a name letter is to combine it with a one-place
predicate. One-place predicates appear in our logical symbolism as the capital letters from A to O (and
with subscripts, such as ‘G2’). One-place predicates correspond roughly to grammatical predicates in
English; in the following examples, the underlined phrases would be symbolized as one-place predicates:
Agatha is clever.
Henry is a giraffe.
Ferdy dances well.
Georgia is a state.
Ann will run for re-election.
(The parts that are not underlined are symbolized with name letters.)
Whereas English proper names are usually capitalized, the logical name letters that represent them are
not, and whereas English predicates are typically not capitalized, the logical predicates that represent
them are capitalized. There is nothing "logical" about this reverse convention; it is an historical accident,
but it has now become part of the tradition of symbolic logic. Further, in the usual formulations of the
predicate calculus the predicate comes before the name letter, instead of after it as in English. This, too,
is an historical accident. So the sentences given above can be symbolized as follows:
Agatha is clever. Ca
Henry is a giraffe. Gh
Ferdy dances well. Df
Georgia is a state. Ag
Ann will run for re-election. Ea
A one-place ("monadic") predicate is any capital letter between 'A' and 'O' (optionally with a
numerical subscript).
A name letter is any small letter from 'a' to 'h' (optionally with a numerical subscript).
An atomic sentence may be formed by writing a one-place predicate followed by a name
letter.
EXERCISES
1. Symbolize each of the following sentences:
a. Fred is an orangutan.
b. Gertrude is an orangutan but Fred isn't.
c. Tony Blair will speak first.
d. Gary lost weight recently; he is happy.
e. Felix cleaned and polished.
f. Darlene or Abe will bat clean-up.
We assume that a one-place predicate is true of certain things, and that a name letter stands for a unique
thing. A sentence consisting of a one-place predicate together with a name letter is true if and only if the
predicate is true of the thing that the name letter stands for. Thus, taking the examples listed above, we
assume that 'C' is true of all and only clever things, that 'a' stands for Agatha (presumably a person or
animal), and then:
Ca
is true if and only if Agatha is one of the clever things that the predicate is true of. Similarly, if `G' is true of
giraffes, then `Gh' is true if Henry is one of the giraffes. If `E' is true of the things that will run for
re-election, and if 'a' stands for Ann, then `Ea' is true if and only if Ann will run for reelection.
Predicates are generally true of several specific things, but a predicate might be true of only one thing ('is
a moon of the earth') or might not be true of anything at all. If there are in fact no dragons, the sentence:
Df Fred is a dragon
contains a predicate 'D' that is true of nothing at all. This means that the sentence `Df' will be false, no
matter who or what `Fred' stands for.
In this chapter we assume that each name letter in our logical symbolism stands for a unique thing. This
assumption is an idealization, for it is not true that the words of English that we are representing by name
letters always succeed in naming something. If there is no such person as Paul Bunyan, then `Paul
Bunyan' is a "name" that names nothing at all. In some systems of logic it is possible to use name letters
which do not stand for anything; these systems of logic are called "free logics". (They are called "free"
because they are "free of" the assumption that the name letters they contain actually stand for things.)
Free logics are a bit more complicated than standard logic. (Studies of free logic assume that the reader
is already acquainted with the standard logic taught here.) In this text we assume that any name letter
that we use stands for something.
EXERCISES
2. Symbolize each of the following, assuming:
`D' is true of doctors
'L' is true of people who are in love
'h' stands for Hans
'a' stands for Amanda
a. Hans is a doctor but Amanda isn't.
b. Hans, who is a doctor, is in love
c. Hans is in love but Amanda isn't
d. Neither Hans nor Amanda is in love
f. Hans and Amanda are both doctors.
Variables: Any small letter from 'i' to 'z' is a variable; also small letters between
'i' and 'z' with numerical subscripts.
The universal quantifier sign is '∀'.
The existential quantifier sign is '∃'.
A quantifier is either quantifier sign followed by a variable:
∀x ∀z ∀s ∃x ∃z ∃s
Here is how we use quantifiers. Suppose that we wish to say -- as some philosophers have said -- that
everything in the universe is either mental or physical. Suppose that `M' is the one-place predicate `is
mental', and `H' is the one-place predicate `is physical'. Then we symbolize the claim that everything is
either mental or physical as follows:
∀x(Mx ∨ Hx).
The initial `∀x' is a universal quantifier phrase. This is followed by something, `(Mx ∨ Hx)', which we will
call a symbolic formula. A formula is just like a symbolic sentence except that instead of a name letter
following each predicate we may have a variable, such as `x' above. The displayed formula says that
everything satisfies a certain condition. The universal quantifier is responsible for the "everything" part,
and the combination of variables and predicates tells us what the condition is. In the case in point, the
condition is that it is either mental or physical:
∀x (Mx ∨ Hx)
An existential quantifier can appear in a formula in the same place that a universal quantifier may appear:
∃x (Mx ∨ Hx)
In order to construct sentences in our new extended notation, we begin by defining what a symbolic
formula is. Intuitively, a symbolic formula is like a sentence, except that it may contain variables in places
where name letters otherwise would appear. We use the word 'term' to cover both name letters and
variables.
So 'a' and 'x' are both terms. A formula is built up in steps, as follows:
Both 'Henry is a giraffe' and 'x is a giraffe' are symbolized as atomic formulas:
Gh Gx
We can also make formulas out of other formulas by "generalizing" them with quantifiers:
Quantified formulas: If □ is a formula, and 'x' is a variable, then these are quantified
formulas:
∀x□ ∃x□
We may informally omit parentheses exactly as we did in the last chapter, to produce informal notation:
∀xGx ∧ ∃xFx ∃xFx ∨ ∀y(Gy→Fy)
(Note that '∀yGy→Fy' is a conditional; it is not equivalent to '∀y(Gy→Fy)', which is a universal
generalization of a conditional.)
Likewise, we can add a quantifier to a formula that already has one or several quantifiers within it:
∀x(Gx → ∃yFy) ∀x~∃y(Gx ∨ ~Fy) ∀x∀y∀z(Gx → Fz)
Every formula is either atomic, or it has a main connective or a quantifier with scope over the whole
formula. The main connective or quantifier in a formula is the last connective or quantifier that was added
in constructing the formula. Formulas may be parsed as in chapters 1 or 2. Some examples are:
∀x(Gx → ∃yFy)
|
(Gx → ∃yFy)
/\
Gx    ∃yFy
      |
      Fy

∀x~∃y(Gx ∨ ~Fy)
|
~∃y(Gx ∨ ~Fy)
|
∃y(Gx ∨ ~Fy)
|
(Gx ∨ ~Fy)
/\
Gx    ~Fy
      |
      Fy
EXERCISES
1. For each of the following, say whether it is a formula in official notation, or in informal notation, or not a
formula at all. If it is a formula, parse it.
∀x(Fx → Gx)
∃x(Fx ∧ ∀yGy)
Using the notion of the scope of a quantifier, we can say when a quantifier occurrence binds an
occurrence of a variable in a formula:
(Notice that a variable occurrence that is part of a quantifier is automatically bound by that quantifier.)
The arrows here indicate which variables are bound by the quantifier:
∀x(Fx → Gx)
The initial quantifier binds both occurrences of 'x' because (1) they are within its scope, (2) they are the
same letter as the one in the quantifier itself, and (3) they are not already bound by another quantifier in
the formula. These examples are similar:
∃xFx ∧ ∃y(Gy ∧ Hy)
∃x(Fx ∧ ∀yGy)
The following example illustrates a case in which an occurrence of 'x' (the last one) is not bound by the
initial quantifier '∃x′, even though it is within its scope. This is because there is another quantifier inside
that already binds that occurrence of 'x':
∃x(Fx ∧ ∃x ( ∃zGz ∧ Hx))
Using the notion of a quantifier binding an occurrence of a variable, we can define what a sentence is:
A variable occurrence that is not bound is called "free". So a sentence can also be defined as a formula
that contains no free occurrences of variables.
All of the examples given above are sentences. The following formulas are not sentences because
certain occurrences of variables in them are not bound by any of their quantifiers:
∃xFx ∧ ∃y(Gx ∧ Hy) the scope of the initial quantifier does not include the second 'x'
∃x(Fx ∧ ∃y (∃zGz ∧ Hz)) the scope of the quantifier with 'z' does not extend far enough
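This definition, too, is mechanical. In the Python sketch below (ours; formulas are nested tuples, with
variables written as the letters 'i' through 'z'), the free variables of a formula are computed recursively,
and a formula counts as a sentence just in case the result is empty:

def free_vars(f):
    op = f[0]
    if op == 'pred':                                # e.g. ('pred', 'F', 'x')
        t = f[2]
        return {t} if 'i' <= t <= 'z' else set()    # variables are 'i'..'z'
    if op in ('forall', 'exists'):                  # e.g. ('exists', 'y', ...)
        return free_vars(f[2]) - {f[1]}             # the quantifier binds its variable
    return set().union(*[free_vars(p) for p in f[1:]])   # connectives: combine parts

# ∃xFx ∧ ∃y(Gx ∧ Hy) -- the first formula displayed above
f = ('and', ('exists', 'x', ('pred', 'F', 'x')),
            ('exists', 'y', ('and', ('pred', 'G', 'x'), ('pred', 'H', 'y'))))
print(free_vars(f))   # {'x'}: the second occurrence of 'x' is free, so not a sentence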
EXERCISES
1. For each of the following, say whether it is a sentence, a formula that is not a sentence, or not a
formula at all. (Include sentences and formulas in informal notation as sentences and formulas.) If it is a
sentence or formula, indicate which quantifiers bind which variables.
What do quantifiers mean? This can be answered indirectly by giving a way to read symbolic formulas in
English. We already know how to read the parts of formulas without quantifiers or variables; we have:
Gh Henry is a giraffe
Ea Ann will run for reelection
Gh ∧ Ea Henry is a giraffe and Ann will run for reelection.
Gh → Ea If Henry is a giraffe then Ann will run for reelection.
Read any universal quantifier as "everything is such that", while reading any variable that it
binds as a pronoun which has the 'everything' as its antecedent.
Read any existential quantifier as "something is such that" while reading any variable that it
binds as a pronoun which has the 'something' as its antecedent.
As in the case of connectives, we need to distinguish carefully between the official definition of the
quantifiers and the question of how best to read them in English. The official definition of the quantifiers
has to do with the truth-values of the sentences that are produced using them:

A universally quantified sentence '∀x(...x...)' is true if and only if, treating 'x' as if it were a
name letter, '...x...' is true no matter what 'x' stands for.
An existentially quantified sentence '∃x(...x...)' is true if and only if, treating 'x' as if it were a
name letter, '...x...' is true for at least one thing that 'x' might stand for.

This test explains why we read `∀x(Mx ∨ Hx)' in English as `Everything is either mental or physical'. It is
because the test for the truth of `∀x(Mx ∨ Hx)' succeeds if everything is indeed either mental or physical,
and it fails if not everything is either mental or physical. To see that this is so, compare the meaning of the
English sentence with the official statement of the conditions under which the symbolized version is true:
Suppose that certain philosophers are right, and everything is either mental or physical. Then if
we treat `x' as a name letter, the phrase `Mx ∨ Hx' must be true no matter what `x' stands for.
Because it can only stand for something that is mental or physical (that's all there is), and if it
stands for something mental the first disjunct is satisfied, and if it stands for something physical
then the second disjunct is satisfied.
Suppose on the other hand that not everything is either mental or physical. (Suppose, as some
philosophers have argued, that the number 4 is neither a mental thing nor a physical thing.) Then
if we treat `x' as a name letter, we will not find that the phrase `Mx ∨ Hx' is true no matter what `x'
stands for. For if `x' stands for the number 4, neither disjunct will be satisfied.
These considerations do not settle the question of whether everything is either mental or physical. Instead
they show that there is an equivalence between the truth-value, in English, of the sentence `Everything is
either mental or physical', and the truth-value, according to our official account, of the predicate calculus
sentence `∀x(Mx ∨ Hx)'.
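The official account can be mimicked over a small finite universe, which may help make it concrete. In
this Python sketch (ours; the three-element universe and the extensions of 'M' and 'H' are invented purely
for illustration), '∀x' becomes all(...) and '∃x' becomes any(...):

universe = ['a mind', 'a rock', 'the number 4']
mental   = {'a mind'}
physical = {'a rock'}

def M(x): return x in mental        # 'x is mental'
def H(x): return x in physical      # 'x is physical'

print(all(M(x) or H(x) for x in universe))   # ∀x(Mx ∨ Hx): False -- the number 4 is neither
print(any(M(x) or H(x) for x in universe))   # ∃x(Mx ∨ Hx): True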
EXERCISES
1. Suppose that `A' stands for `is a sofa', `B' stands for `is well-built' and `C' stands for `is comfortable'.
For each of the following sentences, produce an accurate but "cumbersome" reading in English as well as
a natural idiomatic reading if possible.
a. ∃x(Ax ∧ Bx) e. ∀y~Ay
b. ∀x(Ax → Bx) f. ∀z(Az ∧ Bz → Cz)
c. ∀x(Ax ∨ Bx) g. ∃xCx ∧ ∀yBy
d. ∃x~Ax h. ∃x(Cx → ∀yBy)
2. Assume that all giraffes are friendly, and that some giraffes are clever and some aren't. What are the
truth-values of these sentences?
a. ∀x(Gx → Fx) d. ∃y(Fy ∧ Cy)
b. ∀x(Gx → Cx) e. ∃z(Gz ∧ Cz)
c. ∃x(~Fx ∧ Gx) f. ∀x(Gx → ~Gx)
5A CATEGORICAL SENTENCES
The ancient Greek philosopher Aristotle is generally credited with the invention of formal logic. He devised
a fairly complete and accurate study of the logical relations among sentences of a certain special sort.
These are called "categorical" sentences, and they include any sentence which has one of the following
forms (with Aristotle's titles):
Universal affirmative: Every A is B
Particular affirmative: Some A is B
Universal negative: No A is B
Particular negative: Some A is not B
These categorical sentences are only a few of the forms that can be represented in modern predicate
logic, but they are simple and basic, and their treatment provides a nice introduction to the symbolism.
The particular affirmative form -- "Some A is B" -- is easy to symbolize; it gets represented as:
∃x(Ax ∧ Bx),
that is, "Something is such that it is both A and B."
Plural forms of categorical sentences are symbolized just like the singular forms:
All A's are B Every A is B ∀x(Ax → Bx)
Some A's are B Some A is B ∃x(Ax ∧ Bx)
This might seem wrong if you think that the use of the plural in English commits you to the view that there
is more than one A which is B. (The symbolized version has no such commitment.) The answer seems
to be that we sometimes use the plural to convey the thought that there is more than one A, but
sometimes we are neutral about this. In this text we will adopt the weaker interpretation, which makes
"Some A's are B" true whenever there is at least one A that is B.
There are two traps to beware of when symbolizing categorical sentences. They both involve trying to
make the symbolizations of "universal" and "particular" sentences look alike. Suppose that we want to
symbolize:
Some dogs are brown.
It will not be correct to symbolize this as:
∃x(Dx → Bx),
that is:
Something is such that if it's a dog then it's brown.
This would be wrong because in some possible situations the symbolized version would differ in
truth-value from the English version. Consider a possible situation which is just like the actual one except
that all dogs are black, white, or grey. The English sentence 'Some dogs are brown' would be false in that
situation. But the symbolized version would be true in that situation. It would be true for the totally
irrelevant reason that not everything is a dog!!! Remember the official account of the existential
quantifier: '∃x(Dx → Bx)' is true if and only if, treating 'x' as a name letter, 'Dx → Bx' is true for at least
one thing that 'x' might stand for; and anything that is not a dog makes that conditional true.
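The trap can be seen vividly over a small finite universe. In this Python sketch (ours; the universe and the
extensions are invented), the situation contains dogs, none of them brown, and the two symbolizations
come apart:

universe = ['fido', 'rex', 'a brown stone']
dog   = {'fido', 'rex'}
brown = {'a brown stone'}       # in this situation no dog is brown

right = any(x in dog and x in brown for x in universe)           # ∃x(Dx ∧ Bx)
wrong = any((x not in dog) or (x in brown) for x in universe)    # ∃x(Dx → Bx)
print(right)    # False, as 'Some dogs are brown' should be in this situation
print(wrong)    # True! -- satisfied by the stone, which is not a dog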
EXERCISES
(The combination Adjective + Noun, such as 'brown dog', gets symbolized as a conjunction. For the
cases under consideration in this text, that is always the way to symbolize a combination consisting of an
adjective modifying a noun.)
As we have seen, categorical sentences can themselves be combined with connectives. Another
example is:
If every dog is well-fed, and every dog is an animal, and every animal is happy, then every dog is
both well-fed and happy.
This is a complex of categorical sentences:
If ∀x(Dx → Fx) and ∀y(Dy → Ay) and ∀z(Az → Hz) then ∀z(Dz → Fz ∧ Hz)
that is:
∀x(Dx → Fx) ∧∀y(Dy → Ay) ∧∀z(Az → Hz) → ∀z(Dz → Fz ∧ Hz)
Sometimes a sentence is apparently ambiguous, but variable binding resolves the ambiguity. This
happens in the example
Each dog is happy unless it isn't well-fed
We decided above to include the 'unless' as part of the consequent of the quantified conditional. We
might try instead to make 'unless' be the major connective:
EXERCISES
2. Suppose that `A' stands for `is a U.S. state', `C' for `is a city', `L' for `is a capital', and `E' for `is in the
Eastern time zone'. What are the truth values of these sentences?
a. ∀x(Cx → Lx)
b. ∃x(Cx ∧ Lx)
c. ∃x(Cx ∧ Lx ↔ Ex)
d. ∀x(Cx ∧ Ex → Ax)
e. ~∃x(Ax ∧ Ex)
f. ∃x(Cx ∧ Ex) ∧ ∃x(Cx ∧ ~Ex)
g. ∃x(Cx ∧ Ex ∧ Ax)
h. ~∃x(Cx ∧ ~Cx)
3. Symbolize the following sentences:
a. All giraffes are spotted.
b. All clever giraffes are spotted.
c. No clever giraffes are spotted.
d. Every giraffe is either spotted or drab.
e. Some giraffes are clever.
f. Some spotted giraffes are clever.
g. Some giraffes are clever and some aren't.
h. Some spotted giraffes aren't clever.
i. No spotted giraffe is clever but every unspotted one is.
j. Every clever spotted giraffe is either wise or foolhardy.
k. Either all spotted giraffes are clever, or all clever giraffes are spotted.
l. Every clever giraffe is foolhardy.
m. If some giraffes are wise then not all giraffes are foolhardy.
n. All giraffes are spotted if and only if no giraffes aren't spotted.
o. Nothing is both wise and foolhardy.
Recall that the effect of 'only' on 'if' is to reverse antecedent and consequent. Something like that occurs
here too; compare the sentences:
All dogs are happy ∀x(Dx → Hx)
Only dogs are happy ∀x(Hx → Dx)
They look pretty much the same except that the antecedent and consequent of the quantified conditional
are switched.
Dogs are happy and frisky; giraffes are happy, but only the well-fed ones are frisky.
∀x(Dx → Hx ∧ Fx) ∧ ∀x(Gx → Hx) ∧ ∀x(Gx ∧ Fx → Ex)
(Using 'E' for 'is well-fed'.)
Notice that the last conjunct is not symbolized as:
∀x(Fx → Gx ∧ Ex)
This would say that everything that is frisky is a well-fed giraffe, which is not what is intended. The point is
that among giraffes only the well-fed ones are frisky. The last conjunct could also be symbolized as:
∀x(Gx → (Fx → Ex))
EXERCISES
4. Symbolize these sentences. If a sentence is ambiguous, give all pertinent symbolizations.
a. Only friendly elephants are handsome
b. If only elephants are friendly, no giraffes are friendly
c. Only the brave are fair.
d. If only elephants are friendly then every elephant is friendly
e. All and only elephants are friendly.
f. If every elephant is friendly, only friendly animals are elephants
g. If any elephants are friendly, all and only giraffes are nasty
h. Among spotted animals, only giraffes are handsome.
i. Among spotted animals, all and only giraffes are handsome
j. Only giraffes frolic if annoyed.
5D RELATIVE CLAUSES
Relative clauses modify nouns, as adjectives do, although relative clauses are typically more complex.
There are two sorts of relative clause: restrictive and non-restrictive, illustrated by:
Non-restrictive Dogs, which are frisky, are cute
Restrictive Dogs which are frisky are cute
Non-restrictive relative clauses do not affect the noun they follow; instead they are used to insert a
comment in addition to what the main sentence says. The main sentence of the non-restrictive example is
that dogs are cute, and the additional comment is that they are frisky. The entire sentence is used to
make both of these claims. If we want to capture the whole content of a sentence with a non-restrictive
relative clause the best we can do is to conjoin the two claims:
Dogs are frisky ∧ Dogs are cute ∀x(Dx → Fx) ∧ ∀x(Dx → Cx)
A restrictive relative clause restricts the content of the noun to which it is adjoined. In the restrictive
example above, it is frisky dogs that are said to be cute, not dogs in general. The symbolization is:
Dogs which are frisky are cute ∀x(Dx ∧ Fx → Cx)
You can usually tell a non-restrictive relative clause, for it is set off from its surroundings by commas
before and after it. When there are no commas, we assume in this text that the reading is restrictive.
Restrictive relative clauses are like adjectives, in that in logical form they are conjoined with the noun that
they modify. In the above example 'dogs which are frisky' becomes the conjunction 'Dx ∧ Fx'. When the
relative clause is more complex, it gives you something complex to conjoin to the part originating with the
noun that is modified. This is seen in:
Every dog which is neither cute nor frisky is not happy.
∀x(Dx ∧ ~(Cx ∨ Fx) → ~Hx)
EXERCISES
5. Symbolize these sentences.
a. Every giraffe which frolics is happy
b. Only giraffes which frolic are happy
c. Only giraffes are animals which are long-necked.
d. If only giraffes frolic, every animal which is not a giraffe doesn't frolic.
e. Some giraffe which frolics is long-necked or happy.
f. No giraffe which is not happy frolics and is long-necked.
g. Some giraffe is not both long-necked and happy.
Sometimes a universal quantification originates with an English indefinite article 'a' or 'an'. This happens
in:
A dog that is well-fed is happy.
This sentence is most naturally treated as conveying a universal claim, that any dog that is well-fed is
happy:
∀x(Dx ∧ Fx → Hx)
This is in spite of the fact that the indefinite article often conveys an existential claim, as in:
A girl left early
∃x(Gx ∧ Lx)
A good test for this is whether the indefinite article can be paraphrased by 'each'; this is natural in the first
example, but not in the second.
A more interesting case is when an indefinite article occurs inside a sentence, indicating a universal
quantification with scope over the whole sentence. This happens in:
If a dog is well-fed, it is happy
This appears to be a conditional of the form:
a dog is well-fed → it is happy
But that won't do, since there is nothing to bind the variable that comes from the 'it' in the consequent.
Instead, the indefinite article indicates a universal quantification of dog, with the rest of the sentence within
its scope. That is, it has the form:
∀x(x is a dog → (x is well-fed → x is happy))
∀x(Dx → (Fx → Hx))
The idea that indefinite phrases sometimes correspond to universal quantifiers with wide scope applies
also to plural indefinites -- to plural nouns or noun phrases which have no article or quantifier word before
them. An example is:
If dogs are well-fed, then they are happy
∀x(x is a dog → (x is well-fed → x is happy))
∀x(Dx → (Fx → Hx))
EXERCISES
Rule ui (universal instantiation): The first rule is simple; it says that if everything satisfies a certain
condition, any particular thing satisfies that condition. That is, from any universally quantified formula one
may infer the result of removing the initial quantifier, and replacing every occurrence of the variable that it
was binding by a name letter or by a variable:
Every occurrence of 'x' that '∀x' was binding must be replaced with the same name or
variable.
An example of this rule is to validate the argument from 'everything is either mental or physical' to
'Disneyland is either mental or physical':
∀x(Mx ∨ Px)
∴ Ma ∨ Pa by rule ui
A more typical application would be to use rule ui to validate an inference like this:
Every giraffe is happy
Fido is a giraffe
∴ Fido is happy
∀x(Gx → Hx)
Gf
∴ Hf
The universal instantiation step takes us from "everything is such that if it is a giraffe then it is happy" to "if
Fido is a giraffe then Fido is happy". Modus ponens does the rest.
In using rule ui the quantifier must be on the front of the formula and it must have scope over the whole
formula. If it has a narrower scope, then it is fallacious to apply the rule. For example, this inference is
not permitted:
∀xFx → Fg If everything is happy, Gertrude is happy (logically true)
∴ Fb → Fg If Betty is happy, Gertrude is happy (not logically true)
The existential quantifier that is put on the front must have scope over the whole formula. If the formula
you start with is in informal notation, you may need to restore the dropped parentheses before applying the
rule.
Here is a little derivation that uses both of these rules. It validates the argument:
Every dog is happy
Fido is a dog
∴ Something is happy
∀x(Dx → Hx)
Df
∴ ∃xHx
1. Show ∃xHx
2. Df → Hf pr1 ui
3. Hf 2 pr2 mp
4. ∃xHx 3 eg dd
There is a difference between Rules ui and eg. When using rule ui, you must replace every occurrence of
the variable that the initial quantifier binds with a name or variable. For example, you cannot do this:
∀x(Dx → Hx)
∴ Dx → Hb
That is:
Everything is such that if it is a dog then it is happy.
∴ If it is a dog then Bob is happy
Rule eg is different. When using rule eg you needn't replace all of the occurrences. For example, from:
Bob is happy or Bob is sad
you may infer
Something is such that Bob is happy or it is sad.
This conclusion looks odd, but it should be clear that it follows logically.
There is a constraint on both of these rules: there must be no "capturing". If a new variable appears in the
conclusion of either rule that was not there previously, it must not be "captured" by a quantifier in the
formula. Specifically, if a new variable appears, none of its new occurrences may be bound by a quantifier
already in the formula. For example, this use of rule eg is not permitted:
Df ∧ ∀x(Hf → Gx)
∴ ∃x(Dx ∧ ∀x(Hx → Gx)) the universal quantifier captures the variable 'x' that replaces the
second 'f'
No capturing:
When using rule ui or rule eg a new variable must not be introduced if some of its
new occurrences are bound by a quantifier in the original formula.
You will not often encounter cases of capturing; they usually happen by accident. The possibility of
capturing can be avoided by always choosing a variable that does not already occur in the formula.
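Since always choosing a brand-new variable suffices, the precaution can even be automated. Here is a minimal sketch in Python (the representation of a formula as a plain string is an assumption made for illustration):

    # Pick a variable that cannot be captured: one that does not occur
    # anywhere in the formula. Variables are the small letters 'i' to 'z',
    # as in this text's notation.
    def fresh_variable(formula: str) -> str:
        for v in "ijklmnopqrstuvwxyz":
            if v not in formula:
                return v
        raise ValueError("no fresh variable is left")

    print(fresh_variable("Df ∧ ∀x(Hf → Gx)"))   # 'i' -- safe to generalize on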
EXERCISES
The reader should check to see that each of the new rules is properly used.
This derivation illustrates an important strategy rule. Often you will have an opportunity to apply ei to
introduce a variable, and then use ui to instantiate to that variable. In the derivation just given, ei
introduces 'u' on line 2 and ui is used twice to instantiate to 'u', on lines 3 and 6. The strategy rule is that
when this is a possibility, you should always apply rule ei before you apply rule ui.
Strategy hint: When using both ei and ui to instantiate to the same variable, apply
rule ei before rule ui.
This is because if you try using ui first, you will not then be able to use ei to instantiate to the same
variable, because the variable will not then be new. For example, suppose that you started the above
derivation with:
1. Show ∃z(Dz ∧ Ez)
2. Fu → Bu pr3 ui
3. Du ∧ Fu pr2 ei
Line 3 is fallacious because you have instantiated to 'u', but 'u' has already occurred in the derivation,
which violates the constraint that the variable used in ei must be new.
(In doing this derivation recall that 'P ∧ Q ∧ R' is informal notation for '((P ∧ Q) ∧R)'.)
Notice that the ei step in line 2 precedes the ui steps in lines 3 and 8, and that the ei step in line 12
precedes the ui steps in lines 14 and 16.
EXERCISES
2. Here is a fallacious derivation to validate this argument:
∃x(Nx ∧ Ex) some number is even
∃x(Nx ∧ Ox) some number is odd
∴ ∃x(Nx ∧ Ox ∧ Ex) some number is both odd and even
Identify the error in the derivation.
1. Show ∃x(Nx ∧ Ox ∧ Ex)
2. Nz ∧ Ez pr1 ei
3. Nz ∧ Oz pr2 ei
4. Nz ∧ Oz ∧ Ez 2 s 3 adj
5. ∃x(Nx ∧ Ox ∧ Ex) dd
3. Produce derivations for each of the following (be careful to obey the strategy rule just given):
a. theorem T202: ∴ ∀x(Fx → Gx) → (∃xFx → ∃xGx)
b. half of T203: ∴ ∃x~Fx → ~∀xFx
c. half of T204: ∴ ∀x~Fx → ~∃xFx
T201 ∀x(Fx → Gx) → (∀xFx → ∀xGx)
T202 ∀x(Fx → Gx) → (∃xFx → ∃xGx)
T203 ~∀xFx ↔ ∃x~Fx
T204 ~∃xFx ↔ ∀x~Fx
The requirement that x be completely arbitrary is realized by the technical requirement that 'x' shall not
have occurred free anywhere in the derivation available from the show line, or in a premise cited in such a
line.
The reasoning suggested above may now be incorporated into a derivation like this:
∀x(Dx → Mx) Every dog is a mammal
∀y(My → Ay) Every mammal is an animal
∴ ∀z(Dz → Az) ∴ Every dog is an animal
The reader should check that this derivation meets the conditions necessary for a ud derivation.
In a previous exercise we proved half of theorem 203. The other half of T203 is more difficult, but we can
do it using a universal derivation.
∴ ~∀xFx → ∃x~Fx
It is easy to begin the derivation, setting up a conditional derivation:
1. Show ~∀xFx → ∃x~Fx
2. ~∀xFx ass cd
3. ?????
EXERCISES
1. Produce derivations for each of the following (be careful to obey the strategy rule just given):
a. theorem T201: ∴ ∀x(Fx → Gx) → (∀xFx → ∀xGx)
b. half of T204: ∴ ~∃xFx → ∀x~Fx (similar to the derivation of half of T203)
c. half of theorem T205: ∴ ∀xFx → ~∃x~Fx
Strategy hint: When a line is available that begins with a universal or existential
quantifier, apply an instantiation rule, ei or ui, to derive an instance.
When the conclusion is a universally quantified formula, it will very likely be derived by using a universal
derivation. When a universal derivation is used, it is usually best to set up that derivation as early as
possible. Consider this example:
Every jaguar is a fast cat
Every cat is an animal
∴ Every jaguar is a fast animal.
∀x(Jx → Fx ∧ Cx)
∀x(Cx → Ax)
∴ ∀x(Jx → Fx ∧ Ax)
Once line 2 has been shown, line 1 may be obtained by universal derivation. The complete derivation is:
1. Show ∀x(Jx → Fx ∧ Ax)
2. Show Jx → Fx ∧ Ax
3. Jx ass cd
4. Jx → Fx ∧ Cx pr1 ui
5. Fx ∧ Cx 3 4 mp
6. Cx → Ax pr2 ui
7. Ax 5 s 6 mp
8. Fx ∧ Ax 5 s 7 adj cd
9. 2 ud
When the conclusion has both universal and existential quantifiers, the strategy is essentially to combine
those above, applying whichever strategy is relevant at the time. Consider this argument:
For every giraffe, there is a leopard which is happy if and only if it (the giraffe) is.
For every leopard, there is a monkey that is happy if and only if it (the leopard) is.
∴ For every giraffe, there is a monkey which is happy if and only if it (the giraffe) is.
∀x(Gx → ∃y(Ly ∧ (Hy ↔ Hx)))
∀x(Lx → ∃y(My ∧ (Hy ↔ Hx)))
∴ ∀x(Gx → ∃y(My ∧ (Hy ↔ Hx)))
The conclusion to be shown is universally quantified, so set up a universal derivation. In fact, this should
generally be done as early as possible.
It is often convenient to immediately follow the show line containing ∀x□ by another containing □.
This is done in line 2 here:
1. Show ∀x(Gx → ∃y(My ∧ (Hy ↔ Hx)))
2. Show Gx → ∃y(My ∧ (Hy ↔ Hx))
Line 2 is a conditional, so try conditional derivation:
3. Gx ass cd
Universally instantiating the first premise and using modus ponens is a natural thing to try:
4. Gx → ∃y(Ly ∧ (Hy ↔ Hx)) pr1 ui
5. ∃y(Ly ∧ (Hy ↔ Hx)) 3 4 mp
We now have derived an existentially quantified formula, and there are some universally quantified ones in
the premises. Generally, when both rules ei and ui are possible, as we stated above, you should use rule
ei first. This is because rule ei introduces a variable which must be brand new in the derivation. If you do
ei first, then you can do ui using the variable introduced by ei. But if you do ui first, you cannot do ei using
that variable. In our derivation, the "ei before ui" strategy is relevant. Apply ei to line 5 using a variable
that does not already occur in the derivation:
6. Lz ∧ (Hz ↔ Hx) 5 ei
We can now make use of our second premise to get:
7. Lz → ∃y(My ∧ (Hy ↔ Hz)) pr2 ui
We can obviously use line 6 to get the consequent of line 7. That consequent is also existentially
quantified, so we apply ei:
8. ∃y(My ∧ (Hy ↔ Hz)) 6 s 7 mp
9. Mu ∧ (Hu ↔ Hz) 8 ei
Now look over what we have and what we want. We are in a conditional derivation, and we need to show
'∃y(My ∧ (Hy ↔ Hx))' to complete that derivation. This formula is existentially quantified, and so we will
probably derive it by existentially generalizing something. That is, we will existentially generalize
something of the form:
M_ ∧ (H_ ↔ Hx)
We already have something very close to that, on line 9; we could get what we want by deriving a formula
just like line 9 but with 'x' instead of 'z'. So suppose we try to derive 'Mu ∧ (Hu ↔ Hx)'. We already have
the left conjunct, so the job is to derive the right conjunct 'Hu ↔ Hx'. This is a biconditional, so we need to
derive two conditionals, probably by conditional derivation, and then put them together by cb. That in fact
is easy to do:
10. Show Hu → Hx
11. Hu ass cd
12. Hz 9 s bc 11 mp
13. Hx 6 s bc 12 mp cd
14. Show Hx → Hu
15. Hx ass cd
16. Hz 6 s bc 15 mp
17. Hu 9 s bc 16 mp cd
18. Hu ↔ Hx 10 14 cb
19. Mu ∧ (Hu ↔ Hx) 9 s 18 adj
20. ∃y(My ∧ (Hy ↔ Hx)) 19 eg cd
Line 2 has now been shown by the conditional derivation. Now we only need to add line 21, and box and
cancel, finishing the universal derivation.
21. 2 ud
EXERCISES
1. Symbolize these arguments and provide derivations to validate them. Give an explicit scheme of
abbreviation for each.
a. If history is right, then if anyone was strong, Hercules was strong.
Only those who work out are strong, and only those with self-discipline work out.
∴ If Hercules does not have self-discipline, then either history is not right or nobody is strong.
b. If some giraffes are not happy, then all giraffes are morose.
Some giraffes ponder the mysteries of life.
∴ If some giraffes are not morose, then some who ponder the mysteries of life are happy.
c. There is not a single critic who either likes art or can paint.
Some level-headed people are critics.
Anyone who can't paint is uneducated.
∴ Some level-headed people are uneducated.
d. No astronaut is a good dancer.
Every singer is warm-blooded.
If something is warm-blooded and is not a good dancer, then nothing that is either a singer or
an astronaut is exultant.
∴ If some astronaut is a singer, then no singer is exultant.
e. All students who have a sense of humor or are brilliant seek fame.
Anyone who seeks fame and is brilliant is insecure.
Whoever is a mogul is brilliant.
∴ Every student who is a mogul is insecure.
f. There is a monkey that is happy if and only if some giraffe is happy.
There is a monkey that is happy if and only if some giraffe is not happy.
All monkeys are happy.
∴ It is not the case that either every giraffe is happy or none are.
g. For every astronaut that writes poetry, there is one that doesn't.
For every astronaut that doesn't write poetry, there is one that does.
∴ If there are any astronauts, some write poetry and some don't.
This kind of indirect strategy is typical of how to handle derivations with sentences that begin with negated
quantifiers when we use only our basic rules for quantifiers. However, it is usually more useful to use
some derived rules that let us replace initial negated quantifiers by unnegated ones of the opposite sort,
which may be used directly. The rule called quantifier negation does this. It lets you replace a negated
initial quantifier by the opposite quantifier followed by a negation. If we lump in all applications of double
negation, we get eight cases; from a formula of either form in each of the following pairs, one may infer the other:
~∀x□ and ∃x~□
~∃x□ and ∀x~□
~∀x~□ and ∃x□
~∃x~□ and ∀x□
These derived rules are based on T203-206, which are given in the last set of exercises.
Here is how we can use rule qn to shorten the derivation above. We begin as before:
∀x(Ax → Bx)
~∃x(Bx ∧ Cx)
∴ ∀x(Ax → ~Cx)
The advantage is not just that the derivation is two lines shorter, but the reasoning is simpler, and it is
easier to think up. For that reason we have this strategy hint:
Strategy hint: If an available formula begins with a negation sign immediately followed
by a quantifier which has scope over the rest of the formula, convert it to a more useful
formula by applying rule qn to it.
Here is another example of the use of rule qn. We are given this argument to validate:
~∃x(Ax ∧ Bx)
∀y(Ay ↔ ~Cy)
∀y(Dy → By)
~∀xCx
∴ ∃x~Dx
Neither the first nor the fourth premise may be used as an input to one of the basic quantifier rules.
However, rule qn turns them into useful forms.
1. Show ∃x~Dx
2. ∃x~Cx pr4 qn
3. ~Ck 2 ei
4. Ak ↔ ~Ck pr2 ui
5. Ak 4 bc 3 mp
6. ∀x~(Ax ∧ Bx) pr1 qn
7. ~(Ak ∧ Bk) 6 ui
8. ~Ak ∨ ~Bk 7 dm
9. ~Bk 5 dn 8 mtp
10. Dk → Bk pr3 ui
11. ~Dk 9 10 mt
12. ∃x~Dx 11 eg dd
As an example, from
∀z(Dz ∧ Ez → ∃u(Du ∨ Fz))
you may infer
∀w(Dw ∧ Ew → ∃u(Du ∨ Fw)).
But you may not infer
∀u(Du ∧ Eu → ∃u(Du ∨ Fu))
because that violates the no capturing rule.
Because we used 'x' instead of 'u', we did not encounter any capturing problems in applying rule ui. Now
we merely apply rule av to line 2, and we are done:
1. Show ∀u(Du → ~Eu)
2. Show ∀w(Dw → ~Ew)
3. Show Dw → ~Ew
4. Dw ass cd
5. Show ~Ew
6. Ew ass id
7. Dw ∧ Ew → ∃u(Du ∧ Fw) pr1 ui
8. ∃u(Du ∧ Fw) 4 6 adj 7 mp
9. Ds ∧ Fw 8 ei
10. Fw 9s
11. Dw → ~Fw pr2 ui
12. ~Fw 4 11 mp 10 id
13. 3 ud
14. ∀u(Du → ~Eu) 2 av dd
EXERCISES
1. Provide derivations for these arguments.
a. ~∃x(Ax ∨ Bx)
∀x∀y(Gx ∧ Hy → By)
∃xGx
∴ ∀x~Hx
b. ∃x(Hx ∧ ~∃y(Gy ∧ Hx))
∴ ∀y~Gy
c. ∀x(Ax → ∀y(Bx ↔ By))
∃zBz
∴ ∀y(Ay → By)
d. ~∀x(Dx ∨ Ex)
∃x(Fx ↔ ~Ex) → ∀zDz
∴ ∃x~Fx
e. Jc ∧ ~Jd
∀xKx ∨ ∀x~Kx
∃x(Jx ∧ Kx) → ∀x(Kx → Jx)
∴ ~Kc
2. Provide derivations for these theorems:
T229 ∃x(∃xFx → Fx)
T230 ∃x(Fx → ∃xFx)
T234 ∀x((Fx → Gx) ∧ (Gx → Hx) → (Fx → Hx))
T235 ∀x(Fx → Gx) ∧ ∀x(Gx → Hx) → ∀x(Fx → Hx)
T236 ∀x(Fx ↔ Gx) ∧ ∀x(Gx ↔ Hx) → ∀x(Fx ↔ Hx)
T237 ∀x(Fx → Gx) ∧ ∀x(Fx → Hx) → ∀x(Fx → Gx ∧ Hx)
T238 ∀xFx → ∃xFx
T242 ~∀x(Fx → Gx) ↔ ∃x(Fx ∧ ~Gx)
T243 ~∃x(Fx ∧ Gx) ↔ ∀x(Fx → ~Gx)
T248 ∃xFx ∧ ∃x~Fx ↔ ∀x∃y(Fx ↔ ~Fy)
(An argument that is not MPC valid may nonetheless be valid, if its validity is due to something in addition to how it
is built up from names, variables, monadic predicates, quantifiers, and connectives. Some examples of
this are:
Some boy fed every cat <Uses the two-place predicate 'fed'>
∴ Every cat was fed by a boy
There are infinitely many prime numbers <Uses the quantifier 'infinitely many'>
∴ There is at least one prime number.
Dr. Jekyll is tall <Uses 'is' in the sense of identity>
Dr. Jekyll is Mr. Hyde
∴ Mr. Hyde is tall
Even though MPC validity is not the whole story, it remains an important kind of validity.)
So far in this chapter we have learned how to show that arguments are MPC valid by means of giving
derivations which validate the arguments. We have not yet focused on how to show that an argument is
not MPC valid. To do that we may describe a logically possible situation in which the argument has true
premises and a false conclusion. It is convenient in doing this to consider very "small" situations -- that is,
situations in which only a small number of things exist. To illustrate this, suppose we are given this
argument:
There are some fibers
Every fiber is green
Something isn't green
∴ Everything green is a fiber
Its MPC form is:
∃xFx
∀x(Fx → Gx)
∃x~Gx
∴ ∀x(Gx → Fx)
Now consider the following "small" situation:
There are three things:
The first is a fiber; the others are not.
The first and the second are green; the third is not.
In this situation the first premise, '∃xFx', is true because the first thing is a fiber. The second premise,
'∀x(Fx → Gx)', is true because there is only one fiber, and it is green. The third premise is true because
something isn't green (the third thing). The conclusion is false because not everything that is green is a
fiber: the second thing is green but is not a fiber. We can record this situation compactly as follows:
Universe: 0 1 2
F: {0}
G: {0, 1}
This information describes a counter-example for the original argument, because it describes, in minimal
terms, the structure of a situation in which the premises of the argument are true and the conclusion false.
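Such a check can also be carried out mechanically. Here is a minimal sketch in Python (the premises and conclusion of this one argument are hard-coded, and predicates are represented as sets, an assumption made for illustration):

    # The proposed counter-example: universe {0, 1, 2}, F true of 0, G true of 0 and 1.
    universe = [0, 1, 2]
    F = {0}        # the fibers
    G = {0, 1}     # the green things

    premise1 = any(x in F for x in universe)                       # ∃xFx
    premise2 = all((x not in F) or (x in G) for x in universe)     # ∀x(Fx → Gx)
    premise3 = any(x not in G for x in universe)                   # ∃x~Gx
    conclusion = all((x not in G) or (x in F) for x in universe)   # ∀x(Gx → Fx)

    print(premise1, premise2, premise3, conclusion)   # True True True False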
Here are some more arguments that are not MPC valid, and counter-examples for them.
Counter-example #2:
∃x(Fx ∧ ~Gx)
∀x(Hx → ~Gx)
∃x(Hx ∧ Fx)
∴ ∀x(Fx → ~Gx)
Universe: 0 1 2
F: {0, 1}
G: {0, 2}
H: {1}
The first premise is true in this interpretation because 'F' is true of 1 and 'G' isn't. The second premise is
true because everything that 'H' is true of, namely 1, 'G' is not true of, and the third premise is true
because both 'F' and 'H' are true of 1. But the conclusion is not true, because not everything that 'F' is true
of is something that 'G' is not true of; 0 is an example. (Removing 2 from the universe will also yield a
counter-example.)
Universe: 0 1 2
F: {0}
G: {1}
H: {1, 2}
If you check through the parts of the argument, you will see that the premises are all true and the
conclusion false.
Sometimes if you start with no predicate being true of anything, a counter-example falls into your lap.
Here is such a case. The argument is:
∀x(Jx → Kx ∨ Hx)
~∀x(~Kx → Jx)
~∃x(Kx ∧ ~Hx)
Hc → ∃xJx
∴ ~∃x(Hx ∨ ~Jx)
Begin with this minimal proposed counter-example:
Universe: 0
H: { }
J: { }
K: { }
c: 0
Let us see what we need to add to what the predicates are true of to make this a counter-example. The
first premise is already true because it is a quantified conditional with an antecedent that is false for each
thing in the universe. The second premise is true because the sentence it negates, '∀x(~Kx → Jx)', is false. This is
false because the part following the quantifier: '~Kx → Jx' is not true for every way of treating 'x' like a
name; it is false when 'x' stands for 0. The third is true because there is nothing that is K. The fourth is
true because it is a conditional with a false antecedent. And the conclusion is false because there is
indeed something that is either H or not J; 0 is not J, so it is either H or not J. In short, the counter-
example works as stated. (Usually, of course, more work will be needed.)
(Exercise for the reader: In the above calculation we have supposed that if there is a counter-example, we
can find one using a universe of the maximum size considered. We have ignored the possibility that there is, say, a
counter-example using a universe of size 3 but none using a universe of size 4. Why are we justified in
making that assumption?)
EXERCISES
1. Give counter-examples for each of the following arguments.
a. ∀x(Ax → ∃y(By ∧ ~Ay))
~∀xBx
~∃x(Bx ∧ Cx)
∴ ∃x(Ax ∧ Cx)
b. ∃x(Dx ∧ Ex ∧ ~Fx)
∃x(~Dx ∧ ~Ex)
∀x(Ex → Dx ∨ Fx)
∴ ∀x(Dx ∧ Ex → ~Fx)
c. ∃x(Fx ∧ Gx)
∃x(Fx ∧ ~Gx)
∃x(~Fx ∧ Gx)
∴ ∀x(~Fx → Gx) <requires more than three things in the universe>
d. ∀x∃y(Fx ↔ (Gy ∨ Fx))
∴ ~∃xFx → ~∃xGx
e. Ha ∧ ~Hb
∀x(Kx → Hx ∧ Jx)
∃x(Jx ∧ ~Kx)
∴ ∃x(Hx ∧ ~Jx)
Consider this argument:
∀x∃y(Ax ↔ ~Ay)
∃x(Ax ∧ Bx)
∴ ∀xAx
and this proposed counter-example:
Universe: 0 1 2
A: {0}
B: {0}
It is clear that this makes the conclusion false, and the second premise true. What about the first
premise? It makes that true too. The first premise says that every thing in the universe is such that there
is a thing in the universe such that it isn't A if the first thing is A, and it is A if the first thing isn't. This is in
fact true in the counter-example. But this may not be obvious to you. If not, there is a mechanical way to
answer such a question. It resembles truth tables in that it will automatically give you a yes or no answer,
but it may involve complexity. The technique is based on the idea that if there are a small number of
things in the universe, then a universally quantified claim is equivalent to a conjunction of unquantified
claims got by removing the quantifier and applying each resulting claim to a thing in the universe. And an
existentially quantified claim, in turn, is equivalent to a disjunction of such claims that are applied to each
thing in the universe.
Let us introduce a convention for naming things in a universe. When there are three things the names will
be 'i0', 'i1', and 'i2', where:
'i0' stands for 0
'i1' stands for 1
'i2' stands for 2.
(If there are fewer things, leave out 'i2', or both 'i1' and 'i2'. If there are more things add 'i3', 'i4', and so on.)
Now consider the sentence '∀xAx'. This says that everything in the universe is A. This is equivalent to
saying that the first thing is A and the second thing is A and the third thing is A. That is, it is equivalent to
the conjunction:
∀xAx is equivalent to Ai0 ∧ Ai1 ∧ Ai2
It is easy to check that this conjunction is false, because not all conjuncts are true.
The second premise is '∃x(Ax ∧ Bx)'. This is equivalent to saying that either the first thing is both A and B,
or the second thing is, or the third. That is, the quantified sentence is equivalent to this disjunction:
∃x(Ax ∧ Bx) is equivalent to (Ai0 ∧ Bi0) ∨ (Ai1 ∧ Bi1) ∨ (Ai2 ∧ Bi2)
It is easy to check that this disjunction is true, because at least one disjunct is true; the first disjunct is true.
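These expansions can also be checked by machine: a universal quantifier becomes all(...) over the universe, and an existential becomes any(...). A minimal sketch in Python (sets for the extensions are an assumption made for illustration):

    universe = [0, 1, 2]
    A = {0}
    B = {0}

    all_A = all(x in A for x in universe)                    # ∀xAx, i.e. Ai0 ∧ Ai1 ∧ Ai2
    some_A_and_B = any(x in A and x in B for x in universe)  # ∃x(Ax ∧ Bx) as a disjunction
    print(all_A, some_A_and_B)   # False True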
The first premise, '∀x∃y(Ax ↔ ~Ay)', is more interesting. It is universally quantified, so it is equivalent to
the following conjunction:
∃y(Ai0 ↔ ~Ay) ∧ ∃y(Ai1 ↔ ~Ay) ∧ ∃y(Ai2 ↔ ~Ay)
It is not immediately obvious whether this is true, so we check each conjunct.
The first conjunct is true because there is something which is not A if and only if 0 is A. We know
that 0 is A, and there is indeed at least one thing which is not A; for example, 1 is not A.
The second conjunct is true because there is something which is not A if and only if 1 is A. We
know that 1 is not A, and there is indeed at least one thing which is A; for example, 0 is A.
The third conjunct is true for the same reason: 2 is not A, and there is at least one thing which is A, namely 0. So the first premise is true, and the proposed interpretation is indeed a counter-example.
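This conjunct-by-conjunct reasoning can itself be mechanized: the outer universal becomes all(...) and the inner existential becomes any(...), nested one inside the other. A minimal sketch in Python, continuing the same set representation (an assumption made for illustration):

    universe = [0, 1, 2]
    A = {0}

    # ∀x∃y(Ax ↔ ~Ay): for each x there is a y such that x is A iff y is not A
    premise = all(any((x in A) == (y not in A) for y in universe)
                  for x in universe)
    print(premise)   # True, as the conjunct-by-conjunct check showed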
If the counter-example has a universe of only one thing, then this device is very easy to apply. Consider
this argument and the accompanying counter-example:
∀x∀y(Jx ↔ ∃z(Kz ↔ Jy))
∴ ∀xJx
Universe: 0
J: { }
K: {0}
It is clear that the conclusion is false, because 'J' is not true of 0. The premise is universally quantified, so
it is equivalent to a conjunction of all of its instances using names of things in the universe. Since there is
only one thing in the universe, this conjunction has only one conjunct. So:
∀x∀y(Jx ↔ ∃z(Kz ↔ Jy)) is equivalent to ∀y(Ji0 ↔ ∃z(Kz ↔ Jy))
and this in turn is equivalent to 'Ji0 ↔ ∃z(Kz ↔ Ji0)', that is, to 'Ji0 ↔ (Ki0 ↔ Ji0)'. 'Ji0' is false and
'Ki0' is true, so 'Ki0 ↔ Ji0' is false; the whole biconditional thus pairs two falsehoods, and is true. So the
premise is true and the conclusion false, and the proposed counter-example works.
Here is one more example. The argument is:
∀x∃y(Fx ∨ Gy)
~∀xFx
~∀xGx
∴ ~∃xGx
and the proposed counter-example is:
Universe: 0 1
F: {0}
G: {1}
It is pretty clear that this proposed counter-example makes the conclusion false, since something is G,
namely, 1. The third premise is true since not everything is G; 0 isn't G. Likewise, the second premise is
true since not everything is F; 1 is not F. What about the first? If you are not certain, you can expand it.
In this proposed counter-example, the sentence '∀x∃y(Fx ∨ Gy)', which starts with a universal quantifier, is
equivalent to this conjunction:
∃y(Fi0 ∨ Gy) ∧ ∃y(Fi1 ∨ Gy)
Each of the existentially quantified sentences is equivalent to a disjunction, so we have:
((Fi0 ∨ Gi0) ∨ (Fi0 ∨ Gi1)) ∧ ((Fi1 ∨ Gi0) ∨ (Fi1 ∨ Gi1))
evaluating the parts, we have:
(Fi0 ∨ Gi0) ∨ (Fi0 ∨ Gi1) = (true ∨ false) ∨ (true ∨ true) = true
(Fi1 ∨ Gi0) ∨ (Fi1 ∨ Gi1) = (false ∨ false) ∨ (false ∨ true) = true
Each conjunct is true, so the sentence is itself true.
EXERCISES
1. For each of the following arguments, use the method of expansions to determine whether the
interpretation below is a counter-example for it.
Universe: 0 1 2
F: {0}
G: {0, 2}
H: {2}
a: 2
b: 0
b. ∃x(Gx ∧ Hx ∧ ~Fx)
∃x(~Gx ∧ ~Hx)
∀x(Hx → Gx ∨ Fx)
∴ ∀x(Gx ∧ Hx → ~Fx)
c. ∃x(Fx ∧ Gx)
∃x(Fx ∧ ~Gx)
∃x(~Fx ∧ Gx)
∴ ∀x(~Fx → Gx)
d. ∀x∃y(Fx ↔ (Gy ∨ Fx))
∴ ~∃xFx → ~∃xGx
e. Ha ∧ ~Hb
∀x(Fx → Hx ∧ Gx)
∃x(Gx ∧ ~Fx)
∴ ∃x(Hx ∧ ~Gx)
Every occurrence of 'x' that '∀x' was binding must be replaced with the same name or
variable.
A new variable must not be introduced if some of its new occurrences are bound by a
quantifier in the original formula.
Universal derivation:
If you have a derivation of the following form:
Show ∀x . . . x . . . x . . .
:::::
:::::
...x...x...
Then if there are no uncancelled show lines in between the first and last lines
displayed, and if 'x' does not occur free anywhere in the derivation that is available
from the show line (or in a premise that has been cited on an available line), you may
box and cancel, using the notation 'ud'.
All of the strategy hints from chapters 1 and 2 still apply. These are new:
Universal Quantification ∀x□
Set up a universal derivation: write a show line containing '∀x□', and then immediately follow this with a show line containing '□'. When the second show is cancelled, use rule ud to cancel the first.
Or write a show line with '∀x□', and then assume '~∀x□' for an indirect derivation. Turn this into '∃x~□', and proceed from there.
Existential Quantification ∃x□
Derive an instance and then use rule eg.
Or write a show line with '∃x□', and then assume '~∃x□' for an indirect derivation. Turn this into '∀x~□', and proceed from there.
Negation of a Universal Quantification ~∀x□
State a show line with '~∀x□', and then assume '∀x□' for an indirect derivation.
Or derive '∃x~□' and apply derived rule qn.
Negation of an Existential Quantification ~∃x□
State a show line with '~∃x□', and then assume '∃x□' for an indirect derivation.
Or derive '∀x~□' and apply derived rule qn.
Use rule av if necessary: If you are having difficulty with capturing when you use rule ui or ei, change
what you are trying to derive to an alphabetic variant. Complete the derivation, and then use derived rule
av to convert this into a derivation of what you are after.
CHAPTER 3 THEOREMS
LAWS OF DISTRIBUTION:
T201 ∀x(Fx → Gx) → (∀xFx → ∀xGx)
T202 ∀x(Fx → Gx) → (∃xFx → ∃xGx)
T207 ∃x(Fx ∨ Gx) ↔ ∃xFx ∨ ∃xGx
T208 ∀x(Fx ∧ Gx) ↔ ∀xFx ∧ ∀xGx
T209 ∃x(Fx ∧ Gx) → ∃xFx ∧ ∃xGx
T210 ∀xFx ∨ ∀xGx → ∀x(Fx ∨ Gx)
T211 (∃xFx → ∃xGx) → ∃x(Fx → Gx)
T212 (∀xFx → ∀xGx) → ∃x(Fx → Gx)
T213 ∀x(Fx ↔ Gx) → (∀xFx ↔ ∀xGx)
T214 ∀x(Fx ↔ Gx) → (∃xFx ↔ ∃xGx)
LAWS OF CONFINEMENT
T215 ∀x(P∧Fx) ↔ P∧∀xFx
T216 ∃x(P∧Fx) ↔ P∧∃xFx
T217 ∀x(P∨Fx) ↔ P∨∀xFx
T218 ∃x(P∨Fx) ↔ P∨∃xFx
T219 ∀x(P→Fx) ↔ (P→∀xFx)
T220 ∃x(P→Fx) ↔ (P→∃xFx)
T221 ∀x(Fx→P) ↔ (∃xFx→P)
T222 ∃x(Fx→P) ↔ (∀xFx→P)
T223 ∀x(Fx↔P) → (∀xFx→P)
T224 ∀x(Fx↔P) → (∃xFx→P)
T225 (∃xFx↔P) → ∃x(Fx↔P)
T226 (∀xFx↔P) → ∃x(Fx↔P)
Answers to Exercises -- Chapter 3
SECTION 1
1. a. Fred is an orangutan.
Of
b. Gertrude is an orangutan but Fred isn't.
Gertrude is an orangutan [and] Fred is not [an orangutan].
Og ∧ ~Of
c. Tony Blair will speak first.
Fb
d. Gary lost weight recently; he is happy.
Gary lost weight recently [and] [Gary] is happy.
Lg ∧ Hg
e. Felix cleaned and polished.
Felix cleaned and [Felix] polished.
Cf ∧ Of
f. Darlene or Abe will bat clean-up.
Darlene [will bat clean-up] or Abe will bat clean-up.
Bd ∨ Ba
a. Eileen and Cosi both live in Brea.
Eileen lives in Brea [and] Cosi lives in Brea.
Le ∧ Lc
SECTION 2
1. For each of the following, say whether it is a formula in official notation, or in informal notation, or not a
formula at all. If it is a formula, parse it.
a. Official notation
~∀x(Fx → (Gx ∧ Hx))
|
∀x(Fx → (Gx ∧ Hx))
|
(Fx → (Gx ∧ Hx))
/\
Fx (Gx ∧ Hx)
/\
Gx Hx
b. Informal notation
∃x~~Gx → Hx ∨ ∃yGy
/\
∃x~~Gx Hx ∨ ∃yGy
| /\
~~Gx Hx ∃yGy
| |
~Gx Gy
|
Gx
c. Official notation
~(Gx ↔ ~Hx)
|
(Gx ↔ ~Hx)
/\
Gx ~Hx
|
Hx
e. Informal notation
Fa → (Gb ↔ Hc)
/\
Fa (Gb ↔ Hc)
/\
Gb Hc
f. Not a formula; a variable can only occur in an atomic formula or a quantifier phrase, and never by itself.
g. Informal notation
∀x(Gx ↔ Hx) → Ha ∧ ∃zKz
/\
∀x(Gx ↔ Hx) Ha ∧ ∃zKz
| /\
Gx ↔ Hx Ha ∃zKz
/\ |
Gx Hx Kz
SECTION 3
1. a. Sentence ∃x(Fx ∧ ∀y(Gy ∨ Hx))
SECTION 4
1. a. Something is a sofa and is well built. There is a well-built sofa.
b. Everything is such that if it is a sofa then it is well-built. All sofas are well-built.
c. Everything is either a sofa or is well-built. Everything is a sofa, unless it's well-built.
d. Something is such that it is not a sofa. Something isn't a sofa.
e. Everything is such that it is not a sofa. There are no sofas.
f. Everything is such that if it is both well-built and a sofa, then it is comfortable. Every well-built sofa is
comfortable.
g. Something is comfortable and everything is well-built.
h. Something is such that if it is comfortable, then everything is well-built.
2. Assume that all giraffes are friendly, and that some giraffes are clever and some aren't.
a. ∀x(Gx → Fx) True, since all giraffes are friendly.
b. ∀x(Gx → Cx) False, since not every giraffe is clever.
c. ∃x(~Fx ∧ Gx) False, since every giraffe is friendly.
d. ∃y(Fy ∧ Cy) True, since giraffes are friendly, and some of them are clever.
e. ∃z(Gz ∧ Cz) True, since some giraffes are clever.
f. ∀x(Gx → ~Gx) False, since not every giraffe isn't a giraffe. (In fact, no giraffe isn't a giraffe, but it
only takes one to falsify the symbolic sentence.)
SECTION 5a
1. a. Every Handsome Elephant is Friendly.
∀x((Hx ∧ Ex) → Fx)
b. No handsome elephant is friendly.
~∃x((Hx ∧ Ex) ∧ Fx)
c. Some elephants are not handsome.
∃x(Ex ∧ ~Hx)
d. Some handsome elephants are friendly.
∃x((Hx ∧ Ex) ∧ Fx)
e. Each friendly elephant is handsome.
∀x((Fx ∧ Ex) → Hx)
f. A handsome elephant is not friendly.
∃x((Hx ∧ Ex) ∧ ~Fx)
g. No friendly elephant is handsome.
~∃x((Fx ∧ Ex) ∧ Hx)
SECTION 5b
1. Suppose that `A' stands for `is a U.S. state', `C' for `is a city', `L' for `is a capital', and `E' for `is in the
Eastern time zone'. What are the truth values of these sentences?
a. ∀x(Cx → Lx) --- False; Los Angeles is a city but not a capital.
b. ∃x(Cx ∧ Lx) --- True; Sacramento is a city and a capital.
c. ∃x(Cx ∧ Lx ↔ Ex) --- True, because something makes the biconditional true, by making both
sides false. For example, Los Angeles is not a capital, and it is not in the Eastern time zone.
d. ∀x(Cx ∧ Ex → Ax) --- False; Philadelphia is not a state.
e. ~∃x(Ax ∧ Ex) --- False; Delaware is a state in the Eastern time zone.
f. ∃x(Cx ∧ Ex) ∧ ∃x(Cx ∧ ~Ex) --- True; Philadelphia is a city in the Eastern time zone and LA is a
city outside the eastern time zone.
g. ∃x(Cx ∧ Ex ∧ Ax) --- False; no city is also a state.
h. ~∃x(Cx ∧ ~Cx) --- True. There is no city which isn't a city.
2. a. All Giraffes are spOtted.
∀x(Gx → Ox)
b. All Clever giraffes are spotted.
∀x(Gx ∧ Cx → Ox)
c. No clever giraffes are spotted.
~∃x(Gx ∧ Cx ∧ Ox)
d. Every giraffe is either spotted or Drab.
∀x(Gx → (Ox ∨ Dx))
e. Some giraffes are clever.
∃x(Gx ∧ Cx)
f. Some spotted giraffes are clever.
∃x(Ox ∧ Gx ∧ Cx)
g. Some giraffes are clever and some aren't.
Some giraffes are clever and some [giraffes are not clever].
∃x(Gx ∧ Cx) ∧ ∃x(Gx ∧ ~Cx)
h. Some spotted giraffes aren't clever.
∃x(Ox ∧ Gx ∧ ~Cx)
i. No spotted giraffe is clever but every unspotted one is.
No spotted giraffe is clever [and] every un-spotted [giraffe] is [clever].
~∃x(Ox ∧ Gx ∧ Cx) ∧ ∀x(~Ox ∧ Gx → Cx)
j. Every clever spotted giraffe is either wIse or Foolhardy.
∀x(((Cx ∧ Ox) ∧ Gx) → (Ix ∨ Fx))
k. Either all spotted giraffes are clever, or all clever giraffes are spotted.
∀x(Ox ∧ Gx → Cx) ∨ ∀x(Cx∧Gx → Ox)
l. Every clever giraffe is foolhardy.
∀x(Cx ∧ Gx → Fx)
m. If some giraffes are wise then not all giraffes are foolhardy.
∃x(Gx ∧ Ix) → ~∀x(Gx → Fx)
n. All giraffes are spotted if and only if no giraffes aren't spotted.
∀x(Gx → Ox) ↔ ~∃x(Gx ∧ ~Ox)
o. Nothing is both wise and foolhardy.
~∃x(Ix ∧ Fx)
SECTION 5c
1. a. Only Friendly Elephants are Handsome (ambiguous)
i. ∀x(Hx → (Fx ∧ Ex))
ii. ∀x((Ex ∧ Hx) → Fx)
b. If only elephants are friendly, no Giraffes are friendly
∀x(Fx → Ex) → ~∃x(Gx ∧ Fx)
c. Only the Brave are fAir.
∀x(Ax → Bx)
d. If only elephants are friendly then every elephant is friendly
∀x(Fx → Ex) → ∀x(Ex → Fx)
e. All and only elephants are friendly.
All elephants are friendly [and] Only elephants are friendly.
∀x(Ex → Fx) ∧ ∀x(Fx → Ex)
f. If every elephant is friendly, only friendly Animals are elephants (ambiguous)
i. ∀x(Ex → Fx) → ∀x(Ex → (Fx ∧ Ax))
ii. ∀x(Ex → Fx) → ∀x((Ex ∧ Ax) → Fx)
g. If any elephants are friendly, all and only giraffes are nasty
If some elephants are friendly, (all giraffes are Nasty and only giraffes are nasty)
∃x(Ex ∧ Fx) → (∀x(Gx → Nx) ∧ ∀x(Nx → Gx))
h. Among spOtted animals, only giraffes are handsome.
∀x(Ox → (Hx → Gx))
i. Among spotted animals, all and only giraffes are handsome
∀x(Ox → ((Gx → Hx) ∧ (Hx → Gx)))
j. Only giraffes frolic if annoyed.
If a thing froLics if aNnoyed, it is a giraffe.
∀x((Nx → Lx) → Gx)
SECTION 5d
1. Symbolize these sentences.
a. Every Giraffe which Frolics is Happy
∀x(Fx ∧ Gx → Hx)
b. Only giraffes which frolic are happy (ambiguous)
i. ∀x(Gx ∧ Hx → Fx)
ii. ∀x(Hx → Gx ∧ Fx)
c. Only giraffes are Animals which are Long-necked.
∀x(Ax ∧ Lx → Gx)
d. If only giraffes frolic, every animal which is not a giraffe doesn't frolic.
∀x(Fx → Gx) → ∀x(Ax ∧ ~Gx → ~Fx)
e. Some giraffe which frolics is long-necked or happy.
∃x((Fx ∧ Gx) ∧ (Lx ∨ Hx))
f. No giraffe which is not happy frolics and is long-necked.
~∃x((~Hx ∧ Gx) ∧ (Fx ∧ Lx))
g. Some giraffe is not both long-necked and happy.
∃x(Gx ∧ ~(Lx ∧ Hx))
SECTION 5e
1. a. If a Giraffe is Happy then it Frolics unless it is Lame.
∀x(Gx ∧ Hx → Fx ∨ Lx)
b. A Monkey frolics unless it is not happy.
∀x(Mx → Fx ∨ ~Hx)
c. Among giraffes, only happy ones frolic.
∀x(Gx → (Fx → Hx))
d. All and only giraffes are happy if they are not lame.
∀x(Gx ↔ (~Lx → Hx))
e. A giraffe frolics only if it is happy.
∀x(Gx ∧ Fx → Hx) or ∀x(Gx → (Fx→Hx))
f. Only giraffes frolic if happy.
∀x((Hx → Fx) → Gx)
g. All monkeys are happy if some giraffe is.
∃x(Gx ∧ Hx) → ∀x(Mx → Hx)
h. Cute monkeys frolic.
∀x(Cx ∧ Mx → Fx)
i. Giraffes ruN and frolic if and only if they are Blissful and Exultant.
∀x(Gx → (Nx ∧ Fx ↔ Bx ∧ Ex))
j. If those who are heAlthy are not lame, then if they are exultant, they will frolic.
∀x((Ax → ~Lx) → (Ex → Fx))
k. Only giraffes and monkeys are blissful and exultant.
∀x(Bx ∧ Ex → Gx ∨ Mx)
l. The brave(I) are happy.
∀x(Ix → Hx)
m. If a giraffe frolics, then no monkey is blissful unless it is.
∀x((Gx ∧ Fx) → (Bx ∨ ~∃y(My ∧ By)))
n. Giraffes and monkeys frolic if happy.
∀x(Gx ∨ Mx → (Hx → Fx))
SECTION 6
1. a. The sky is Blue
Everything that is blue is prEtty
∴ Something is pretty
Be
∀x(Bx → Ex)
∴ ∃xEx
1 Show ∃xEx
2 Be → Ee pr2 ui
3 Ee 2 pr1 mp
4 ∃xEx 3 eg dd
2. The error is at line 3. It is not permissible to use ei to get an instance of pr2 with the variable z, because z
already occurs on line 2; this would violate the restriction on ei that the instantiating variable be new.
SECTION 8
1. Symbolize these arguments and provide derivations to validate them. Give an explicit scheme of
abbreviation for each.
a. If history is right (P), then if anyone was strOng, Hercules was strong.
Only those who work out (M) are strong, and only those with self-Discipline work out.
∴ If Hercules does not have self-discipline, then either history is not right or nobody is strong.
P → (∃xOx → Oh)
∀x(Ox → Mx) ∧ ∀x(Mx → Dx)
∴ ~Dh → (~P ∨ ~∃xOx)
b. If some Giraffes are not Happy, then all giraffes are Morose.
Some giraffes pOnder the mysteries of life.
∴ If some giraffes are not morose, then some who ponder the mysteries of life are happy.
∃x(Gx ∧ ~Hx) → ∀x(Gx → Mx)
∃x(Gx ∧ Ox)
∴ ∃x(Gx ∧ ~Mx) → ∃x(Ox ∧ Hx)
1 Show ∃x(Gx ∧ ~Mx) → ∃x(Ox ∧ Hx)
2 ∃x(Gx ∧ ~Mx) ass cd
3 Gi ∧ ~Mi 2 ei
4 Show ~∀x(Gx → Mx)
5 ∀x(Gx → Mx) ass id
6 Gi → Mi 5 ui
7 Mi 3 s 6 mp
8 ~Mi 3 s 7 id
9 ~∃x(Gx ∧ ~Hx) 4 pr1 mt
10 Gj ∧ Oj pr2 ei
11 Show Hj
12 ~Hj ass id
13 Gj ∧ ~Hj 10 s 12 adj
14 ∃x(Gx ∧ ~Hx) 13 eg
15 ~∃x(Gx ∧ ~Hx) 9 r id
16 Oj ∧ Hj 10 s 11 adj
17 ∃x(Ox ∧ Hx) 16 eg cd
c. There is not a single Critic who either Likes art or can pAint.
Some level-Headed peOple are critics.
Anyone who can't paint is unEducated.
∴ Some level-headed people are uneducated.
∀x(Cx → ~(Lx ∨ Ax))
∃x((Hx ∧ Ox) ∧ Cx)
∀x(Ox → (~Ax → ~Ex))
∴ ∃x((Hx ∧ Ox) ∧ ~Ex)
e. All stuDents who have a sense of Humor or are Brilliant seek Fame.
Anyone who seeks fame and is brilliant is Insecure.
Whoever is a Mogul is brilliant.
∴ Every student who is a mogul is insecure.
∀x((Dx ∧ (Hx ∨ Bx)) → Fx)
∀x(Fx ∧ Bx → Ix)
∀x(Mx → Bx)
∴ ∀x((Dx ∧ Mx) → Ix)
1 Show ∀x((Dx ∧ Mx) → Ix)
2 Show (Dx ∧ Mx) → Ix
3 Dx ∧ Mx ass cd
4 Dx 3s
5 Mx 3s
6 Mx → Bx pr3 ui
7 Bx 5 6 mp
8 Hx ∨ Bx 7 add
9 Dx ∧ (Hx ∨ Bx) 4 8 adj
10 (Dx ∧ (Hx ∨ Bx)) → Fx pr1 ui
11 Fx 9 10 mp
12 Fx ∧ Bx 7 11 adj
13 Fx ∧ Bx → Ix pr2 ui
14 Ix 12 13 mp cd
15 2 ud
f. There is a Monkey that is Happy if and only if some Giraffe is happy.
There is a monkey that is happy if and only if some giraffe is not happy.
All monkeys are happy.
∴ It is not the case that either every giraffe is happy or none are.
∃x(Mx ∧ (Hx ↔ ∃x(Gx ∧ Hx)))
∃x(Mx ∧ (Hx ↔ ∃x(Gx ∧ ~Hx)))
∀x(Mx → Hx)
∴ ~(∀x(Gx → Hx) ∨ ∀x(Gx → ~Hx))
1 Show ~(∀x(Gx → Hx) ∨ ∀x(Gx → ~Hx))
2 ∀x(Gx → Hx) ∨ ∀x(Gx → ~Hx) ass id
3 Mi ∧ (Hi ↔ ∃x(Gx ∧ Hx)) pr1 ei
4 Mj ∧ (Hj ↔ ∃x(Gx ∧ ~Hx)) pr2 ei
5 Mi → Hi pr3 ui
6 Mj → Hj pr3 ui
7 Hi 3 s 5 mp
8 Hj 4 s 6 mp
9 ∃x(Gx ∧ Hx) 3 s bc 7 mp
10 ∃x(Gx ∧ ~Hx) 4 s bc 8 mp
11 Gk ∧ Hk 9 ei
12 Gm ∧ ~Hm 10 ei
13 Show ~∀x(Gx → Hx)
14 ∀x(Gx → Hx) ass id
15 Gm → Hm 14 ui
16 Hm 12 s 15 mp
17 ~Hm 12 s id
18 ∀x(Gx → ~Hx) 2 13 mtp
19 Gk → ~Hk 18 ui
20 ~Hk 11 s 19 mp
21 Hk 11 s id
g. For every Astronaut that writes pOetry, there is one that doesn't.
For every astronaut that doesn't write poetry, there is one that does.
∴ If there are any astronauts, some write poetry and some don't.
∀x((Ax ∧ Ox) → ∃x(Ax ∧ ~Ox))
∀x((Ax ∧ ~Ox) → ∃x(Ax ∧ Ox))
∴ ∃xAx → ∃x(Ax ∧ Ox) ∧ ∃x(Ax ∧ ~Ox)
SECTION 9
1. a. ~∃x(Ax ∨ Bx)
∀x∀y(Gx ∧ Hy → By)
∃xGx
∴ ∀x~Hx
1 Show ∀x~Hx
2 ~∀x~Hx ass id
3 ∃xHx 2 qn
4 Hi 3 ei
5 Gj pr3 ei
6 Gj ∧ Hi 4 5 adj
7 Gj ∧ Hi → Bi pr2 ui ui
8 Bi 6 7 mp
9 ∀x~(Ax ∨ Bx) pr1 qn
10 ~(Ai ∨ Bi) 9 ui
11 ~Ai ∧ ~Bi 10 dm
12 ~Bi 11 s 8 id
b. ∃x(Hx ∧ ~∃y(Gy ∧ Hx))
∴ ∀y~Gy
1 Show ∀y~Gy
2 ~∀y~Gy ass id
3 ∃yGy 2 qn
4 Hi ∧ ~∃y(Gy ∧ Hi) pr1 ei
5 Hi 4s
6 ~∃y(Gy ∧ Hi) 4s
7 Gj 3 ei
8 Gj ∧ Hi 5 7 adj
9 ∃y(Gy ∧ Hi) 8 eg 6 id
d. ~∀x(Dx ∨ Ex)
∃x(Fx ↔ ~Ex) → ∀zDz
∴ ∃x~Fx
1 Show ∃x~Fx
2 ~∃x~Fx ass id
3 ∀xFx 2 qn
4 ∃x~(Dx ∨ Ex) pr1 qn
5 ~(Di ∨ Ei) 4 ei
6 ~Di ∧ ~Ei 5 dm
7 ~Di 6s
8 ~Ei 6s
9 Show Fi → ~Ei
10 ~Ei 8 r cd
11 Show ~Ei → Fi
12 Fi 3 ui cd
13 Fi ↔ ~Ei 9 11 cb
14 ∃x(Fx ↔ ~Ex) 13 eg
15 ∀zDz 14 pr2 mp
16 Di 15 ui 7 id
e. Jc ∧ ~Jd
∀xKx ∨ ∀x~Kx
∃x(Jx ∧ Kx) → ∀x(Kx → Jx)
∴ ~Kc
1 Show ~Kc
2 Kc ass id
3 Show ~∀x~Kx
4 ∀x~Kx ass id
5 ~Kc 4 ui
6 Kc 2 r 5 id
7 ∀xKx 3 pr2 mtp
8 Jc ∧ Kc pr1 s 2 adj
9 ∃x(Jx ∧ Kx) 8 eg
10 ∀x(Kx → Jx) 9 pr3 mp
11 Kd → Jd 10 ui
12 Kd 7 ui
13 Jd 11 12 mp
14 ~Jd pr1 s 13 id
SECTION 10
1. a. ∀x(Ax → ∃y(By ∧ ~Ay))
~∀xBx
~∃x(Bx ∧ Cx)
∴ ∃x(Ax ∧ Cx)
Universe: {1, 2, 3}
A: {1}
B: {2}
C: {3}
b. ∃x(Dx ∧ Ex ∧ ~Fx)
∃x(~Dx ∧ ~Ex)
∀x(Ex → Dx ∨ Fx)
∴ ∀x(Dx ∧ Ex → ~Fx)
Universe: {1, 2, 3}
D: {1, 2}
E: {1, 2}
F: {1}
c. ∃x(Fx ∧ Gx)
∃x(Fx ∧ ~Gx)
∃x(~Fx ∧ Gx)
∴ ∀x(~Fx → Gx) <requires more than three things in the universe>
Universe: {1, 2, 3, 4}
F: {2, 3}
G: {1, 2}
e. Ha ∧ ~Hb
∀x(Kx → Hx ∧ Jx)
∃x(Jx ∧ ~Kx)
∴ ∃x(Hx ∧ ~Jx)
Universe: {1, 2}
H: {1}
J: {1,2}
K: { }
a --- 1
b --- 2
SECTION 11
1. For each of the following arguments, use the method of expansions to determine whether the
interpretation below is a counter-example for it.
Universe: 0 1 2
F: {0}
G: {0, 2}
H: {2}
a: 2
b: 0
b. ∃x(Gx ∧ Hx ∧ ~Fx)
∃x(~Gx ∧ ~Hx)
∀x(Hx → Gx ∨ Fx)
∴ ∀x(Gx ∧ Hx → ~Fx)
c. ∃x(Fx ∧ Gx)
∃x(Fx ∧ ~Gx)
∃x(~Fx ∧ Gx)
∴ ∀x(~Fx → Gx)
The second premise expands to:
(Fi0 ∧ ~Gi0) ∨ (Fi1 ∧ ~Gi1) ∨ (Fi2 ∧ ~Gi2)
which is false because each disjunct is false. Since we have a false premise we don't have a
counterexample.
d. ∀x∃y(Fx ↔ (Gy ∨ Fx))
∴ ~∃xFx → ~∃xGx
e. Ha ∧ ~Hb
∀x(Fx → Hx ∧ Gx)
∃x(Gx ∧ ~Fx)
∴ ∃x(Hx ∧ ~Gx)
The second premise expands to:
(Fi0 → Hi0 ∧ Gi0) ∧ (Fi1 → Hi1 ∧ Gi1) ∧ (Fi2 → Hi2 ∧ Gi2)
which is false because the first conjunct has a true antecedent and a false consequent. Since we have a
false premise, we don't have a counterexample.
Chapter Four
Many-Place Predicates
In chapter 3 we studied the concept of formal validity in the Monadic Predicate Calculus, validity that is
due to the logical forms that can be expressed using names, variables, connectives, quantifiers, and one-
place predicates. In this chapter the restriction to monadic (one-place) predicates is lifted. We are now
focusing simply on Predicate Calculus formal validity: validity that is due to the logical forms that can be
expressed using logical signs plus predicates of any number of places.
1 MANY-PLACE PREDICATES
In earlier chapters we used predicate letters that combine with one name to make a sentence:
Antarctica is peaceful Ea
Fido is a giraffe Gf
Cynthia ran Ac
There are also expressions that combine with two names to form sentences:
Andria is taller than Bill
Cynthia is a friend of David
Fred sees Bella
or with three names, or more:
Cary gave Fido to Andy
Egbert sent Beatrice to Compton
Fred drove Anna to Chicago with David
To accommodate these expressions we use predicate letters that are followed by two or more names or
variables enclosed in parentheses. Some examples with names are:
Andria is taller than Bill T(ab)
Cynthia is a friend of David F(cd)
Fred sees Bella S(fb)
Cary gave Fido to Andy G(cfa)
Egbert sent Beatrice to Compton S(ebc)
These are atomic sentences, on a par with atomic sentences consisting of a single sentence letter or of a
predicate letter followed by a name. They also occur with variables to form atomic formulas:
Andria is taller than x T(ax)
x is a friend of y F(xy)
z sees Bella S(zb)
Cary gave x to y G(cxy)
z sent u to v S(zuv)
We can use any capital letter for a many-place predicate (adding subscripts if desired). You can tell
whether a predicate letter is being used as a one-place predicate or a many-place predicate by seeing
what follows it. If it is followed by a single name or variable, it is being used as a one-place predicate; if it
is followed by a pair of parentheses containing names or variables, it is being used as a many-place
predicate.
An atomic formula is now either a sentence letter alone, or a one-place predicate letter followed by a
name or variable, or any predicate letter followed by a pair of parentheses containing any number of
names or variables.
These new atomic formulas combine with connectives and quantifiers as in the previous chapter, yielding
formulas such as:
∃xA(bx)
∀yB(yy)
∀x(Ax → B(xd))
etc
We are now using parentheses for two different purposes: to surround the terms following a many-place
predicate symbol, and to surround molecular formulas. It is common to use either parentheses or square
brackets for the latter purpose, and using square brackets instead of parentheses sometimes increases
readability. So we will often write complex formulas as follows:
∀x[Ax → B(xd)]
∀x∃y[[A(xy)→B(yx)] ↔ A(yx)∧B(yx)]
∃xF(ax) ↔ ∃y[G(ay) ∧ G(yx)]
Our official account of formulas is now:
A sentence letter is any capital letter between 'P' and 'Z' (perhaps with a subscript).
A one-place predicate is any capital letter between 'A' and 'O' (perhaps with a subscript).
A many-place predicate is any capital letter between 'A' and 'Z' (perhaps with a subscript).
An atomic formula is:
a sentence letter alone,
a one-place predicate letter followed by one name or one variable, or
any predicate letter followed by a pair of parentheses containing any number of
names or variables.
EXERCISES
1. Which of the following are formulas in official notation? Which are formulas in informal notation?
Which are not formulas at all?
a. ~~F(xa)
b. [∀xG(bx) → ~∃yG(yx)]
c. ∀xG(bx) → ~∃yG(yx)
d. ~Fa ∧ ~G(aa) ∧~Fb ∧ Gxb
e. ~F(a) ∨ ~G(ab)
f. ~Fa ∨ ~Gab
g. ~∃x[~Fx → ∀yG(yy)]
h. ∃x∀y~Fxy
i. ∃x∃yF[xy]
S(): sent to
With this understanding, both of the English sentences above would be symbolized:
S(adb)
Symbolizing complex sentences with many-place predicates mostly involves the same techniques as
those we used earlier. For example, when there is one quantificational expression and a name, the
symbolizations follow the same patterns as before. Some examples:
Every giraffe is happy ∀x[Gx → Hx]
Every giraffe sees Fido ∀x[Gx → S(xf)]
Some dog is spotted ∃x[Dx ∧ Sx]
Some dog loves Bobby ∃x[Dx ∧ L(xb)]
The pattern is similar when the name is the subject of the sentence:
Fido sees every dog ∀x[Dx → S(fx)]
Bobby loves some dog ∃x[Dx ∧ L(bx)]
When there are two quantificational expressions, the translations may often be produced in stages:
Some dog likes every cat Partial translation: ∃x[Dx ∧ x likes every cat]
Then 'x likes every cat' is handled just as if 'x' were a name:
x likes every cat ∀y[Cy → L(xy)]
The whole sentence then has the form:
∃x[Dx ∧ ∀y[Cy → L(xy)]]
Some examples with three-place predicates, using ‘G()’ for ‘ gave to ’:
Some nurse gave a doll to a child
∃x[Nx ∧ x gave a doll to a child]
∃x[Nx ∧ ∃y[Dy ∧ x gave y to a child]]
∃x[Nx ∧ ∃y[Dy ∧ ∃z[Cz ∧ x gave y to z]]]
∃x[Nx ∧ ∃y[Dy ∧ ∃z[Cz ∧ G(xyz)]]]
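The staged translation has a direct computational analogue: each existential quantifier becomes an any(...) nested inside the previous one, and the three-place predicate becomes a set of triples. A minimal sketch in Python (the little model, with one nurse, one doll, and one child, is invented for illustration):

    universe = ["n1", "d1", "c1"]
    N, D, C = {"n1"}, {"d1"}, {"c1"}      # nurses, dolls, children
    G = {("n1", "d1", "c1")}              # n1 gave d1 to c1

    # ∃x[Nx ∧ ∃y[Dy ∧ ∃z[Cz ∧ G(xyz)]]]
    sentence = any(x in N and
                   any(y in D and
                       any(z in C and (x, y, z) in G for z in universe)
                       for y in universe)
                   for x in universe)
    print(sentence)   # True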
Sometimes in English the wording is unclear regarding which quantificational expression has wider
scope; that is, which quantificational expression has the other within its scope. For example, the
sentence:
Some freshman dated every sophomore
can be read in two ways. One is the "super-dater" reading, which says that there is a certain freshman
who dated every sophomore. Its symbolization is:
∃x[Fx ∧ x dated every sophomore] ∃x[Fx ∧ ∀y[Oy → D(xy)]]
Here, the quantifier '∃x' which originates from the 'some freshman' has widest scope, and the '∀y' which
originates from the 'every sophomore' is within the scope of '∃x'.
The other reading expresses the more natural situation, which merely says that for every sophomore,
some freshman dated him/her:
∀y[Oy → some freshman dated y] ∀y[Oy → ∃x[Fx ∧ D(xy)]]
In this symbolization the '∀y' which originates from the 'every sophomore' has widest scope, and the
quantifier '∃x' which originates from the 'some freshman' is within the scope of '∀y'.
Consider the sentence 'Every ambulance went to a location in a mall'. Using ‘I()’ for ‘ is in ’ and ‘W()’ for ‘ went to ’, this could mean that all the ambulances went
to the same location:
∃y[My ∧ ∃x[Lx ∧ I(xy) ∧ ∀z[Az → W(zx)]]]
"there is a mall, and a location in it, and every ambulance went there"
This gives the '∃y' widest scope, and within its scope the '∃x' has wider scope than the '∀z'. Or it could
mean that they were sent to locations in the same mall, though not necessarily to the same location:
∃y[My ∧ ∀z[Az → ∃x[Lx ∧ I(xy) ∧ W(zx)]]]
"there is a mall and every ambulance was sent to some location in it"
In this symbolization the '∃y' still has widest scope, but the '∃x' and '∀z' are interchanged. Or it could
merely mean that each ambulance was sent to some location in some mall:
∀z[Az → ∃y[My ∧ ∃x[Lx ∧ I(xy) ∧ W(zx)]]]
"every ambulance is such that there is a mall and a location in it and the ambulance went there"
In this symbolization the '∀z' now has widest scope, and the '∃x' is within the scope of the '∃y'. All three
symbolizations have the same ingredients; they differ with respect to how those ingredients are arranged.
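The difference in arrangement is a difference in truth conditions, and a small model makes this visible. Here is a minimal sketch in Python (the mall, locations, and ambulances are invented for illustration) in which the two ambulances go to different locations in the same mall, so the first reading is false while the other two are true:

    malls = {"m"}
    locations = {"l1", "l2"}
    ambulances = {"z1", "z2"}
    I = {("l1", "m"), ("l2", "m")}    # location is in mall
    W = {("z1", "l1"), ("z2", "l2")}  # ambulance went to location

    same_location = any(any((x, y) in I and all((z, x) in W for z in ambulances)
                            for x in locations) for y in malls)
    same_mall = any(all(any((x, y) in I and (z, x) in W for x in locations)
                        for z in ambulances) for y in malls)
    each_some_mall = all(any(any((x, y) in I and (z, x) in W for x in locations)
                             for y in malls) for z in ambulances)

    print(same_location, same_mall, each_some_mall)   # False True True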
Certain words have as their main function indicating that the quantifier they occur with has a wide scope.
An example is 'certain' in:
Every reporter admired a certain car.
The 'certain' gives the existential quantifier with 'car' wide scope:
∃x[Cx ∧ ∀y[Ey → A(yx)]]
The phrase 'the same' can also work in this way, as in:
Every reporter admired the same car.
∃x[Cx ∧ ∀y[Ey → A(yx)]]
As in earlier chapters, some linguistic constructions have no obvious rationale in terms of their parts. An
example is the medieval example 'No man lectures in Paris unless he is a fool'. Here are different but
equivalent symbolizations (using 'L()' for ' lectures in ' and 'a' for 'Paris'):
∀x[Mx → ~L(xa)∨Fx]
~∃x[Mx∧L(xa)∧~Fx]
EXERCISES
1. Symbolize each of the following:
a. Hans sees every doctor but Amanda doesn't see any doctor.
b. Hans, who owns a dog, doesn't own a cat.
c. Hans loves Amanda but she doesn't love him.
d. Neither Hans nor Amanda has a cat.
f. Some hyena and some giraffe like each other.
g. Some giraffe likes every baboon.
h. Some giraffe that likes every baboon likes no hyena.
i. Some giraffe likes every baboon that likes no hyena
j. Some giraffe likes every baboon that likes it
k. Eileen resides in a big city. <use 'R()' for ' resides in '>
l. Eileen and Betty both reside in the same city.
m. If Hank resides in Brea then he attends UCLA; otherwise he doesn't attend UCLA.
n. If David and Hank both live in Brea then David attends a private school and Hank attends a public
school.
o. Nobody who comes from Germany attends a California school.
p. No giraffe likes Fido unless it is crazy
q. Nobody gives a book to a freshman unless it is inexpensive
3 DERIVATIONS
Adding many-place predicates to the notation has no effect on the rules of inference; they are already
adequate as they stand. Here are two examples, using familiar techniques.
Any giraffe that is taller than Harriet is taller than every zebra. Some giraffes aren't taller than
some zebras. So there is a giraffe that is not taller than Harriet.
∀x[Gx ∧ T(xh) → ∀y[Ey → T(xy)]]
∃x[Gx ∧ ∃y[Ey ∧ ~T(xy)]]
∴ ∃x[Gx ∧ ~T(xh)]
Betty scolded every dog that chased a cat. Betty is a jeweler. Some dog that chased Cleo was
grey. Cleo is a cat. So a jeweler scolded some grey dog.
∀x[Dx ∧ ∃y[Cy ∧ H(xy)] → S(bx)]
Jb
∃x[Dx ∧ H(xc) ∧ Gx]
Cc
∴ ∃x[Jx ∧ ∃y[Dy ∧ Gy ∧ S(xy)]]
When existential quantifiers combine with universal ones, a formula beginning with an existential
quantifier followed by a universal one is in general stronger than the corresponding formula with a universal
followed by an existential. This simple derivation illustrates this:
∃x∀yF(xy) something forces everything
∴ ∀y∃xF(xy) everything is forced by something
1. Show ∀y∃xF(xy)
2. Show ∃xF(xy)
3. ∀yF(uy) pr1 ei
4. F(uy) 3 ui
5. ∃xF(xy) 4 eg dd
6. 2 ud
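That the converse inference is not valid can be seen in a small model. Here is a minimal sketch in Python (the relation is invented for illustration): each thing forces itself, so everything is forced by something, yet nothing forces everything:

    universe = [0, 1]
    F = {(0, 0), (1, 1)}    # F(xy): x forces y

    exists_all = any(all((x, y) in F for y in universe) for x in universe)  # ∃x∀yF(xy)
    all_exists = all(any((x, y) in F for x in universe) for y in universe)  # ∀y∃xF(xy)
    print(exists_all, all_exists)   # False True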
STRATEGY HINTS: The strategy hints given at the end of chapters 2 and 3 remain unchanged.
They are repeated here for convenience.
Try to reason out the argument for yourself.
Begin with a sketch of an outline of a derivation, and then fill in the details.
Write down obvious consequences.
When no other strategy is obvious, try indirect derivation.
Negation of conditional
~(□ → ○)
Use nc to derive '□ ∧ ~○', then simplify and use the conjuncts singly.

Negation of biconditional
~(□ ↔ ○)
Use nb to turn this into '□ ↔ ~○', and use bc to get the corresponding conditionals.

Universal quantification
∀x□
Use a universal derivation. Or write a show line with '∀x□', and then assume '~∀x□' for an indirect
derivation. Turn this into '∃x~□', and proceed from there.

Existential quantification
∃x□
Derive an instance and then use rule eg. Or write a show line with '∃x□', and then assume '~∃x□' for an
indirect derivation. Turn this into '∀x~□', and proceed from there.

Negation of a universal quantification
~∀x□
State a show line with '~∀x□', and then assume '∀x□' for an indirect derivation. Or derive '∃x~□' and
apply derived rule qn.

Negation of an existential quantification
~∃x□
State a show line with '~∃x□', and then assume '∃x□' for an indirect derivation. Or derive '∀x~□' and
apply derived rule qn.
Use rule av if necessary: If you are having difficulty with capturing when you use rule ui or ei, change
what you are trying to derive to an alphabetic variant. Complete the derivation, and then use derived rule
av to convert this into a derivation of what you are after.
EXERCISES
Show each of the following arguments to be valid.
1. ∀x∀y∀z[S(xy) ∧ S(yz) → S(xz)]
S(bc) ∧ S(ab)
∴ S(ac)
3. ∀x∃yS(xy)
∀x∀y[Cx ∧ S(xy) → Dy]
∀x∀y[Dx ∧ S(yx) → Dy]
∴ ~∃x[Cx ∧ ~Dx]
4. ∃xEx ∧ ∃x~Ex
∀x∀y[Ex ∧ S(xy) → Ey]
∴ ∃x∃y~S(xy)
5. ∀x∀y[S(xy) ↔ S(yx)]
∃x∃y[Ax ∧ By ∧ S(xy)]
∴ ∃x∃y[By ∧ Ax ∧ S(yx)]
4 INTERCHANGE OF EQUIVALENTS
If we have a rule stating that a certain formula '□' is equivalent to another formula '○', then from
any available line whose formula contains '□' we may infer a line with the same formula but with
'□' changed to '○'. The justification consists of writing 'ie' followed by '/' and the name of the rule
giving the equivalence.
A rule establishes that one formula is equivalent to another if the rule can be applied to either to
infer the other. These rules all establish equivalents:
dn <double negation>
nc <negation of conditional>
cdj <conditional/disjunction>
dm <demorgan's>
nb <negation of biconditional>
qn <quantifier negation>
av <alphabetic variation>
Plus any rules based on a theorem that is biconditional in form.
Step 2 uses one of the cases of quantifier negation to change '∃y~Hy' in the premise to '~∀yHy'. Step 3
then uses a case of De Morgan's law to change '~ [Ax ∨ Bx]' on line 2 to '~Ax ∧ ~Bx'.
This rule also works when the equivalence of the formulas being replaced is given to us by a premise or
an earlier available line, as in this derivation:
P↔Q∧R
S ↔ ~P
∴ S → ~[Q∧R]
1. Show S → ~[Q∧R]
2. S ↔ ~[Q∧R] pr2 ie/pr1
3. S → ~[Q∧R] 2 bc dd
Here the first premise tells us that 'P' and 'Q∧R' are equivalent, so on line 2 we may replace 'P' in the
second premise by 'Q∧R'. (Step 3 is already familiar.) A full statement of rule IE is:
If we have a rule stating that a certain formula '□' is equivalent to another formula '○', then from
any available line whose formula contains '□' we may infer a new line with the same formula but
with '□' changed to '○'. The justification consists of writing 'ie' followed by '/' and the name of the
rule giving the equivalence.
A rule establishes that one formula is equivalent to another if the rule can be applied to either to
infer the other. These rules all establish equivalents:
dn <double negation>
nc <negation of conditional>
cdj <conditional/disjunction>
dm <demorgan's>
nb <negation of biconditional>
qn <quantifier negation>
av <alphabetic variation>
Plus any theorem that is biconditional in form.
Also:
If we have a premise or available line that is a biconditional of the form '□ ↔ ○', or '○ ↔ □',
then from any available line whose formula contains '□' we may infer a line with the same formula
but with '□' changed to '○'. The justification consists of writing 'ie' followed by '/' and the name of
the premise or line used.
EXERCISES
In the following derivations, several lines appeal to the rule for interchanging equivalents. Say which of
these lines are correct, and which incorrect. (In each case, when judging a given line assume that all
previous lines are OK.)
1. P↔Q∨R
~Q → ~S ∨ P
∴ R ∨ ~Q
1. Show R ∨ ~Q
2. ~~P ↔ Q ∨ R pr1 ie/dn
3. ~~P ↔ P 2 ie/pr2
4. ~~P 3 ie/dn
5. P 4 ie/dn
6. ~S ∨ P 5 add
7. ~Q 6 ie/pr2
8. R ∨ ~Q 7 add dd
2. ∀x∃y[Ax ↔ R(xy)]
∀z∀y[R(zy) ↔ S(yz)]
∀x[[Ax↔Ax] ↔ Ax]
∴ Au
1. Show Au
2. ∃y[Ax ↔ R(xy)] pr1 ui
3. Au ↔ R(xu) 2 ei
4. R(xu) ↔ S(ux) pr2 ui ui
5. Au ↔ S(ux) 4 ie/3
6. Au ↔ S(ux) 3 ie/4
7. Au ↔ Au 5 ie/6
8. [Au↔Au] ↔ Au pr3 ui
9. Au 7 ie/8 dd
A strengthened form of rule ie is also available. Suppose that you are given the following as a theorem or
as a premise or a formula on an available line:
∀x∀y∀z(□↔○)
Then you may use rule ie to replace '□' by '○' within a formula on an available line even if the variables
'x', 'y', and 'z' are bound in that formula. For example if you are in a derivation with the following pattern:
7. ∀x∀y(R(xy) ↔ Gx ∧ ~Hy)
:::::
:::::
13. ∀x∃y(~R(xy) ∧ Py)
Then you may replace ‘R(xy)’ in line 13 to get:
14. ∀x∃y(~(Gx ∧ ~Hy) ∧ Py) 13 ie/7
(In this explanation we have chosen three particular variables for illustration; any number may be used.)
A constraint on this rule is that there must be no other variables free in ‘□’ or ‘○’ which become bound
when the substitution is made.
This rule applies even when the variables used on the later line are different from those in the
biconditional line, so long as the biconditional line could be changed into a biconditional with the same
variables by repeated use of rule av in conjunction with ie. So given line 7 as above, this would also be
an allowable move:
13. ∀u∃v(~R(uv) ∧ Pv)
14. ∀u∃v(~(Gu ∧ ~Hv) ∧ Pv) 13 ie/7
5 BICONDITIONAL DERIVATIONS
When proving a biconditional informally, people sometimes give a string of equivalences. For example, to
show informally that this is a theorem:
∴ ~P ∧ ~~Q ↔ ~[P ∨ ~Q]
one might reason as follows:
By double negation, '~P ∧ ~~Q ' is equivalent to '~P ∧ Q',
and by De Morgan's laws, that is equivalent to '~[~~P ∨ ~Q]',
and again by double negation, that is equivalent to '~[P ∨ ~Q]'.
So the first (namely '~P ∧ ~~Q') is equivalent to the last (namely, '~[P ∨ ~Q]').
You can establish a biconditional by showing that one of its sides is equivalent to something, which is
equivalent to some further thing, etc, ending up with the other side of the biconditional. This idea can be
implemented by means of a new technique, called "biconditional derivation". It goes as follows:
Biconditional Derivations
Any show line with a biconditional formula '□ ↔ ○' may be followed by an assumption consisting of
a line containing either '□', or '○', justified by the notation 'ass bd' (meaning, "assumption for a
biconditional derivation").
A derivation may be continued so that each additional step follows from the immediately preceding
step by rule IE, so that eventually you reach a line containing '○' or '□' (whichever was not on the
assumption line). Then 'bd' may be written at the end of the last line; box all lines starting with the
assumption line, and cancel the 'show'.
(Alternative: As usual, you may end the derivation by writing an empty line following the line
containing '○' or '□', writing the line number of the previous line and 'bd'; then you box and cancel.)
Here is an example, deriving the theorem displayed above:

1. Show ~P ∧ ~~Q ↔ ~[P ∨ ~Q]
2. ~P ∧ ~~Q ass bd
3. ~P ∧ Q 2 ie/dn
4. ~[~~P ∨ ~Q] 3 ie/dm
5. ~[P ∨ ~Q] 4 ie/dn bd

Line 1 contains the biconditional to be shown. Line 2 assumes its left-hand side for the purpose of a
biconditional derivation. Lines 3-5 make inferences of formulas equivalent to that on line 2. Since this
series of IE steps ends up with the right-hand side of the biconditional on the show line, we conclude the
derivation, boxing and canceling.
The alternative form of the derivation is to delay until line 6 the "bd" justification with boxing and
cancelling:

1. Show ~P ∧ ~~Q ↔ ~[P ∨ ~Q]
2. ~P ∧ ~~Q ass bd
3. ~P ∧ Q 2 ie/dn
4. ~[~~P ∨ ~Q] 3 ie/dm
5. ~[P ∨ ~Q] 4 ie/dn
6. 5 bd
This new derivation technique is not essential, since whatever we can do with it we can also do without it.
But doing without it requires a longer derivation -- sometimes a much longer derivation. In particular, we
can always derive the biconditional by giving two conditional derivations, followed by an application of rule
cb. Following this pattern, we can convert the above derivation to one without rule bd as follows:

1. Show ~P ∧ ~~Q ↔ ~[P ∨ ~Q]
2. Show ~P ∧ ~~Q → ~[P ∨ ~Q]
3. ~P ∧ ~~Q ass cd
4. ~P ∧ Q 3 ie/dn
5. ~[~~P ∨ ~Q] 4 ie/dm
6. ~[P ∨ ~Q] 5 ie/dn cd
7. Show ~[P ∨ ~Q] → ~P ∧ ~~Q
8. ~[P ∨ ~Q] ass cd
9. ~[~~P ∨ ~Q] 8 ie/dn
10. ~P ∧ Q 9 ie/dm
11. ~P ∧ ~~Q 10 ie/dn cd
12. ~P ∧ ~~Q ↔ ~[P ∨ ~Q] 2 7 cb dd
Clearly, using biconditional derivation simplifies matters considerably. (The derivation could also be done
without any use of rule ie as well; this would make it much longer.)
It is important when applying rule bd that every step after the assumption is justified by rule ie. If other
rules are used, then even though every line follows correctly by an established rule, one cannot apply rule
bd to box and cancel. This is because bd derives an equivalence, and so only lines that infer
equivalences of previous lines are permitted. For example, the last line of this derivation is incorrect:
1. Show ~P ∧ ~~Q ↔ ~P
2. ~P ∧ ~~Q ass bd
3. ~P ∧ Q 2 ie/dn
4. ~P 3s
5. 4 bd incorrect
Line 5 is incorrect because a rule other than ie is used to get line 4 from line 3. The derivation is thus
incorrect -- which is good, since the sentence that it purports to derive from no premises is not a
tautology; it is false when 'P' and 'Q' are both false.
EXERCISES
1. Prove the biconditional above without using a biconditional derivation and also without using the rule
for interchange of equivalents:
∴ ~P ∧ ~~Q ↔ ~[P ∨ ~Q]
6 FORMULAS WITHOUT QUANTIFIER OVERLAY

When showing invalidities in chapter 3 we saw that it is sometimes difficult to assess the truth values of
formulas, particularly when their quantifiers have overlapping scopes, although when the quantifiers do
not have overlapping scopes it is easy. For example, given that Agatha is happy but not carefree, and
that Beatrice is carefree but not happy, and that they are the only two things in the universe, it is easy to
evaluate certain kinds of quantified formulas, such as:
∃xHx true, because Agatha is happy
∃x~Hx true, because Beatrice isn't happy
∃xCx ↔ ∀yHy false because '∃xCx' is true, and '∀yHy' is false
but this is harder to assess:
∀x∃y[Cx ↔ Hy] This sentence is true. It's true because for anyone you choose, either
they're carefree, and there's someone who is happy, and the
biconditional is true, or they aren't carefree, and there's someone who
isn't happy, and again the biconditional is true
The point is that when one quantifier falls inside the scope of another, especially if one quantifier is
universal and the other existential, it can be a sophisticated matter to decide whether the sentence is true
or not. This is why truth-functional expansions of formulas, although complex and artificial to produce,
can sometimes be helpful in deciding what is true and what is false in a counter-example.
For sentences containing only monadic predicates, there is a way to eliminate the problem cases entirely.
This is because when there are no many-place predicates, every formula is provably equivalent to one in
which no quantifier falls within the scope of another quantifier. (This is sometimes described as a formula
"without overlay", where 'overlay' refers to a situation in which one quantifier contains another within its
scope.) The proof of this in any given case can be developed using what in chapter 3 were called laws
of confinement. Here are some confinement laws, repeated from the previous chapter:

∃x[P ∨ Fx] ↔ P ∨ ∃xFx
∀x[P ∨ Fx] ↔ P ∨ ∀xFx
∃x[P ∧ Fx] ↔ P ∧ ∃xFx
∀x[P ∧ Fx] ↔ P ∧ ∀xFx
∃x[P → Fx] ↔ [P → ∃xFx]
∀x[P → Fx] ↔ [P → ∀xFx]
∃x[Fx → P] ↔ [∀xFx → P]
∀x[Fx → P] ↔ [∃xFx → P]
These laws may be applied for any sentence letter in place of 'P', or for any formula that has no free
occurrence of 'x' in place of 'P'. They are called "confinement" laws because when 'x' is not free in 'P', a
quantifier governing the whole molecular formula may be confined to one part of the formula.
These laws are very useful when used in conjunction with a few other derived rules for commutativity,
associativity, distribution, quantifier distribution, and biconditional expansion.
Suppose, for example, we are given the sentence '∀x∃y[Fx ∧ Gy]'. We can prove that this is equivalent to
the sentence '∀xFx ∧ ∃yGy'. We do so using a biconditional derivation, using only the confinement laws:

1. Show ∀x∃y[Fx ∧ Gy] ↔ ∀xFx ∧ ∃yGy
2. ∀x∃y[Fx ∧ Gy] ass bd
3. ∀x[Fx ∧ ∃yGy] 2 ie/conf
4. ∀xFx ∧ ∃yGy 3 ie/conf bd
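The semantic claim behind this derivation can also be spot-checked mechanically. Here is a small Python sketch (an added illustration, not part of the text's system) that verifies that the two formulas get the same truth value for every choice of extensions for 'F' and 'G' over the universe {0, 1, 2}.

    from itertools import combinations

    U = [0, 1, 2]
    subsets = [set(c) for r in range(len(U) + 1) for c in combinations(U, r)]

    for F in subsets:
        for G in subsets:
            lhs = all(any(x in F and y in G for y in U) for x in U)   # ∀x∃y[Fx ∧ Gy]
            rhs = all(x in F for x in U) and any(y in G for y in U)   # ∀xFx ∧ ∃yGy
            assert lhs == rhs
    print("The two formulas agree for every F and G over {0, 1, 2}.")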
Any formula with no many-place predicates can be transformed into a logically equivalent formula in
which there is no quantifier overlay. Here is a simple routine for doing so:
First, replace all biconditionals in the formula by combinations of formulas without biconditional
signs. For example, convert 'P↔Q' into '[P∧Q]∨[~P∧~Q]' using ie/bex.
Whenever a quantifier immediately precedes a negation, apply ie/qn to move the quantifier to the
right.
When a quantifier immediately precedes a conjunction or disjunction or conditional, you may be
able to move the quantifier inside using ie/conf or ie/qdist. For example, using ie/conf:
∃x[P∨Fx] becomes P ∨ ∃xFx
∃x[P∧Fx] becomes P ∧ ∃xFx
∀x[P∨Fx] becomes P ∨ ∀xFx
∀x[P∧Fx] becomes P ∧ ∀xFx
∃x[P→Fx] becomes P → ∃xFx
∀x[P→Fx] becomes P → ∀xFx
∃x[Fx→P] becomes ∀xFx→P
∀x[Fx→P] becomes ∃xFx→P
And using ie/qdist:
∃x[Gx∨Fx] becomes ∃xGx ∨ ∃xFx
∀x[Gx∧Fx] becomes ∀xGx ∧ ∀xFx
If ie/conf does not apply, and you have a universal quantifier immediately preceding a disjunction,
or an existential quantifier immediately preceding a conjunction, you can modify the disjunction or
conjunction using ie/dist. If a universal quantifier precedes a disjunction, use ie/dist to turn the
disjunction into a conjunction, and ie/qdist then applies; if an existential quantifier precedes a
conjunction, use ie/dist to turn the conjunction into a disjunction, and ie/qdist then applies.
Examples:
∃x[Fx∧[Gx∨∀yHy]] becomes by ie/dist ∃x[[Fx∧Gx]∨[Fx∧∀yHy]]
and then ie/qdist applies
∀x[Fx∨ [Gx∧∀yHy]] becomes by ie/dist ∀x[[Fx∨Gx]∧[Fx∨∀yHy]]
and then ie/qdist applies
Sometimes it may be necessary to use ie/assoc before applying one of the above rules. For
example, suppose you are given '∃x[Fx∧[Gx∧∀yHy]]'. Then none of the above rules apply. But
the conjuncts may be regrouped:
∃x[Fx∧[Gx∧∀yHy]] becomes by ie/assoc ∃x[[Fx∧Gx]∧∀yHy]
and then ie/conf applies.
Finally, a quantifier may end up having scope over another when it is actually binding nothing at
all, as in '∃x∀yFy'. In this case rule ie/vac just lets you drop the quantifier that isn't binding
anything:
∃x∀yFy becomes by ie/vac ∀yFy
The good news is that if a formula contains no many-place predicates it is provably equivalent to a
formula without overlay, and formulas without overlay are often much easier to assess. The bad news is
that when a formula contains at least one many-place predicate, no such equivalence is guaranteed.
There are plenty of examples of formulas that are not equivalent to any without overlay. Here are two:
∀x∃yF(xy)
∃y∀xF(xy)
Suppose that 'F' stands for loving, and that we are discussing a universe consisting only of people. Then
the first says that everyone loves someone, and the second says that there is someone loved by
everyone. There is no simpler way to symbolize either of these.
As a general summary of strategy, your goal will be to move quantifiers inside using QN, Qdist, Conf, and
Vac, while using Assoc, Com, Dist, and DM to reorder the formulas that are within the scopes of the
quantifiers so that the former rules will apply.
EXERCISES
1. For each of the following formulas, find an equivalent formula which has no overlay of quantifiers, and
prove that it is equivalent. (The derivations are easiest using biconditional derivations.)
7 PRENEX FORMS
A formula is in prenex form when all of its quantifiers are in a string on the front of the formula, with each
quantifier having scope over everything to its right. These formulas are in prenex form:
∀x∃y∃z∀u~[P(xy) ∧ Q(yz)]
∀x∀y∃z[R(xz) → ~S(zy)]
∃x[Hx → Gx ∧ Ky]
These are not:
∀x∃y∃z~∃u[P(xy) ∧ Q(yz)] '∃u' is not part of the string of quantifiers on the front
∀x∀y[R(xz) → ~∃zS(zy)] '∃z' is not part of the string of quantifiers on the front
∀x∃yR(xy) → Gy '∀x' and '∃y' do not have scope over the whole formula
Every formula that we can express is logically equivalent to one that is in prenex form. In fact, any
formula can be routinely transformed into a logically equivalent formula which is in prenex form. Here is a
routine for doing so:
First, replace all biconditionals by combinations of formulas without biconditional signs. For
example, convert 'P↔Q' into '[P→Q] ∧ [Q→P]' or '[P∧Q]∨[~P∧~Q]' using the derived rule ie/bex.
Second, use ie/av to change bound variables within the formula so that every quantifier uses a
different variable, and so that no quantifier uses the same variable as one that occurs free in the
original formula. For example, convert '∀x[Hx ∧ Jy → ∃yK(xy)]' into '∀x[Hx ∧ Jy → ∃wK(xw)]'.
Now move each quantifier to the front of the formula by a series of steps in accordance with these
patterns:
If the quantifier is immediately preceded by a negation, move the quantifier to the left of the
negation, changing the quantifier from existential to universal, or vice versa. This step is justified
by ie/qn.
~∃x becomes ∀x~
~∀x becomes ∃x~
The remaining patterns appeal to the confinement laws.
If the quantifier is on the front of a disjunct, move the quantifier to the front of the whole
disjunction. This rule is justified by ie/conf.
P ∨ ∃xFx becomes ∃x[P ∨ Fx]
∃xFx ∨ P becomes ∃x[Fx ∨ P]
P ∨ ∀xFx becomes ∀x[P ∨ Fx]
∀xFx ∨ P becomes ∀x[Fx ∨ P]
If the quantifier is on the front of a conjunct, move the quantifier to the front of the whole
conjunction, having scope over it. This rule is justified by ie/conf.
P ∧ ∃xFx becomes ∃x[P ∧ Fx]
∃xFx ∧ P becomes ∃x[Fx ∧ P]
P ∧ ∀xFx becomes ∀x[P ∧ Fx]
∀xFx ∧ P becomes ∀x[Fx ∧ P]
If the quantifier is on the front of a consequent of a conditional, move the quantifier to the front of
the conditional, having scope over it. This rule is justified by ie/conf.
P → ∃xFx becomes ∃x[P → Fx]
P → ∀xFx becomes ∀x[P → Fx]
If the quantifier is on the front of an antecedent of a conditional, move the quantifier to the front of
the conditional, having scope over it, changing the quantifier from existential to universal, or vice
versa. This rule is justified by ie/conf.
∃xFx → P becomes ∀x[Fx → P]
∀xFx → P becomes ∃x[Fx → P]
The moves above are to be made first using a quantifier that is not within the scope of any other
quantifier; this quantifier migrates to the very front of the formula. Then any remaining quantifier that is
not within the scope of any remaining quantifier migrates to a point just to the right of the previous
quantifier, and so on until all quantifiers are moved to the prenex string.
Notice that once biconditionals have been eliminated, the remaining quantifier moves leave the internal
molecular structure of the formula intact. For example, if the quantifiers were erased, this would be a
conditional with a conjunction as antecedent and a disjunction as consequent.
∀y∃x[Fx ∧ Gy] → ∃z[Hz ∨ ∀uS(zu)]
When the quantifiers are moved to the front, the internal structure actually is a conditional with a
conjunction as antecedent and a disjunction as consequent:
∀y∃x[Fx ∧ Gy] → ∃z[Hz ∨ ∀uS(zu)]
∃y[ ∃x[Fx ∧ Gy] → ∃z[Hz ∨ ∀uS(zu)] ]
∃y∀x[ Fx ∧ Gy → ∃z[Hz ∨ ∀uS(zu)] ]
∃y∀x ∃z[ Fx ∧ Gy → [Hz ∨ ∀uS(zu)] ]
∃y∀x ∃z[ Fx ∧ Gy → ∀u[Hz ∨ S(zu)] ]
∃y∀x ∃z ∀u[ Fx∧Gy → Hz∨S(zu) ]
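For readers who want to see the routine as an algorithm, here is a Python sketch of it (our own illustration; the tuple representation and function names are invented for the example). It assumes that biconditionals have already been eliminated, that bound variables are already distinct, and that no quantifier binds vacuously; the recursive cases correspond to the ie/qn and ie/conf moves described above.

    # Formulas are nested tuples: ('all', v, f), ('ex', v, f), ('not', f),
    # ('and', f, g), ('or', f, g), ('imp', f, g); atomic formulas are strings.

    def quant(f):
        return not isinstance(f, str) and f[0] in ('all', 'ex')

    def prenex(f):
        if isinstance(f, str):
            return f
        if quant(f):
            return (f[0], f[1], prenex(f[2]))
        if f[0] == 'not':
            g = prenex(f[1])
            if quant(g):  # ~∀x becomes ∃x~, and ~∃x becomes ∀x~ (ie/qn)
                flip = 'ex' if g[0] == 'all' else 'all'
                return (flip, g[1], prenex(('not', g[2])))
            return ('not', g)
        op, left, right = f[0], prenex(f[1]), prenex(f[2])
        if quant(left):   # pull the quantifier off the left side (ie/conf)
            q, v, body = left
            if op == 'imp':  # a quantified antecedent flips the quantifier
                q = 'all' if q == 'ex' else 'ex'
            return (q, v, prenex((op, body, right)))
        if quant(right):  # pull the quantifier off the right side (ie/conf)
            q, v, body = right
            return (q, v, prenex((op, left, body)))
        return (op, left, right)

    # The worked example from the text:
    f = ('imp',
         ('all', 'y', ('ex', 'x', ('and', 'Fx', 'Gy'))),
         ('ex', 'z', ('or', 'Hz', ('all', 'u', 'S(zu)'))))
    print(prenex(f))
    # ('ex', 'y', ('all', 'x', ('ex', 'z', ('all', 'u',
    #     ('imp', ('and', 'Fx', 'Gy'), ('or', 'Hz', 'S(zu)'))))))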
The process described above may sometimes be applied in more than one way. For example, in
∀xHx → ∀yKy
neither quantifier is within the scope of the other, and so either one may be moved first. The two options
are:
∀xHx → ∀yKy ⇒ ∃x[Hx → ∀yKy] ⇒ ∃x∀y[Hx → Ky]
∀xHx → ∀yKy ⇒ ∀y[∃xHx → Ky] ⇒ ∀y∃x[Hx → Ky]
The two resulting formulas are not the same; they differ in having an existential and a universal quantifier
permuted. Generally such permutation produces nonequivalent formulas, but in this special case they
are equivalent. A proof of the equivalence in this case can be produced by giving a biconditional
derivation in which one starts with one of the forms, goes back by stages to the original formula, and then
moves to the other form, like this:

1. Show ∃x∀y[Hx → Ky] ↔ ∀y∃x[Hx → Ky]
2. ∃x∀y[Hx → Ky] ass bd
3. ∃x[Hx → ∀yKy] 2 ie/conf
4. ∀xHx → ∀yKy 3 ie/conf
5. ∀y[∃xHx → Ky] 4 ie/conf
6. ∀y∃x[Hx → Ky] 5 ie/conf bd
EXERCISES
1. The following theorems resemble the last example from the text above. Prove them.
T263 ∀x∃y[Fx→Gy] ↔ ∃y∀x[Fx→Gy]
T264 ∀x∃y[Fx∧Gy] ↔ ∃y∀x[Fx∧Gy]
T265 ∀x∃y[Fx∨Gy] ↔ ∃y∀x[Fx∨Gy]
T266 ∀x∀y∃z[Fx∧Gy→Hz] ↔ ∀y∃z∀x[Fx∧Gy→Hz]
2. Put each of the following formulas into prenex form. In each case give a biconditional derivation that
shows that the prenex form is equivalent to the original formula.
a. ∀x∃yP(xy) → ∃uP(uu)
b. ∀x[∃uR(ux) → ∃uR(xu)]
c. ∀x∃yA(xy) ∨ ∀x∃yA(yx)
d. ∀x∀y[R(xy)↔R(yx)]
8 SOME THEOREMS
T253 ∃x∀yF(xy) → ∀y∃xF(xy) <a "quantifier switch". Note that the quantifiers do not
switch in the opposite direction>
T254 ∃x∃yF(xy) ↔ ∃x∃y[F(xy) ∨ F(yx)]
The following theorems are of interest in the application in which 'M' stands for set membership; that is,
where 'M(xy)' means that x is a member of the set y. (We also read this as saying that set y
"contains" x.) In what is generally called "naïve" set theory, it is assumed that there is a set
corresponding to any condition you can express. For example, there is a set, x, whose members are
things that are giraffes:
∃x∀z[M(zx) ↔ Gz]
There is also a set whose members are all the things that aren't giraffes:
∃x∀z[M(zx) ↔ ~Gz]
Likewise, there is a set whose members are themselves sets which contain at least one giraffe:
∃x∀z[M(zx) ↔ ∃y[Gy ∧ M(yz)]]
There is a set that contains every set that contains something that contains something:
∃x∀z[M(zx) ↔ ∃y∃u[M(uy) ∧ M(yz)]]
In the early 1900's, Bertrand Russell considered the set whose members don't contain themselves.
Naïve set theory says that there must be a set which contains exactly the sets that do not contain
themselves:
∃x∀z[M(zx) ↔ ~M(zz)]
This purported set is called "the Russell set". Russell (among others) showed that there can't be a
Russell set. You can show this also. Just construct an easy derivation for this theorem which denies that
there is a Russell set:
T269 ~∃y∀x[M(xy)↔~M(xx)] <a Russell set does not exist>
A "Russell subset" of a set w is that set which contains all and only those members of w which are not
members of themselves. Generally, there is no problem about there being a Russell subset of a set. But
one can show that if every set has a Russell subset, then there is no universal set; that is, there is no set
which contains everything:
EXERCISES
9 SHOWING INVALIDITY
The introduction of many-place predicates requires some refinements in our technique for showing
predicate calculus invalidity. The fundamental idea remains the same: describe a possible situation in
which the premises of an argument of the given form are all true and the conclusion false. But the
presence of many-place predicates brings with it two complications, a simple one and a complex one.
The simple complication is due to the fact that when we interpret a many-place predicate there are
combinations of things in the universe to take into account. For example, consider this argument:
∀x∃yF(xy)
∴ ∃y∀xF(xy)
Suppose we consider a situation in which exactly three things exist; in particular, our universe is {0, 1, 2}.
Previously, for a given predicate we needed to choose for each entity, 0, 1, and 2 whether or not it was in
the extension of the predicate. But two-place predicates don't have extensions of that sort; it makes no
sense to ask which individual things a two-place predicate is true of. This is because a two-place
predicate holds of pairs of things. So we need to say which pairs of things are in the extension of each
predicate. For example, we might decide that 'F' holds of the following pairs:
<0,1>, <1,2>, <2,0>
We indicate these choices by writing:
COUNTER-EXAMPLE
Universe: {0, 1, 2}
F: {<0,1>, <1,2>, <2,0>}
With these choices the first premise is true, because for each thing in the universe, 'F' relates it to
something: 'F' relates 0 to 1, and 1 to 2, and 2 to 0. But the conclusion is false, because there is nothing
in the universe such that 'F' relates everything to it. 0 won't do because, for example, 1 isn't related to it,
and 1 won't do because, for example, 2 isn't related to it, and 2 won't do because, for example, 0 isn't
related to it. This counter-example shows that the argument is not valid in the predicate calculus.
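Checks like the one just given can be done mechanically. In this Python sketch (an added illustration), the extension of 'F' is represented as a set of pairs, and the premise and conclusion are evaluated directly by quantifying over the universe:

    U = [0, 1, 2]
    F = {(0, 1), (1, 2), (2, 0)}

    premise = all(any((x, y) in F for y in U) for x in U)      # ∀x∃yF(xy)
    conclusion = any(all((x, y) in F for x in U) for y in U)   # ∃y∀xF(xy)
    print(premise, conclusion)   # prints: True False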
Another example:
∀x[Fx ↔ ∃yR(xy)]
Fa ∧ Fb
∴ R(ab) ∨ R(ba)
COUNTER-EXAMPLE:
Universe: {0, 1, 2}
a: 0
b: 1
F: {0,1}
R: {<0,2>, <1,2>}
With these choices the first premise is true because the things that are F are 0 and 1, and the things that
bear relation R to something are also 0 and 1. The second premise is true because both 'a' and 'b' stand
for things in the extension of 'F'. And the conclusion is false because R does not relate 0 to 1, or 1 to 0.
As in the previous chapter, if it is difficult to assess the truth of sentences in a finite model, one may be
able to produce an equivalent truth-functional expansion. Here is an example; suppose that you want to
show the invalidity of this argument:
∀x∀y∃zR(xyz)
∴ ∀x∃z∀yR(xyz)
The following counter-example is proposed, in which the three-place relation R has an extension
consisting of triples:
Universe: {0,1}
R: {<0,0,1>, <0,1,0>, <1,0,1>, <1,1,0>}
Then the premise is true and the conclusion false. But this may not be obvious. If so, one may expand
the premise and conclusion as follows. Suppose that 'i0' stands for 0 and 'i1' for 1. Begin with the
premise. Eliminating its first (universal) quantifier we get the conjunction:
∀y∃zR(i0yz) ∧ ∀y∃zR(i1yz)
Eliminating the next universal quantifier in each conjunct gives a four-part conjunction:
∃zR(i0i0z) ∧ ∃zR(i0i1z) ∧ ∃zR(i1i0z) ∧ ∃zR(i1i1z)
Finally, eliminating the existential quantifiers in each conjunct gives the following sentence:
[R(i0i0i0) ∨ R(i0i0i1)] ∧ [R(i0i1i0) ∨ R(i0i1i1)] ∧ [R(i1i0i0) ∨ R(i1i0i1)] ∧ [R(i1i1i0) ∨ R(i1i1i1)]
The atomic sentences are now easy to evaluate. For example, we know that 'R(i0i0i0)' is false because
<0,0,0> is not in the extension of R. And 'R(i0i0i1)' is true because <0,0,1> is in the extension of R. And
so on. Each disjunctive conjunct is true, so the premise is true:
[R(i0i0i0) ∨ R(i0i0i1)] ∧ [R(i0i1i0) ∨ R(i0i1i1)] ∧ [R(i1i0i0) ∨ R(i1i0i1)] ∧ [R(i1i1i0) ∨ R(i1i1i1)]
F T T F F T T F
T T T T
Regarding the conclusion, eliminating its first (universal) quantifier we get the conjunction:
∃z∀yR(i0yz) ∧ ∃z∀yR(i1yz)
Now eliminating the initial existential quantifiers in each conjunct, we get:
[∀yR(i0yi0) ∨ ∀yR(i0yi1)] ∧ [∀yR(i1yi0) ∨ ∀yR(i1yi1)]
Finally, eliminating the remaining universal quantifiers we get the following sentence, which is a two-
conjunct conjunction each of whose conjuncts is a disjunction of conjuncts.
[[R(i0i0i0) ∧ R(i0i1i0)] ∨ [R(i0i0i1) ∧ R(i0i1i1)]] ∧ [[R(i1i0i0) ∧ R(i1i1i0)] ∨ [R(i1i0i1) ∧ R(i1i1i1)]]
Assessing the atomic sentences as above yields these truth values:
[[R(i0i0i0) ∧ R(i0i1i0)] ∨ [R(i0i0i1) ∧ R(i0i1i1)]] ∧ [[R(i1i0i0) ∧ R(i1i1i0)] ∨ [R(i1i0i1) ∧ R(i1i1i1)]]
F T T F F T T F
F F F F
The disjuncts are all false, so each conjunct is false, and the conclusion is false.
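A truth-functional expansion is exactly what a mechanical evaluation does implicitly. The following Python sketch (an added illustration) evaluates the premise and conclusion directly in the proposed counter-example, confirming the values computed above:

    U = [0, 1]
    R = {(0, 0, 1), (0, 1, 0), (1, 0, 1), (1, 1, 0)}

    # ∀x∀y∃zR(xyz)
    premise = all(any((x, y, z) in R for z in U) for x in U for y in U)
    # ∀x∃z∀yR(xyz)
    conclusion = all(any(all((x, y, z) in R for y in U) for z in U) for x in U)
    print(premise, conclusion)   # prints: True False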
EXERCISES
1. Give counter-examples to show that each of the following arguments are invalid in the predicate
calculus.
a. ∀x[Fx → ∃y[Gy ∧ R(xy)]]
∀x[Gx → ~R(xx)]
∴ ∃x[Fx ∧ ∀y[Gy ∧ R(xy)]]
b. ∀x[F(xa) ↔ G(xb)]
∴ ∃x∃y[F(xy) ∧ G(xy)]
c. ∀x[Hx → R(ax)]
∀x∀y[R(xy) ↔ R(yx)]
∴ ∃y∀xR(xy)
d. ∀x[Fx → ∃yR(xy)]
∀x[~Fx → ∃yR(yx)]
∴ ∀x∃y[R(xy) ∧ R(yx)]
e. ∀x∀y[F(xy) ↔ ∃z[G(xz) ∧ ~G(yz)]]
∀x∀y[F(xy) → F(yx)]
∴ ∀x∀y[G(xy) → G(yx)]
10 INFINITE UNIVERSES
The second complication to our technique arises only in some cases. It has to do with infinite universes.
Consider the following argument:
∀x∃yR(xy)
∀x∀y∀z[R(xy) ∧ R(yz) → R(xz)]
∴ ∃xR(xx)
The first premise says that each thing is related by R to something. The second says that relation R is
transitive: if something is related by R to something else, and that something else is related to a further
thing, the first thing must be related to this further thing. Finally, the conclusion says that something is
related by R to itself.
This argument is invalid, but this cannot be shown using a counter-example with a finite universe.
Instead, we have to devise a counter-example using an infinite universe.
Here is why a finite universe will not work. Consider an attempt to create a counter-example using a finite
universe. Let us start with the smallest choice: there is only 0. Now by the first premise, 'R' relates 0 to
something. Since 0 is all there is, in order to make the first premise true, 'R' must relate 0 to 0. But then
the conclusion will be true. So a one-element universe won't do.
OK, let's try two things, 0 and 1. Again, 'R' must relate 0 to something. It can't relate 0 to 0, as we saw
above. So 'R' must relate 0 to 1. Looking at the first premise again, 'R' must relate 1 to something. It
can't relate 1 to 1, for the same reason as before; making the conclusion false forbids anything being
related to itself. So 'R' must relate 1 to 0. Fine. But now the second premise comes into play. The
second premise says that 'R' is transitive: if it relates one thing to a second, and that second to a third, it
relates the first to the third. But it does relate one thing (0) to a second thing (1), and it relates that
second thing (1) to a third thing (0), so it must now relate the first (0) to the third (0). Which is ruled out
by the required falsehood of the conclusion.
(You might think that this reasoning doesn't work, since we have talked about a "first" thing and a
"second" thing, and a "third" thing. And in the application we used, we made 0 be the "third"
thing. But there are only two of them: 0 and 1, and so it seems wrong to talk of a third thing.
The answer to this objection is that this use of 'first', 'second', and 'third' is just a manner of
speaking that is used in natural language to keep track, not of three things, but of three variables.
There aren't any variables in English so we speak in this way. This usage can be avoided if we
argue as follows:
"The second premise says that 'R' is transitive: if it relates one thing, x, to a thing, y, and it
relates thing, y, to a thing, z, then it relates thing x to thing z. But it relates one thing, 0, to 1,
and it relates 1 to 0, so it must now relate 0 to 0. Which is ruled out by the required
falsehood of the conclusion.")
Trying three things won't work either. The premises require that 0 is related to 1, and 1 to something
else, 2, but then 2 must be related to something. Not to itself, because we need the conclusion to be
false, and not to either 0 or 1, because the reasoning given above reapplies. And so on. These premises
require that each thing is related to something new, and so on ad infinitum ("to infinity").
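The impossibility argument just given can be confirmed by brute force for small universes. This Python sketch (an added illustration) tries every possible extension for 'R' over universes of sizes 1, 2, and 3 and finds no case in which the premises are true and the conclusion false:

    from itertools import combinations, product

    def subsets(xs):
        return [set(c) for r in range(len(xs) + 1) for c in combinations(xs, r)]

    for n in (1, 2, 3):
        U = range(n)
        for R in subsets(list(product(U, U))):
            p1 = all(any((x, y) in R for y in U) for x in U)   # ∀x∃yR(xy)
            p2 = all((x, z) in R                               # transitivity
                     for (x, y) in R for (w, z) in R if y == w)
            c = any((x, x) in R for x in U)                    # ∃xR(xx)
            assert not (p1 and p2 and not c)
    print("No counter-example exists with a universe of size 1, 2, or 3.")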
We therefore need to consider a situation in which there are an infinite number of things, say, all of the
integers {0, 1, 2, . . }. Previously we gave the extensions of predicates by listing the things, or the pairs of
things, in their extensions. But if the extensions happen to be infinite, their members can't be given in a
finite list. Instead, we need to explain in words how things are related by each predicate. We can do this
by giving a scheme of abbreviation. Here is a way to do this, describing a situation in which the premises
are all true and the conclusion false:
COUNTER-EXAMPLE:
Universe = the non-negative integers: {0, 1, 2, 3, . . }
R(xy) holds when x < y
That is, 'R' relates any two things, here called 'x' and 'y', if and only if the first thing, x, is arithmetically
less than the second thing, y.
Now consider how the parts of the argument fare in this counter-example.
∀x∃yR(xy) True: Every integer is less than some integer
∀x∀y∀z[R(xy) ∧ R(yz) → R(xz)] True: For any integers, x, y, z, if x is less than y and y is
less than z then x is less than z
∴ ∃xR(xx) False: No integer is less than itself
Here is a second example:

∀x∀y[R(xy) → ~R(yx)]
∀x∃yR(yx)
∀x∀y∀z[R(xy) ∧ R(yz) → R(xz)]
∴ ∃x∀yR(xy)

Again, a finite universe will not work to produce a counter-example. (Try it and see.) But a counter-
example with an infinite universe is possible. For example:
COUNTER-EXAMPLE
Universe: {0, 1, 2, . . . }
R(xy) holds when x > y
The first premise is true in this counter-example because whenever one thing is greater than another, that
other is not greater than the first. The second premise says that for anything there is something greater
than it, which is true in our infinite universe. The third is the transitivity condition again, which holds for
greater-thanness. And the conclusion is false because there isn't a thing in this universe which is greater
than everything.
In constructing counter-examples in this way one must keep in mind that each name must be assigned
something that is actually in the chosen universe. For example:
~∃xR(xx)
∀x∃yR(xy)
∀x∀y∀z[R(xy) ∧ R(yz) → R(xz)]
∴ ∃xR(xa)
COUNTER-EXAMPLE
Universe: {1, 2, . . . }
a: 1
R(xy) holds when x < y
The first premise is true because nothing in the given universe is less than itself. The second is true
because for each thing in the universe, there’s something that it’s less than. The third premise is true
because less than is transitive. And the conclusion is false because there isn’t a thing in the universe
that’s less than 1. Of course, there are things that are less than 1, but not in the given universe. Notice
that given the universe we have chosen, we could not specify that ‘a’ stands for 0, because 0 isn’t in the
given universe. (Of course, we could choose a different universe, say: {0, 1, 2, . . . }, and then we could
let ‘a’ stand for 0.)
EXERCISES
Give counterexamples with infinite domains to show that each of the following arguments is invalid.
a. ∀x~R(xx)
∀x∃yR(xy)
∴ ∃x∃y∃z[R(xy) ∧ R(yz) ∧ ~R(xz)]
SECTION 1
1. Which of the following are formulas in official notation? Which are formulas in informal notation?
Which are not formulas at all?
a. ~~F(xa) Formula – official notation
b. [∀xG(bx) → ~∃yG(yx)] Formula – official notation
c. ∀xG(bx) → ~∃yG(yx) Formula – informal notation
d. ~Fa & ~G(aa) & ~Fb & Gxb Not a formula -- lacks parentheses around 'xb'
e. ~F(a) ∨ ~G(ab) Not a formula -- parentheses not used with 1-place predicates
f. ~Fa ∨ ~Gab Not a formula – lacks parentheses around ‘ab’
g. ~∃x[~Fx → ∀yG(yy)] Formula – official notation
h. ∃x∀y~Fxy Not a formula – lacks parentheses around ‘xy’
i. ∃x∃yF[xy] Not a formula – parentheses required, not brackets
SECTION 2
k. Eileen resides in a big city. <use 'R(xy)' for 'x resides in y'>
∃x[x is a big city ∧ Eileen resides in x]
∃x[Bx ∧ Cx ∧ R(ex)]
l. Eileen and Betty both reside in the same city.
∃x[x is a city ∧ Eileen resides in x ∧ Betty resides in x]
∃x[Cx ∧ R(ex) ∧ R(bx)]
m. If Hank resides in Brea then he attends UCLA; otherwise he doesn't attend UCLA.
[Hank resides in Brea → Hank attends UCLA] ∧ [Hank doesn't reside in Brea → Hank doesn't
attend UCLA]
[R(hb) → A(ha)] ∧ [~R(hb) → ~A(ha)]
n. If David and Hank both live in Brea then David attends a private school and Hank attends a public
school.
D: private E: public C: school L(xy): x lives in y A(xy): x attends y
David lives in Brea ∧ Hank lives in Brea→ ∃x[x is a private school ∧ David attends x] ∧ ∃x[x is a
public school ∧ Hank attends x]
L(db) ∧ L(hb) → ∃x[Dx ∧ Cx ∧ A(dx)] ∧ ∃x[Ex ∧ Cx ∧ A(hx)]
o. Nobody who comes from Germany attends a Californian school. F: Californian
~∃x[x comes from Germany ∧ x attends a Californian school]
~∃x[x comes from Germany ∧ ∃y[y is a Californian school ∧ x attends y]]
~∃x[C(xg) ∧ ∃y[Fy ∧ Cy ∧ A(xy)]]
p. No giraffe likes Fido unless it is crazy
∀x[x is a giraffe → x doesn't like Fido unless x is crazy]
∀x[Gx → ~L(xf) ∨ Cx]
or
~∃x[x is a giraffe ∧ x likes Fido ∧ x isn't crazy]
~∃x[Gx ∧ L(xf) ∧ ~Cx]
q. Nobody gives a book to a freshman unless it is inexpensive G(xyz): x gives y to z
∀x∀y[x is a person ∧ y is a book → x doesn't give y to a freshman unless y is inexpensive]
∀x∀y[Ex ∧ By → ~∃z[Fz∧G(xyz)] ∨ Iy]
or
~∃x∃y[Ex ∧ By ∧ ∃z[Fz∧G(xyz)] ∧ ~Iy]
3 DERIVATIONS
Show each of the following arguments to be valid.
1. ∀x∀y∀z[S(xy) ∧ S(yz) → S(xz)]
S(bc) ∧ S(ab)
∴ S(ac)
1. Show S(ac)
2. S(ab) ∧ S(bc) → S(ac) pr1 ui ui ui
3. S(ab) pr2 s
4. S(bc) pr2 s
5. S(ac) 3 4 adj 2 mp dd
3. ∀x∃yS(xy)
∀x∀y[Cx ∧ S(xy) → Dy]
∀x∀y[Dx ∧ S(yx) → Dy]
∴ ~∃x[Cx ∧ ~Dx]
1. Show ~∃x[Cx ∧ ~Dx]
2. ∃x[Cx ∧ ~Dx] ass id
3. Cu ∧ ~Du 2 ei
4. ∃yS(uy) pr1 ui
5. S(uv) 4 ei
6. Cu ∧ S(uv) → Dv pr2 ui ui
7. Dv 3 s 5 adj 6 mp
8. Dv ∧ S(uv) → Du pr3 ui ui
9. Du 7 5 adj 8 mp
10. ~Du 3s 9 id
4. ∃xEx ∧ ∃x~Ex
∀x∀y[Ex ∧ S(xy) → Ey]
∴ ∃x∃y~S(xy)
1. Show ∃x∃y~S(xy)
2. ∃xEx pr1 s
3. Eu 2 ei
4. ∃x~Ex pr1 s
5. ~Ev 4 ei
6. Eu ∧ S(uv) → Ev pr2 ui ui
7. ~[Eu ∧ S(uv)] 5 6 mt
8. ~Eu ∨ ~S(uv) 7 dm
9. ~S(uv) 3 dn 8 mtp
10. ∃x∃y~S(xy) 9 eg eg dd
5. ∀x∀y[S(xy) ↔ S(yx)]
∃x∃y[Ax ∧ By ∧ S(xy)]
∴ ∃x∃y[By ∧ Ax ∧ S(yx)]
2. ∀x∃y[Ax ↔ R(xy)]
∀z∀y[R(zy) ↔ S(yz)]
∀x[[Ax↔Ax] ↔ Ax]
∴ Au
1. Show Au
2. ∃y[Ax ↔ R(xy)] pr1 ui
3. Au ↔ R(xu) 2 ei
4. R(xu) ↔ S(ux) pr2 ui ui
5. Au ↔ S(ux) 4 ie/3 OK
6. Au ↔ S(ux) 3 ie/4 OK
7. Au ↔ Au 5 ie/6 OK
8. [Au↔Au] ↔ Au pr3 ui
9. Au 7 ie/8 dd OK -- results from 7 by changing ‘Au↔Au’ to
‘Au’, which is justified by line 8
5 BICONDITIONAL DERIVATIONS
1. Prove the given biconditional without using a biconditional derivation and also without using the rule for
interchange of equivalents:
∴ ~P ∧ ~~Q ↔ ~[P ∨ ~Q]
1. Show ~P ∧ ~~Q ↔ ~[P ∨ ~Q]
2. Show ~P ∧ ~~Q → ~[P ∨ ~Q]
3. ~P ∧ ~~Q ass cd
4. Show ~[P ∨ ~Q]
5. P ∨ ~Q ass id
6. ~Q 3 s 5 mtp
7. ~~Q 3 s 6 id
8. 4 cd
9. Show ~[P ∨ ~Q] → ~P ∧ ~~Q
10. ~[P ∨ ~Q] ass cd
11. Show ~P
12. P ass id
13. P ∨ ~Q 12 add
14. 10 13 id
15. Show ~~Q
16. ~Q ass id
17. P ∨ ~Q 16 add
18. 10 17 id
19. ~P ∧ ~~Q 11 15 adj cd
20. ~P ∧ ~~Q ↔ ~[P ∨ ~Q] 2 9 cb dd
6 FORMULAS WITHOUT QUANTIFIER OVERLAY

1. For each of the following formulas, find an equivalent formula which has no overlay of quantifiers, and
prove that it is equivalent.
b. ∃z∀x[Fx ↔ Fz]
One way to do this kind of problem is to ask yourself how to express what a formula says in terms of a
formula without overlay, and then prove that that formula is equivalent to the original. Done this way, the
problem can lead to a long derivation.
Another way to do it is to just go through and change parts to their equivalents. This is convenient if you
make use of derived rules, beginning by setting up the derivation before you know what the final formula
will be. Your goal will be to turn subformulas into disjunctions and conjunctions and use the laws:
assoc associativity
com commutativity
dist distribution
conf confinement
qdist quantifier distribution
bex biconditional expansion
One may then replace the question marks with the formula on line 12, and then add:
13. 12 bd
and box and cancel the original 'show'.
e. ∀x∃y∀z[Fx ∧ Gz → Fz ∨ Gy]
The strategy here is to manipulate the parts to get the two atomic formulas containing 'z' together as a
unit (lines 3-5) and then apply the confinement laws.
1. Show ∀x∃y∀z[Fx ∧ Gz → Fz ∨ Gy] ↔ ∃xFx → [∀z[~Gz ∨ Fz] ∨ ∃yGy]
2. ∀x∃y∀z[Fx ∧ Gz → Fz ∨ Gy] ass bd
3. ∀x∃y∀z[Fx → [Gz → Fz ∨ Gy]] 2 ie/exp
4. ∀x∃y∀z[Fx → [~Gz ∨ [Fz ∨ Gy]]] 3 ie/cdj
5. ∀x∃y∀z[Fx → [[~Gz ∨ Fz] ∨ Gy]] 4 ie/assoc
6. ∀x∃y[Fx → ∀z[[~Gz ∨ Fz] ∨ Gy]] 5 ie/conf
7. ∀x∃y[Fx → [∀z[~Gz ∨ Fz] ∨ Gy]] 6 ie/conf
8. ∀x[Fx → ∃y[∀z[~Gz ∨ Fz] ∨ Gy]] 7 ie/conf
9. ∀x[Fx → [∀z[~Gz ∨ Fz] ∨ ∃yGy]] 8 ie/conf
10. ∃xFx → [∀z[~Gz ∨ Fz] ∨ ∃yGy] 9 ie/conf bd
7 PRENEX FORMS

2. Put each of the following formulas into prenex form. In each case give a biconditional derivation that
shows that the prenex form is equivalent to the original formula.

a. ∀x∃yP(xy) → ∃uP(uu)

Finally fill in the right-hand side of the top biconditional with what you have shown on line 5, and box and
cancel:
1. Show [∀x∃yP(xy) → ∃uP(uu)] ↔ ∃x∀y∃u[P(xy) → P(uu)]
2. ∀x∃yP(xy) → ∃uP(uu) ass bd
3. ∃x[∃yP(xy) → ∃uP(uu)] 2 ie/conf
4. ∃x∀y[P(xy) → ∃uP(uu)] 3 ie/conf
5. ∃x∀y∃u[P(xy) → P(uu)] 4 ie/conf bd
Derivations for the other examples will be generated in this way: set up a biconditional derivation and then
carry it out. Only the final derivations are given below:
b. ∀x[∃uR(ux) → ∃uR(xu)]
The trick here is to use rule av to change bound variables so that the confinement rules will apply.
1. Show ∀x[∃uR(ux) → ∃uR(xu)] ↔ ∀x∀y∃u[R(yx) → R(xu)]
2. ∀x[∃uR(ux) → ∃uR(xu)] ass bd
3. ∀x[∃yR(yx) → ∃uR(xu)] 2 ie/av
4. ∀x∀y[R(yx) → ∃uR(xu)] 3 ie/conf
5. ∀x∀y∃u[R(yx) → R(xu)] 4 ie/conf bd
c. ∀x∃yA(xy) ∨ ∀x∃yA(yx)
1. Show [∀x∃yA(xy) ∨ ∀x∃yA(yx)] ↔ ∀x∃y∀u∃v[A(xy) ∨ A(vu)]
2. ∀x∃yA(xy) ∨ ∀x∃yA(yx) ass bd
3. ∀x∃yA(xy) ∨ ∀x∃vA(vx) 2 ie/av
4. ∀x∃yA(xy) ∨ ∀u∃vA(vu) 3 ie/av
5. ∀x[∃yA(xy) ∨ ∀u∃vA(vu)] 4 ie/conf
6. ∀x∃y[A(xy) ∨ ∀u∃vA(vu)] 5 ie/conf
7. ∀x∃y∀u[A(xy) ∨ ∃vA(vu)] 6 ie/conf
8. ∀x∃y∀u∃v[A(xy) ∨ A(vu)] 7 ie/conf bd
d. ∀x∀y[R(xy)↔R(yx)]
This is already in prenex form.
8 SOME THEOREMS
Derivations are not given here for numbered theorems.
9 SHOWING INVALIDITY
1. Give counter-examples to show that these arguments are invalid in the predicate calculus.
a. ∀x[Fx → ∃y[Gy ∧ R(xy)]]
∀x[Gx → ~R(xx)]
∴ ∃x[Fx ∧ ∀y[Gy ∧ R(xy)]]
Universe: {0}
F: {}
G: <any choice will do>
R: <any choice will do>
Both premises are true because they contain conditionals with false antecedents for any value of 'x'; the
conclusion is false because nothing is F.
Another answer:
Universe: {0,1}
F: {0,1}
G: {0,1}
R: {<1,0>, <0,1>}
The first premise is true because for any choice of 'x' (either 0 or 1) there is something which is G and
related to the choice of 'x' by R. The second premise is true because nothing is related to itself by R. The
conclusion is false because whatever you pick for 'x' the universal quantifier '∀y' will require that that thing
be related to itself by R.
b. ∀x[F(xa) ↔ G(xb)]
∴ ∃x∃y[F(xy) ∧ G(xy)]
Universe: {0,1}
a: 1
b: 0
F: {<0,1>}
G: {<0,0>}
The premise is true because its instances are all true; choosing 0 for 'x' both sides of the biconditional are
true; choosing 1 for 'x' both sides of the biconditional are false. The conclusion is false since there is
nothing you can pick for 'x' and 'y' which give you a pair of things that is in the extensions of both F and G.
c. ∀x[Hx → R(ax)]
∀x∀y[R(xy) ↔ R(yx)]
∴ ∃y∀xR(xy)
Universe: {0,1}
a: 0
H: {0}
R: {<0,0>}
The first premise is true because there is only one thing that is H, and that is 0, and it is related to the
thing that 'a' stands for (namely, 0) by R. The second premise says that any pair of things that are related
by R are also related in reverse order; the only thing that R applies to is the pair <0,0>, and reversing it
makes no difference. The conclusion is false since there isn't anything that is related to everything by R.
d. ∀x[Fx → ∃yR(xy)]
∀x[~Fx → ∃yR(yx)]
∴ ∀x∃y[R(xy) ∧ R(yx)]
Universe: {0,1}
F: {0}
R: {<0,1>}
The first premise is true because whatever is F, namely, 0, is related to something (namely, 1) by R. The
second premise is true because for whatever isn't F, namely, 1, something (namely, 0) is related to it by
R. The conclusion is false since it says that everything is related to something by R in both directions,
and nothing is related to anything by R in both directions.
e. ∀x∀y[F(xy) ↔ ∃z[G(xz) ∧ ~G(yz)]]
∀x∀y[F(xy) → F(yx)]
∴ ∀x∀y[G(xy) → G(yx)]

Universe: {0,1,2}
F: {<0,1>, <1,0>, <1,2>, <2,1>}
G: {<0,0>, <1,1>, <2,0>}
The first premise is true since it comes out true for all choices of 'x' and 'y'. (There are nine choices in all;
each can be checked on its own.) The second premise says that F is symmetric; this is clearly true since
for every pair in the extension of F the reverse pair is also there. The conclusion says falsely that G is
symmetric; G holds of <2,0> but not of <0,2>.
Another answer:
Universe: {0,1}
F: {}
G: {<0,1>, <1,1>}
The first premise is true since both sides of the biconditional are false for any choices of 'x' and 'y'; this is
clear for the left-hand side since F is true of no pairs at all; the other side can be checked by cases. The
second premise is vacuously true. The conclusion falsely says that G is symmetric; but G holds of <0,1>
and not of <1,0>.
10 INFINITE UNIVERSES
Give counterexamples with infinite domains to show that each of the following arguments is invalid.
a. ∀x~R(xx)
∀x∃yR(xy)
∴ ∃x∃y∃z[R(xy) ∧ R(yz) ∧ ~R(xz)]
Universe: {0, 1, 2, . . . } zero and all the positive integers
R(xy): x < y
The first premise says truly that nothing is less than itself. The second says truly that for every non-
negative integer there is another non-negative integer that it is less than. The conclusion is false since
less than is transitive.
The first premise is true since less than is transitive. The second is true since no even number is less
than itself. The third premise says truly that for every non-negative integer there is an even non-negative
integer that it is less than. The conclusion says falsely that for some non-negative integer it isn't the case
that it's even if and only if it's even.
The first premise is true since for every even non-negative integer there is an odd non-negative integer
that is greater than it. The second premise is true since for every odd non-negative integer there is an
even non-negative integer that is greater than it. The third premise says truly that every non-negative
integer is either even or odd. <if you think that 0 is neither even nor odd, then just change the
interpretation of 'E' to 'x is even or x=0'> The fourth premise says truly that greater than is transitive. The
conclusion says falsely that some non-negative integer is greater than itself.
Chapter Five
Identity and Operation Symbols
1 IDENTITY
A certain relation is given a special treatment in logic. This is the identity relation -- the relation that relates
each thing to itself and relates no thing to another thing. It is represented by a two-place predicate. For
historical reasons, it is usually written as the equals sign of arithmetic, and instead of being written in the
position that we use for other predicates, in front of its terms:
=(xy)
it is written in between its terms:
x=y
Except for its special shape and location, it is just like any other two-place predicate. So the following are
formulas:
a=x
b=z ∨ ~b=c
Ax → x=x
∀x∀y[x=a → [a=y → x=y]]
∀x[Bx → ∃y[Cy ∧ x=y]]
This sign is used to symbolize the word 'is' in English when that word is used between two names. For
example, according to the famous story, Dr. Jekyll is Mr. Hyde, so using 'e' for Jekyll and 'h' for Hyde we
write 'Jekyll is Hyde' as 'e=h'. And using 'c' for 'Clark Kent', 'a' for 'Superman', and 'd' for Jimmy Olsen we
can write:
a=c ∧ ~a=d Superman is Clark Kent but Superman is not Jimmy Olsen
It is customary to abbreviate the negation of an identity formula by writing a slash through the identity sign:
'≠' instead of putting the negation sign in front. So we could write:
a=c ∧ a≠d Superman is Clark Kent but Superman is not Jimmy Olsen
There are other ways of saying 'is'. The word 'same' sometimes conveys the sense of identity -- and
sometimes not. Consider the claim:
Bozo and Herbie were wearing the same pants.
This could simply mean that they were wearing pants of the same style; if so, that is not identity in the
logical sense. But it could mean that there was a single pair of pants that they were both inside of; that
would mean identity.
The word 'other' is often meant as the negation of identity. In the following sentences:
Agatha saw a dragonfly and Betty saw a dragonfly
Agatha saw a dragonfly and Betty saw another dragonfly
the first sentence is neutral about whether they saw the same dragonfly, but in the second sentence Betty
saw a dragonfly that was not the same dragonfly that Agatha saw:
∃x[Dx ∧ S(ax)] ∧ ∃y[Dy ∧ S(by)]
∃x[Dx ∧ S(ax) ∧ ∃y[Dy ∧ y≠x ∧ S(by)]]
y is other than x
EXERCISES

1. Which of the following are formulas?
a. Fa ∧ Gb ∧ F=G
b. ∀x∀y[R(xy) → x=y]
c. ∀x∀y[R(xy) ∧ x≠y ↔ S(yx)]
d. R(xy) ∧ R(yx) ↔ x=y
e. ∃x∃y[x=y ∧ y≠x]
2 AT LEAST AND AT MOST, EXACTLY, AND ONLY

At most one: If we want to say that Betty saw at most one dragonfly, we can say that if she saw a
dragonfly and a dragonfly, they were the same:
∀x∀y[x is a dragonfly that Betty saw ∧ y is a dragonfly that Betty saw → x=y]
∀x∀y[Dx ∧ Dy ∧ S(bx) ∧ S(by) → x=y]
This doesn't say whether Betty saw any dragonflies at all; it merely requires that she didn't see more than
one. We can also symbolize this by saying that she didn't see at least two dragonflies:
~∃x∃y[Dx ∧ Dy ∧ S(bx) ∧ S(by) ∧ y≠x]
It is easy to show that these two symbolizations are equivalent:
1. Show ∀x∀y[Dx ∧ Dy ∧ S(bx) ∧ S(by) → x=y] ↔ ~∃x∃y[Dx ∧ Dy ∧ S(bx) ∧ S(by) ∧ y≠x]
2. ~∃x∃y[Dx ∧ Dy ∧ S(bx) ∧ S(by) ∧ y≠x] ass bd
3. ∀x∀y~[Dx ∧ Dy ∧ S(bx) ∧ S(by) ∧ y≠x] 2 ie/qn ie/qn
4. ∀x∀y[Dx ∧ Dy ∧ S(bx) ∧ S(by) → x=y] 3 ie/nc bd
At most two: If we want to say that Betty saw at most two dragonflies either of the above styles will do:
∀x∀y∀z[Dx ∧ Dy ∧ Dz ∧ S(bx) ∧ S(by) ∧ S(bz) → x=y ∨ x=z ∨ y=z]
~∃x∃y∃z[Dx ∧ Dy ∧ Dz ∧ x≠y ∧ y≠z ∧ x≠z ∧ S(bx) ∧ S(by) ∧ S(bz)]
Exactly one: There are two natural ways to say that Betty saw exactly one dragonfly. One is to conjoin
the claims that she saw at least one and that she saw at most one:
∃x[Dx ∧ S(bx)] ∧ ∀x∀y[Dx ∧ Dy ∧ S(bx) ∧ S(by) → x=y]
Or we can say that she saw a dragonfly, and any dragonfly she saw was that one:
∃x[Dx ∧ S(bx) ∧ ∀y[Dy ∧ S(by) → x=y]]
Or, even more briefly:
∃x∀y[Dy ∧ S(by) ↔ y=x]
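The equivalence of these renderings can be spot-checked mechanically. In the following Python sketch (an added illustration), 'D' is the extension of 'dragonfly', and since 'b' is fixed, the things Betty saw are collected into a set S, so that 'S(bx)' becomes 'x in S':

    from itertools import combinations

    U = [0, 1, 2]
    subsets = [set(c) for r in range(len(U) + 1) for c in combinations(U, r)]

    def exactly_one_long(D, S):
        # ∃x[Dx ∧ S(bx)] ∧ ∀x∀y[Dx ∧ Dy ∧ S(bx) ∧ S(by) → x=y]
        at_least = any(x in D and x in S for x in U)
        at_most = all(not (x in D and x in S and y in D and y in S) or x == y
                      for x in U for y in U)
        return at_least and at_most

    def exactly_one_brief(D, S):
        # ∃x∀y[Dy ∧ S(by) ↔ y=x]
        return any(all((y in D and y in S) == (y == x) for y in U) for x in U)

    for D in subsets:
        for S in subsets:
            assert exactly_one_long(D, S) == exactly_one_brief(D, S)
    print("The long and brief 'exactly one' symbolizations agree over {0, 1, 2}.")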
Exactly two: Similarly with exactly two; we can use the conjunction of she saw at least two and she saw
at most two:
∃x∃y[Dx ∧ S(bx) ∧ Dy ∧ S(by) ∧ y≠x] ∧
∀x∀y∀z[Dx ∧ Dy ∧ Dz ∧ S(bx) ∧ S(by) ∧ S(bz) → x=y ∨ x=z ∨ y=z]
or we can say that she saw two dragonflies, and any dragonfly she saw is one of them:
∃x∃y[Dx ∧ S(bx) ∧ Dy ∧ S(by) ∧ y≠x ∧ ∀z[Dz ∧ S(bz) → x=z ∨ y=z]]
or, even more briefly:
∃x∃y[y≠x ∧ ∀z[Dz ∧ S(bz) ↔ z=x∨z=y]]
Talk of at least, or at most, or exactly, frequently occurs within larger contexts. For example:
Some giraffe that saw at least two hyenas was seen by at most two lions
∃x[x is a giraffe ∧ x saw at least two hyenas ∧
x was seen by at most two lions]
i.e.
∃x[Gx ∧ ∃y∃z[Hy ∧ Hz ∧ y≠z ∧ S(xy) ∧ S(xz)] ∧
∀u∀v∀w[Lu ∧ Lv ∧ Lw ∧ S(ux) ∧ S(vx) ∧ S(wx) → u=v ∨ v=w ∨ u=w]]
Or this:
Each giraffe that saw exactly one hyena saw a lion that exactly one hyena saw
∀x[x is a giraffe ∧ x saw exactly one hyena → ∃y[Ly ∧ exactly one hyena saw y ∧ x saw y]]
∀x[Gx ∧ ∃z[Hz ∧ S(xz) ∧ ∀u[Hu ∧ S(xu) → u=z]] →
∃y[Ly ∧ ∃v[Hv ∧ S(vy) ∧ ∀w[Hw ∧ S(wy) → w=v]] ∧ S(xy)]]
Only: In chapter 1 we saw how to symbolize claims with ‘only if’, and in chapter 3 we discussed how to
symbolize ‘only As are Bs’. When ‘only’ occurs with a name, it has a similar symbolization. Saying that
only giraffes are happy is to say that anything that is happy is a giraffe:
∀x[Hx → Gx]
or that nothing that isn't a giraffe is happy:
~∃x[~Gx ∧ Hx]
With a name or variable the use of 'only' is generally taken to express a stronger claim. For example,
'only Cynthia sees Dorothy' is generally taken to imply that Cynthia sees Dorothy, and that anyone who
sees Dorothy is Cynthia:
S(cd) ∧ ∀x[S(xd) → x=c]
This can be symbolized briefly as:
∀x[S(xd) ↔ x=c]
We have seen that 'another' can often be represented by the negation of an identity; the same is true of
'except' and 'different':
No freshman except Betty is happy.
~∃x[x is a freshman ∧ x isn't Betty ∧ x is happy]
~∃x[Fx ∧ ~x = b ∧ Hx]
This has the same meaning as 'No freshman besides Betty is happy'. Notice that neither of these
sentences entail that Betty is happy. That is because one could reasonably say something like 'No
freshman except Betty is happy, and for all I know she isn't happy either'. So the sentence by itself does
not say that Betty herself is happy, although if you knew that the speaker knew whether or not Betty is
happy then since the speaker didn't say 'No freshman is happy', you can assume that the speaker thinks
Betty is happy.
Lastly:
Betty groomed a dog and Cynthia groomed a different dog.
∃x[x is a dog ∧ Betty groomed x ∧ ∃y[y is a dog ∧ y is different from x ∧ Cynthia groomed y]]
∃x[Dx∧ G(bx) ∧ ∃y[Dy ∧ ~y = x ∧ G(cy)]]
EXERCISES
1. Symbolize each of the following,
a. At most one candidate will win at least two elections
b. Exactly one election will be won by no candidate
c. Betty saw at least two hyenas which (each) saw at most one giraffe.
2. The text states that one can symbolize 'Betty saw exactly one dragonfly' as:
∃x∀y[Dy ∧ S(by) ↔ y=x].
Prove that this sentence is equivalent to one of the other symbolizations given in the text for 'exactly one'.
3. Similarly show that one can symbolize 'Betty saw exactly two dragonflies' as:
∃x∃y[x≠y ∧ ∀z[Dz ∧ S(bz) ↔ z=x ∨ z=y]]
by showing that this is equivalent to one of the other symbolizations given in the text.
4. Show that the two symbolizations proposed above for only Cynthia sees Dorothy are equivalent:
∴ S(cd) ∧ ∀x[S(xd) → x=c] ↔ ∀x[S(xd) ↔ x=c]
3 DERIVATIONAL RULES FOR IDENTITY

Rule sid (self-identity)
One may write on any line a formula consisting of a term, followed by '=', followed by that same term.
As justification, write 'sid'.

This rule is not often used, but when it is needed, it is straightforward. For example, it can be used to
show that this argument is valid:
∀x x=x → P
∴ P
1. Show P
2. ~P ass id
3. ~∀x x=x 2 pr1 mt
4. ∃x~x=x 3 qn
5. ~u=u 4 ei
6. u=u sid Rule sid
7. 5 6 id

Here is an alternative derivation:
1. Show P
2. Show ∀x x=x
3. x=x sid ud Rule sid
4. P pr1 3 mp dd
The more commonly used rule is called Leibniz's Law, for the 17th-18th century philosopher Gottfried
Wilhelm von Leibniz. It is an application of the principle that if x=y then whatever is true of x is true of y.
Specifically:

Rule LL (Leibniz's Law)
If an identity formula occurs on an available line or premise, and a formula containing free
occurrences of one of its terms occurs on an available line or premise, one may write that formula
with those occurrences replaced by occurrences of the other term of the identity.
As justification, write the line numbers of the two lines used and 'LL'.
Example:
Cynthia saw a rabbit, and nothing else. ∃x[Rx ∧ S(cx) ∧ ∀y[S(cy) → ~y≠x]]
Cynthia saw Henry S(ch)
∴ Henry is a rabbit Rh
1. Show Rh
2. Ru ∧ S(cu) ∧ ∀y[S(cy) → ~y≠u] pr1 ei
3. ∀y[S(cy) → ~y≠u] 2s
4. S(ch) → ~h≠u 3 ui
5. ~h≠u pr2 4 mp
6. h=u 5 dn
7. Ru 2ss
8. Rh 6 7 LL dd
It is convenient to also have a contrapositive form of Leibniz's law, saying that if something that is true of a
is not true of b, then a≠b. For example:
Fa ∧ S(ac)
~[Fb ∧ S(bc)]
∴ a≠b
This inference is easily attainable with an indirect derivation: assume 'a=b' and use LL with the premises
to derive a contradiction. But it is convenient to include this as a special case of Leibniz's law itself: if a
formula occurs on an available line or premise, and the negation of the result of replacing free
occurrences of one term in it by another term also occurs on an available line or premise, one may write
the negation of the identity of the two terms, citing the two lines and 'LL'.
An additional rule is derivable from the rules at hand. It is called Symmetry because it says that identity is
symmetric: if x=y then y=x:
Rule sm (symmetry)
If an identity formula (or the negation of an identity formula) occurs on an available line or premise,
one may write that formula with its left and right terms interchanged.
As justification, write the earlier line number and 'sm'.
∃x[x=b ∧ Fx]
∀x[b=x → Gx]
∴ ∃x[Fx ∧ Gx]
1. Show ∃x[Fx ∧ Gx]
2. u=b ∧ Fu pr1 ei
3. u=b 2s
4. b=u → Gb pr2 ui
5. b=u 3 sm rule sm
6. Gb 4 5 mp
7. Fu 2s
8. Fb 3 7 LL
9. Fb ∧ Gb 6 8 adj
10. ∃x[Fx ∧ Gx] 9 eg dd
∴ ∀x[x=a → a=x]
EXERCISES
3. Symbolize these arguments and produce derivations to show that they are valid.
a. Every giraffe that loves some other giraffe loves itself.
Every giraffe loves some giraffe.
∴ Every giraffe loves itself.
b. No cat that likes at least two dogs is happy.
Tabby is a cat that likes Fido.
Tabby likes a dog that Betty owns.
Fido is a dog.
Tabby is happy.
∴ Betty owns Fido.
5 OPERATION SYMBOLS
So far we have dealt only with simple terms: variables and names. In mathematics and in science
complex terms are common. Some familiar examples from arithmetic are:
−x, x², √x, . . . negative x, x squared, the square root of x
x+y, x−y, x×y, . . . x plus y, x minus y, x times y
These complex terms consist of variables combined with special symbols called operation symbols. The
operation symbols on the first line are one-place operation symbols; they each combine with one variable
to make a complex term. The two-place operation symbols on the second line each combine with two
variables to make a complex term. Operation symbols also combine with names. It is customary in
arithmetic to treat numerals as names of numbers. When numeral names combine with operation
symbols we get complex signs such as:
−4, 7², √9, . . .
In logical notation we use any small letter between 'a' and 'h' as an operation symbol; the terms that they
combine with are enclosed in parentheses following them. So if 'a' stands for the squaring operation, we
write 'a‹x›' for what is represented in arithmetic as 'x²', and if 'b' stands for the addition operation, we write
'b‹xy›' for what is represented in arithmetic as 'x+y'. Specifically:
Terms
Simple names (the letters 'a' through 'h') and variables (the letters 'i' through 'z') are terms.
Any small letter between 'a' and 'h' can be used as an operation symbol.
Any operation symbol followed by some number of terms in parentheses is a term.
The same letters are used both for names and for operation symbols. (It is often held that names are
themselves zero-place operation symbols; a name makes a term by combining with nothing at all.) You
can tell quickly whether a small letter between 'a' and 'h' is being used as a name or as an operation
symbol: if it is directly followed by a left parenthesis, it is being used as an operation symbol; otherwise it is
being used as a name.
Examples of terms are: 'b', 'w', 'e‹x›', 'f‹by›', 'h‹zbx›'. Since an operation symbol may combine with any
term, it may combine with complex terms. So 'f‹z e‹x››' is a term, which consists of the operation symbol
'f' followed by the two terms: 'z' and 'e‹x›'. Terms can be much more complex than this. Consider the
arithmetical expression:
a × (b² + c²)
If 'd' stands for the multiplication operation, 'e' for addition, and 'f' for squaring, this will be expressed in
logical notation as:
d‹a e‹f‹b›f‹c›››
In arithmetic, operation symbols can go in front of the terms they combine with (as with '−4'), or between
the terms they combine with (as with '5×8'), or to-the-right-and-above the terms they combine with (as
with '7²'), and so on. The logical notation used here uniformly puts operation symbols in front of the terms
that they combine with.
We are used to seeing arithmetical notation used in equations with the equals sign. If numerals are
names of numbers, then the equals sign can be taken to mean identity, and we can use our logical identity
sign -- which already looks exactly like the equals sign -- for the equals sign. For example, we can take
the equation:
7+5 = 12
to say that the number that '7+5' stands for is exactly the same number that '12' stands for. The equation
will appear in logical notation as:
e‹ab› = c
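To make the identity reading concrete, here is a hedged sketch: if we interpret 'e' as addition and let 'a', 'b', and 'c' name 7, 5, and 12 (our illustrative choices), the two sides of the equation denote the very same number.

    # Sketch: reading 'e‹ab› = c' arithmetically (interpretation is ours).
    interpretation = {"a": 7, "b": 5, "c": 12}

    def e(x, y):
        return x + y          # 'e' as the addition operation

    left = e(interpretation["a"], interpretation["b"])
    right = interpretation["c"]
    print(left == right)      # True: '7+5' and '12' stand for the same number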
EXERCISES
Which of the following are formulas?
a. R(f‹x›g‹x›)
b. ∀x[Fx → Fg‹x›]
c. ∀x[Fx → Fg‹xx›]
d. ∀x∀y[x=h‹y› → f‹xy›=f‹yx›]
e. ~∃y∃x[x=f‹y› ∧ y=f‹x›]
f. ~∃x∃yf‹xy›
g. S(xyz) ∨ ~S(xg‹y›z) ∨ ~S(g‹xy›g‹z›g‹yz›)
h. Fa ∧ ~Fb → [Fg‹a› → g‹b›≠g‹a›]
Complex terms made with operation symbols do not require any additional rules of derivation. All that is
needed is a clarification of previous rules regarding free occurrences of terms. Recall that if we are going
to apply Leibniz's Law, there is a restriction that the occurrences of terms being changed be free ones.
This is to forbid fallacious inferences like this one:
5. x=a <derived somehow>
6. ∃x(Fx ∧ Gx) <derived somehow>
7. ∃x(Fa ∧ Gx) 5 6 LL incorrect step
This inference is prevented by the restriction on Leibniz's Law that says that both the term being replaced
and its replacement be free occurrences at the location of replacement. The displayed inference violates
this constraint because it replaces a bound occurrence of 'x' by 'a'. When Leibniz's Law is applied to
complex terms we say that a complex term is considered not to be free if it contains any variables that are
bound by a quantifier outside the term; otherwise it is free. So, for example, this is fallacious:
5. h‹x›=a <derived somehow>
6. ∃x(Fh‹x› ∧ Gx) <derived somehow>
7. ∃x(Fa ∧ Gx) 5 6 LL incorrect step
This application of Leibniz's Law is incorrect since the occurrence of the term 'h‹x›' being replaced has its
'x' bound by a quantifier outside that term on line 6. The following is OK since no variable becomes
bound:
5. h‹y›=a <derived somehow>
6. ∃x(Fh‹y› ∧ Gx) <derived somehow>
7. ∃x(Fa ∧ Gx) 5 6 LL
Some arithmetical calculations with complex terms are just applications of the logic of identity. For
example, given that 2+3=5, and that 5+2=7 we can prove by the logic of identity alone that 7=(2+3)+2.
This inference has the form:
e‹ab› = c 2+3 = 5
e‹ca› = d 5+2 = 7
∴ d = e‹e‹ab›a› 7 = (2+3)+2
1. Show d = e‹e‹ab›a›
2. e‹e‹ab›a› = d pr1 pr2 LL <replacing 'c' in premise 2 by 'e‹ab›'>
3. d = e‹e‹ab›a› 2 sm dd
Other similar inferences cannot be proved by logic alone. For example, we cannot prove '2+3=3+2' by
logical principles alone, because the fact that the order of the terms flanking an addition sign doesn't
matter is not a principle of logic. This pattern doesn't hold, for example, for subtraction; we don't have
2−3=3−2.
A simple consequence of our laws of identity is a principle that is sometimes called Euclid's Law, because
it was used by the geometer Euclid. This law says that given an identity statement, you can infer another
identity statement where both sides of the new identity differ only with respect to terms identified in the
original identity statement. Some examples of Euclid's Law are:
x=y ∴ x²=y²
a=b ∴ a+1=b+1
x=a ∴ 3×x=3×a
a²=b² ∴ a²+a²=b²+b²
a+b=c+d ∴ (a+b)²=(c+d)²
This rule is only a convenience, since one can get along without it by combining the rule for self-identity
with Leibniz's Law. For example, we can validate this use of Euclid's Law:
a+b=c+d
∴ (a+b)²=(c+d)²
with this derivation, which does not appeal to Euclid's Law:
1. Show (a+b)²=(c+d)²
2. (a+b)²=(a+b)² sid
3. (a+b)²=(c+d)² 2 pr1 LL dd
Mathematical equations often appear in the formulation of scientific principles. For example, in physics
you might be given an equation saying that the force acting on a body is equal to the product of its mass
times its acceleration. The scientific equation for this is typically written:
F = ma
From the point of view of our logical notation, this is a universal generalization of the form:
∀x[f‹x› = b‹m‹x›a‹x››]
where 'b' represents the operation symbol for multiplication, and where 'f‹x›' means "the force acting on x",
'm‹x›' means "the mass of x", and 'a‹x›' means "the acceleration of x produced by f‹x›".
Operation symbols are not common outside of mathematics and science. They are sometimes used in
discussing kinship relations, where 'father of' and 'mother of' are treated as operation symbols. Here is a
set of principles of biological kinship, where 'Ax' means that x is male, 'Ex' that x is female, 'f‹x›' stands
for the father of x, 'e‹x›' for the mother of x, 'I(xy)' means that x and y are (full) siblings, 'B(xy)' that x is a
brother of y, and 'D(xy)' that x is a daughter of y:
P1 ∀xAf‹x› Everyone's father is male
P2 ∀xEe‹x› Everyone's mother is female
P3 ∀x∀y[I(xy) ↔ x≠y∧f‹x›=f‹y›∧e‹x›=e‹y›] (Full) siblings have the same mother and father
P4 ∀x∀y[B(xy) ↔ Ax ∧ I(xy)] A brother of someone is his/her male sibling
P5 ∀x∀y[D(xy) ↔ Ex ∧ [y=f‹x› ∨ y=e‹x›]] A daughter of a person is a female such that that person is her father or her mother
P6 ∀x[Ax ↔ ~Ex] Someone is male if and only if that person is not female
From these principles one can derive, for example:
∴ ∀x∀y[I(xy) ∧ ∃z x=f‹z› → B(xy)] Any father who is someone's sibling is that person's
brother
Derivations with operation symbols can be hard to do; this happens often in mathematics, and it accounts
for some of the reason that mathematics is thought to be difficult. An example of this is the typical
development of the mathematical theory of groups. A group is a set of things which may be combined
with a two-place operation symbolized by 'c'. In a group, each thing has an inverse; the inverse of a thing
is represented using a one-place inverting operation symbol 'd'. And there is a neutral element 'e'. There
are three axioms governing groups:
∀x∀y∀z c‹xc‹yz››=c‹c‹xy›z› Combination is associative
∀x c‹xe›=x Combining anything with e yields the original thing
∀x c‹xd‹x››=e Combining anything with its inverse yields e
These axioms can be satisfied by a wide variety of structures. For example, the positive and negative
integers together with zero satisfy these axioms when the method of combination is addition, the neutral
element is 0, and the inverse of anything is its negative; in arithmetical notation the axioms look like:
∀x∀y∀z x+(y+z)=((x+y)+z) Addition is associative
∀x x+0=x Adding zero to anything yields that thing
∀x x+(−x)=0 Adding the negative of anything to that thing yields 0
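Because the axioms are universally quantified, any finite candidate structure can be checked exhaustively. A minimal sketch (the two-element structure below, addition mod 2 on {0,1}, is our own choice of example; it does satisfy all three axioms):

    # Sketch: brute-force check of the group axioms on {0,1}, addition mod 2.
    U = [0, 1]
    def c(x, y): return (x + y) % 2    # combination
    def d(x): return (-x) % 2          # inverse (here each element is its own inverse)
    e = 0                              # neutral element

    print(all(c(x, c(y, z)) == c(c(x, y), z)
              for x in U for y in U for z in U))    # associativity: True
    print(all(c(x, e) == x for x in U))             # neutral element: True
    print(all(c(x, d(x)) == e for x in U))          # inverses: True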
A typical exercise in group theory is to show that the group axioms entail the "law of right-hand
cancellation": the general principle whose arithmetical analogue is '∀x∀v∀u [x+u=v+u → x=v]':
∀x∀y∀z c‹xc‹yz››=c‹c‹xy›z›
∀x c‹xe›=x
∀x c‹xd‹x››=e
∴ ∀x∀v∀u [c‹xu›=c‹vu› → x=v]
[The reader should try to think up how to do this derivation before looking below.]
EXERCISES
1. Give derivations for these theorems:
a. ∴ ∀xR(xf‹x›) → ∀xR(e‹x›f‹e‹x››)
b. ∴ ∀x∀y[x=f‹y› ∧ y=f‹x› → f‹f‹x››=x]
2. Show that these are consequences of the theory of biological kinship given above.
a. ~∃x[∃zB(xz) ∧ ∃zD(xz)] No brother is a daughter
b. ~∃x[∃z x=f‹z› ∧ ∃z x=e‹z›] No father is a mother
3. Show that these are consequences of the axioms for groups given above.
a. ∀x∀y∀z[c‹xy›=c‹zy› → x=z] <proved above>
b. ∀x[∀yc‹yx›=y → x=e]
c. ∀x c‹xd‹x››=c‹d‹x›x›
d. ∀x∀y∀z[c‹yx›=c‹yz› → x=z]
e. ∀x∀y[c‹xy›=e → y=d‹x›]
f. ∀x d‹d‹x››=x
Two-place operation symbols are treated like one-place ones, except that an interpretation must assign
things to pairs of members of the universe. Consider this invalid argument:
∀x∃yf‹xy›=x
∀x∃yf‹xy›≠x
∴ ∃x∃y[x≠y ∧ f‹xy›=f‹yx›]
This can be shown invalid with a counter-example having a two-membered universe: {0,1}
We give the following for f:
f‹00› = 0 f‹01› = 1 f‹10› = 0 f‹11› = 1
The first premise is true because if 'x' is chosen to be 0, 'y' can be chosen to be 0, and if 'x' is chosen to
be 1, 'y' can be chosen to be 1. The opposite choices make the second premise true. But the conclusion
is false, because when 'x' and 'y' are different, their order makes a difference for 'f'.
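Since the universe is finite, these judgments can also be checked by brute force. A minimal sketch (the dictionary encoding of the table for 'f' is ours):

    # Sketch: verifying the counter-example over the universe {0,1}.
    U = [0, 1]
    f = {(0, 0): 0, (0, 1): 1, (1, 0): 0, (1, 1): 1}

    pr1 = all(any(f[(x, y)] == x for y in U) for x in U)    # ∀x∃y f‹xy›=x
    pr2 = all(any(f[(x, y)] != x for y in U) for x in U)    # ∀x∃y f‹xy›≠x
    concl = any(x != y and f[(x, y)] == f[(y, x)]
                for x in U for y in U)
    print(pr1, pr2, concl)    # True True False: premises true, conclusion false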
If these judgments about the truth and falsity of the sentences in this counter-example are difficult, we can
employ the method of truth-functional expansions from chapter 3. We introduce names for the members
of the universe:
i0: 0
i1: 1
The first premise has as a partial expansion the conjunction:
pr1: ∃yf‹i0y›=i0 ∧ ∃yf‹i1y›=i1
Its full expansion results from expanding each conjunct into a disjunction:
pr1: [f‹i0i0›=i0 ∨ f‹i0i1›=i0] ∧ [f‹i1i0›=i1 ∨ f‹i1i1›=i1]
Here 'f‹i0i0›=i0' is true and 'f‹i0i1›=i0' is false, so the first conjunct is true; 'f‹i1i0›=i1' is false and
'f‹i1i1›=i1' is true, so the second conjunct is true as well.
So the first premise has a true expansion. The second premise has as a partial expansion:
pr2: ∃yf‹i0y›≠i0 ∧ ∃yf‹i1y›≠i1
and as a full expansion:
pr2: [f‹i0i0›≠i0 ∨ f‹i0i1›≠i0] ∧ [f‹i1i0›≠i1 ∨ f‹i1i1›≠i1]
Here 'f‹i0i0›≠i0' and 'f‹i1i1›≠i1' are false, while 'f‹i0i1›≠i0' and 'f‹i1i0›≠i1' are true, so each conjunct
has a true disjunct.
It, too, has a true expansion. The conclusion has as a partial expansion:
c: ∃y[i0≠y ∧ f‹i0y›=f‹yi0›] ∨ ∃y[i1≠y ∧ f‹i1y›=f‹yi1›]
and as a full expansion:
[i0≠i0 ∧ f‹i0i0›=f‹i0i0›] ∨ [i0≠i1 ∧ f‹i0i1›=f‹i1i0›] ∨ [i1≠i0 ∧ f‹i1i0›=f‹i0i1›] ∨ [i1≠i1 ∧ f‹i1i1›=f‹i1i1›]
The first and last disjuncts are false because 'i0≠i0' and 'i1≠i1' are false; the middle two are false
because 'f‹i0i1›' stands for 1 while 'f‹i1i0›' stands for 0. So every disjunct is false, and the expansion of
the conclusion is false.
As usual, it is a bit complicated to produce these expansions, but easy to check them for truth-value once
they are produced.
EXERCISES
1. Produce counter-examples to show that these arguments are not formally valid:
a. ∀x∃y a‹xy›=c
∀x∃y a‹yx›=c
∴ ∀x∀y a‹xy›=a‹yx›
b. ∀x∃y a‹xy›=c
∀x∃y a‹xy›=d
∴ ∃x∀y a‹xc›=y
c. ∀x∀y a‹xy›=a‹yx›
∴ ∀z∃x∃y a‹xy›=z
2. Show that these are not theorems of the theory of biological kinship given in the previous section:
a. ∀x[∃y x=f‹y› ∨ ∃y x=e‹y›] Everyone is a father or a mother
b. ~∃x x=e‹x› Nobody is their own mother
Some invalid arguments with operation symbols need infinite universes for a counter-example. Here is an
example:
∀xH(xg‹x›)
∀x∀y∀z[H(xy) ∧ H(yz) → H(xz)]
∴ ∃xH(xx)
A natural arithmetic counter-example is given by making the universe the non-negative integers {0, 1, 2, . . .},
making 'g' stand for the successor operation, that is, the operation which associates with each number the
number after it, and 'H' for the two-place relation of 'less than':
g‹x›: x+1 whatever you apply g to, you get that thing plus 1
H(xy): x<y H relates two things iff the first is less than the second
The first premise then says that every integer is less than the integer you get by adding one to it, and the
second says, as earlier, that less than is transitive. The conclusion falsely says that some integer is less
than itself.
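Though no finite universe will do for this argument, the interpretation itself can be spot-checked on an initial segment of the numbers. A minimal sketch (the sample {0,...,9} is our choice; the claim that an infinite universe is required rests on the reasoning in the text, not on this check):

    # Sketch: spot-checking the interpretation on {0,...,9}.
    N = range(10)
    def g(x): return x + 1        # successor
    def H(x, y): return x < y     # less than

    print(all(H(x, g(x)) for x in N))                         # premise 1 on the sample
    print(all((not (H(x, y) and H(y, z))) or H(x, z)
              for x in N for y in N for z in N))              # premise 2: transitivity
    print(any(H(x, x) for x in N))                            # conclusion: False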
Sometimes an infinite universe is not required, but it is convenient to use one if a counter-example with an
infinite universe springs to mind. That might happen with this argument:
∀x∀y e‹xy›=e‹yx›
∀x e‹xa›=x
∃x∃y f‹xy›≠f‹yx›
∀x f‹xa› =x
∴ ∃x[x≠a ∧ e‹xx›=x]
Take as the universe all integers, positive, negative, and zero. Then interpret the symbols as follows:
a: 0
e‹xy›: x+y
f‹xy›: x−y
On this interpretation, the first premise says that x+y is always the same as y+x. The second says that
x+0=x for any integer x. The third says that for some integers x and y, x−y is not the same as y−x.
The fourth says that x−0=x for any integer x. The conclusion says falsely that for some integer other
than 0, adding it to itself yields itself.
An infinite universe was not forced on us in this case. We could instead have taken as our universe the
numbers {0,1}, and interpreted as follows:
a: 0
e‹00› = 0 e‹01› = 1 e‹10› = 1 e‹11› = 0
f‹00› = 0 f‹01› = 0 f‹10› = 1 f‹11› = 0
These choices make all the premises true and the conclusion false.
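As usual, the finite tables can be verified exhaustively. A minimal sketch (the encoding is ours):

    # Sketch: checking the two-element counter-example just given.
    U = [0, 1]
    a = 0
    e = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
    f = {(0, 0): 0, (0, 1): 0, (1, 0): 1, (1, 1): 0}

    pr1 = all(e[(x, y)] == e[(y, x)] for x in U for y in U)
    pr2 = all(e[(x, a)] == x for x in U)
    pr3 = any(f[(x, y)] != f[(y, x)] for x in U for y in U)
    pr4 = all(f[(x, a)] == x for x in U)
    concl = any(x != a and e[(x, x)] == x for x in U)
    print(pr1, pr2, pr3, pr4, concl)    # True True True True False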
A very simple invalid argument that requires an infinite universe to show its invalidity is the following.
∀x∀y[g‹x›=g‹y› → x=y]
∴ ∃x g‹x› = a
Universe: {0, 1, 2, . . . }
g‹x›: x+1
a: 0
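Again the interpretation can be spot-checked on an initial segment, though the real point -- that an injective operation whose range misses something forces the universe to be infinite -- is established by reasoning, not by sampling:

    # Sketch: spot-checking on {0,...,9}.
    N = range(10)
    def g(x): return x + 1
    a = 0

    print(all(x == y
              for x in N for y in N if g(x) == g(y)))   # premise: g is one-to-one
    print(any(g(x) == a for x in N))                    # conclusion: False, 0 is no successor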
EXERCISES
2. Show that the third axiom for groups does not follow from the first two axioms; i.e. that this is invalid:
∀x∀y∀z c‹xc‹yz››=c‹c‹xy›z›
∀x c‹xe›=x
∴ ∀x c‹xd‹x››=e
3. Show that the second axiom for groups does not follow from the other two axioms; i.e. that this is invalid:
∀x∀y∀z c‹xc‹yz››=c‹c‹xy›z›
∀x c‹xd‹x››=e
∴ ∀x c‹xe›=x
4. Show that the first axiom for groups does not follow from the other two axioms; i.e. that this is invalid:
∀x c‹xe›=x
∀x c‹xd‹x››=e
∴ ∀x∀y∀z c‹xc‹yz››=c‹c‹xy›z›
CHAPTER 5 Answers to Exercises
1 IDENTITY
1. Say which of the following are formulas:
a. Fa ∧ Gb ∧ F=G No (identity cannot be flanked by predicates)
b. ∀x∀y[R(xy) → x=y] Yes
c. ∀x∀y[R(xy) ∧ x≠y ↔ S(yx)] Yes
d. R(xy) ∧ R(yx) ↔ x=y Yes
e. ∃x∃y[x=y ∧ y≠x] Yes
c. Betty saw at least two hyenas which each saw at most one giraffe.
∃x∃y(x≠y ∧ x is a hyena which saw at most one giraffe ∧ y is a hyena which saw at most one
giraffe ∧ Betty saw x ∧ Betty saw y)
∃x∃y(x≠y ∧ Hx ∧ x saw at most one giraffe ∧ Hy ∧ y saw at most one giraffe ∧ S(bx) ∧ S(by))
∃x∃y(x≠y ∧ Hx ∧ ∀z∀u(Gz∧Gu∧S(xz)∧S(xu) → z=u) ∧ Hy ∧ ∀z∀u(Gz∧Gu∧S(yz)∧S(yu) → z=u) ∧
S(bx) ∧ S(by))
3. Similarly show that one can symbolize 'Betty saw exactly two dragonflies' as:
∃x∃y[x≠y ∧ ∀z[Dz ∧ S(bz) ↔ z=x ∨ z=y]]
That is, show
∴ ∃x∃y[x≠y∧∀z[Dz∧S(bz) ↔ z=x∨z=y]] ↔ ∃x∃y[Dx∧S(bx)∧Dy∧S(by)∧y≠x∧∀z[Dz∧S(bz) → x=z∨y=z]]
It is straightforward but quite tedious to write out a derivation for this equivalence.
4. Show that the two symbolizations proposed above for only Cynthia sees Dorothy are equivalent:
∴ S(cd) ∧ ∀x[S(xd) → x=c] ↔ ∀x[S(xd) ↔ x=c]
b. ∃x∀y[Ay ↔ y=x]
∴ ∃x[Ax ∧ ~Bx] ↔ ~∃x[Ax ∧ Bx]
1. Show ∃x[Ax ∧ ~Bx] ↔ ~∃x[Ax ∧ Bx]
2. Show ∃x[Ax ∧ ~Bx] →~∃x[Ax ∧ Bx]
3. ∃x[Ax ∧ ~Bx] ass cd
4. Au ∧ ~Bu 3 ei
5. Show ~∃x[Ax∧Bx]
6. ∃x[Ax∧Bx] ass id
7. Av∧Bv 6 ei
8. ∀y[Ay ↔ y=w] pr1 ei
9. Au ↔ u=w 8 ui
10. u=w 4 s 9 bc mp
11. Av ↔ v=w 8 ui
12. v=w 7 s 11 bc mp
13. u=v 10 12 LL
14. ~Bv 4 s 13 LL
15. Bv 7 s 14 id
16. 5 cd
17. Show ~∃x[Ax ∧ Bx] → ∃x[Ax ∧ ~Bx]
18. ~∃x[Ax∧Bx] ass cd
19. ∀y[Ay ↔ y=i] pr1 ei
20. Ai ↔ i=i 19 ui
21. i=i sid
22. Ai 20 bc 21 mp
23. ∀x~[Ax ∧ Bx] 18 qn
24. ~[Ai ∧ Bi] 23 ui
25. ~Ai ∨ ~Bi 24 dm
26. ~Bi 22 dn 25 mtp
27. Ai ∧ ~Bi 22 26 adj
28. ∃x[Ax ∧ ~Bx] 27 eg cd
29. ∃x[Ax ∧ ~Bx] ↔ ~∃x[Ax ∧ Bx] 2 17 bc dd
c. ∃x∃y[x≠y ∧ Gx ∧ Gy]
∀x[Gx → Hx]
∴ ~∃x∀y[Hy ↔ y=x]
d. ∃x∃y[Fx ∧ Fy ∧ x≠y]
∃x∃y[Gx ∧ Gy ∧ x≠y]
∴ ∃x∃y[Fx ∧ Gy ∧ x≠y]
3. Symbolize these arguments and produce derivations to show that they are valid.
1. Show Ib
2. Eu∧Iu pr2 ei
3. Eu ↔ u=b ∨ u=c pr1 ui
4. u=b ∨ u=c 2 s 3 bp
5. ~u=c 2 s pr3 LL <contrapositive form of LL>
6. u=b 4 5 mtp
7. Ib 2 s 6 LL dd
3. Lois sees Clark at a time if and only if she sees Superman at that time.
∴ Clark is Superman
∀x[Tx → [S(icx) ↔ S(iex)]] i: Lois c: Clark e: Superman Tx: x is a time S(xyz): x sees y at z
∴ c=e
Universe: {0, 1, 2, 3, 4, 5} <4 and 5 could be omitted>
i: 0
c: 1
e: 2
T: {3, 4, 5}
S: {<0,1,3>, <0,2,3>, <0,1,4>, <0,2,4>}
5 OPERATION SYMBOLS
Which of the following are formulas?
a. R(f‹x›g‹x›) Yes
b. ∀x[Fx → Fg‹x›] Yes
c. ∀x[Fx → Fg‹xx›] Yes
d. ∀x∀y[x=h‹y› → f‹xy›=f‹yx›] Yes
e. ~∃y∃x[x=f‹y› ∧ y=f‹x›] Yes
f. ~∃x∃yf‹xy› No. There is no predicate letter.
g. S(xyz) ∨ ~S(xg‹y›z) ∨ ~S(g‹xy›g‹z›g‹yz›) Yes
h. Fa ∧ ~Fb → [Fg‹a› → g‹b›≠g‹a›] Yes
2. Show that these are consequences of the theory of biological kinship given above.
a. ~∃x[∃zB(xz) ∧ ∃zD(xz)] No brother is a daughter
∀x∀y∀z c‹xc‹yz››=c‹c‹xy›z›
∀x c‹xe›=x
∀x c‹xd‹x››=e
∴ ∀x c‹xd‹x››=c‹d‹x›x›
1. Show ∀x c‹xd‹x››=c‹d‹x›x›
2. Show c‹xd‹x››=c‹d‹x›x›
3. Show ∀y c‹yc‹d‹x›x›› = y
4. Show c‹yc‹d‹x›x›› = y
5. c‹yd‹x›› = c‹yd‹x›› sid
6. c‹yc‹d‹x›e›› = c‹yd‹x›› pr2 ui 5 LL
7. c‹yc‹d‹x›c‹xd‹x›››› = c‹yd‹x›› pr3 ui 6 LL
8. c‹yc‹c‹d‹x›x›d‹x››› = c‹yd‹x›› pr1 ui ui ui 7 LL
9. c‹c‹yc‹d‹x›x››d‹x›› = c‹yd‹x›› pr1 ui ui ui 8 LL
10. c‹yc‹d‹x›x›› = y Group Theorem a ui ui ui 9 mp
11. 10 dd
12. 4 ud
13. ∀yc‹y c‹d‹x›x››=y → c‹d‹x›x›=e Group Theorem b ui
14. c‹d‹x›x› = e 13 3 mp
15. c‹xd‹x›› = e pr3 ui
16. c‹xd‹x›› = c‹d‹x›x› 14 15 LL
17. 16 dd
18. 2 ud
1. Show ∀x d‹d‹x››=x
2. Show d‹d‹x››=x
3. c‹xd‹x››=e pr3 ui
4. e= c‹xd‹x›› 3 sm
5. c‹d‹d‹x››d‹x››=e pr3 ui Group Theorem c ui LL
6. c‹d‹d‹x››d‹x››= c‹xd‹x›› 4 5 LL
7. c‹d‹d‹x››d‹x››= c‹xd‹x››→ d‹d‹x››=x Group theorem a ui ui ui
8. d‹d‹x››=x 6 7 mp dd
9. 2 ud
b. ∀x∃y a‹xy›=c
∀x∃y a‹xy›=d
∴ ∃x∀y a‹xc›=y
Universe: {0, 1}
c: 0
d: 1
a‹00› ⇒ 0 a‹11› ⇒ 1 a‹01› ⇒ 1 a‹10› ⇒ 0
c. ∀x∀y a‹xy›=a‹yx›
∴ ∀z∃x∃y a‹xy›=z
Universe: {0,1}
a‹00› ⇒ 0 a‹11› ⇒ 0 a‹01› ⇒ 0 a‹10› ⇒ 0
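Both of these finite counter-examples can be verified by brute force. A minimal sketch (the dictionary encodings are ours):

    # Sketch: checking the answers to exercises 1b and 1c over {0,1}.
    U = [0, 1]

    # 1b: 'c' names 0, 'd' names 1, with the table for 'a' given above.
    a_b = {(0, 0): 0, (1, 1): 1, (0, 1): 1, (1, 0): 0}
    c, d = 0, 1
    print(all(any(a_b[(x, y)] == c for y in U) for x in U),    # premise 1: True
          all(any(a_b[(x, y)] == d for y in U) for x in U),    # premise 2: True
          any(all(a_b[(x, c)] == y for y in U) for x in U))    # conclusion: False

    # 1c: 'a' is constantly 0.
    a_c = {(x, y): 0 for x in U for y in U}
    print(all(a_c[(x, y)] == a_c[(y, x)] for x in U for y in U),          # premise: True
          all(any(a_c[(x, y)] == z for x in U for y in U) for z in U))    # conclusion: False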
2. Show that these are not theorems of the theory of biological kinship given in the previous section:
a. ∀x[∃y x=f‹y› ∨ ∃y x=e‹y›] Everyone is a father or a mother
P1 ∀xAf‹x› Everyone's father is male
P2 ∀xEe‹x› Everyone's mother is female
P3 ∀x∀y[I(xy) ↔ x≠y∧f‹x›=f‹y›∧e‹x›=e‹y›] (Full) Siblings have the same mother and father
P4 ∀x∀y[B(xy) ↔ Ax ∧ I(xy)] A brother of someone is his/her male sibling
P5 ∀x∀y[D(xy) ↔ Ex ∧ [y=f‹x› ∨ y=e‹x›]] A daughter of a person is a female such that that
person is her father or her mother
P6 ∀x[Ax ↔ ~Ex] Someone is male if and only if that person is not
female
Universe: {0, 1, 2}
A: {0,2}
E: {1}
f‹0› ⇒ 0 f‹1› ⇒ 0 f‹2› ⇒ 0
e‹0› ⇒ 1 e‹1› ⇒ 1 e‹2› ⇒ 1
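A brute-force check of this model confirms the answer. Only P1, P2, and P6 constrain 'A', 'E', 'f', and 'e' directly; P3-P5 can be read as definitions of 'I', 'B', and 'D', so they are automatically satisfiable. A minimal sketch:

    # Sketch: checking the kinship counter-example over {0,1,2}.
    U = [0, 1, 2]
    A = {0, 2}                # male
    E = {1}                   # female
    f = {0: 0, 1: 0, 2: 0}    # father of
    e = {0: 1, 1: 1, 2: 1}    # mother of

    print(all(f[x] in A for x in U))                    # P1: True
    print(all(e[x] in E for x in U))                    # P2: True
    print(all((x in A) == (x not in E) for x in U))     # P6: True
    print(all(any(x == f[y] for y in U) or any(x == e[y] for y in U)
              for x in U))                              # claim (a): False, 2 is neither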
2. Show that the third axiom for groups does not follow from the first two axioms; i.e. that this is invalid:
∀x∀y∀z c‹xc‹yz››=c‹c‹xy›z› Universe: {0, 1, 2, . . .}
∀x c‹xe›=x                               c‹xy›: x+y
∴ ∀x c‹xd‹x››=e                          e: 0
                                         d‹x›: x+x
The first premise says truly that addition is associative. The second says truly that adding 0 to a number
yields that number itself. The conclusion says falsely that x+(x+x)=0 for every integer x in the domain.
3. Show that the second axiom for groups does not follow from the other two axioms; i.e. that this is invalid:
∀x∀y∀z c‹xc‹yz››=c‹c‹xy›z› Universe: {0, 1, 2, . . .}
∀x c‹xd‹x››=e                            c‹xy›: x⋅y <multiplication>
∴ ∀x c‹xe›=x                             d‹x›: x−x
                                         e: 0
The first premise says truly that multiplication is associative. The second says truly that multiplying any
integer by the result of subtracting it from itself (that is, by 0) yields 0. The conclusion says falsely that
multiplying any integer by 0 yields that integer.
4. Show that the first axiom for groups does not follow from the other two axioms; i.e. that this is invalid:
∀x c‹xe›=x Universe: {. . . , -2, -1, 0, 1, 2, . . .}
∀x c‹xd‹x››=e                            c‹xy›: x−y
∴ ∀x∀y∀z c‹xc‹yz››=c‹c‹xy›z›             d‹x›: x
e: 0
The first premise says truly that subtracting zero from any integer yields that integer. The second premise
says truly that subtracting any integer from itself yields zero. The conclusion says falsely that subtraction
is associative. It's not associative; for example, (5-2)-1=2 whereas 5-(2-1)=4.
Chapter Six
Definite Descriptions
1 DEFINITE DESCRIPTIONS
A definite description is a singular phrase of English beginning with the definite article 'the', used as if to
refer to a single thing. Examples are:
the winner of American Idol
the girl who won a medal
the book that Betty wrote
the assignment from last week
the even prime
the integer between 0 and 2
All theories about definite descriptions that treat them as terms agree on how to symbolize these phrases.
First, there is a symbol called the definite description operator; it is the Greek letter iota ('ι'), rotated 180
degrees and often stylized as:
℩
This operator combines with a variable, just as a quantifier sign does, and the operator plus variable
combines with a formula, as a quantifier does. But instead of making another formula, it makes a
complex term. To symbolize the definite description 'the book that Betty wrote' we would put '℩x' on the
front of the formula 'x is a book and Betty wrote x' to make:
℩x[Bx ∧ W(bx)]
We can read '℩x' as 'the thing such that', and so we can read that definite description as:
the thing such that it is a book and Betty wrote it
Other examples are:
the winner of American Idol ℩xW(xa)
the girl who won a medal ℩x[Gx ∧ ∃y[My ∧ W(xy)]]
the even prime ℩x[Ex ∧ Ix]
the integer between 0 and 2 ℩x[Ix ∧ B(x02)] B(xyz): x is between y and z
There are different theories concerning the exact logical status of definite descriptions. We focus here on
what is probably the simplest account, according to which definite descriptions are complex terms, on a
par with proper names and with terms built up using operation symbols.
Formation rule:
Definite descriptions
If '○' is a formula, '℩x○' is a term.
On this view, definite descriptions occur in formulas exactly where terms may occur. Some examples of
formulas that contain definite descriptions and the definite descriptions that they contain are:
A℩xFx ℩xFx
S(℩yFy ℩zGz) ℩yFy ℩zGz
B℩x[Fx∧Gx] → C℩xHx ℩x[Fx∧Gx] ℩xHx
B℩x~Dx ∨ ~B℩xR(xa) ℩x~Dx ℩xR(xa)
B℩xR(x ℩yS(xy)) ℩yS(xy) ℩xR(x ℩yS(xy)) <the latter contains the former>
a = ℩xR(ax) ℩xR(ax)
~℩xFx=℩xGx ℩xFx ℩xGx
∀x∀y[A℩zS(zx) ∧ B℩uS(yu)] ℩zS(zx) ℩uS(yu)
EXERCISES
2 SYMBOLIZING SENTENCES WITH DEFINITE DESCRIPTIONS
Symbolizing a sentence containing a definite description involves two tasks. One task is to figure out how
to construct the definite description. The other task is figuring out where to put the definite description in
the symbolization of the sentence containing it. This second task is easy; you just treat the definite
description as you would any other term, simple or complex. So you put the definite description in the
same place that you would put a name, if there were a name instead of the definite description. For
example, if you were symbolizing ‘Anna sees the giraffe’, you could consider instead how to symbolize
‘Anna sees Fido’. If you do this you would write:
S(af)
You get the symbolization of ‘Anna sees the giraffe’ by putting the definite description for 'the giraffe' in
place of 'f':
S(a ℩xGx)
In more complex cases the same principle applies. If you are wondering how to symbolize ‘Every giraffe
that Anna owns likes the ferocious hyena’ then just ask yourself how to symbolize a sentence with ‘Fido’ in
place of ‘the ferocious hyena’:
Every giraffe that Anna owns likes Fido
∀x(Gx∧O(ax) → L(xf))
Then put ‘℩x(Fx∧Hx)’ in place of ‘f’:
∀x(Gx ∧ O(ax) → L(x ℩y(Fy∧Hy)))
where the variable ‘y’ has been used in the definite description instead of ‘x’ to avoid confusion with the ‘x’
in the quantifier. (Actually, it would be OK to use ‘x’ in this case. It is advisable to use ‘y’ so you don’t
have to figure out whether there might be a problem.)
The other part of symbolizing a definite description is determining what its contents should be. This can
be done if one considers what the part after the '℩x' would be if the sentence you are symbolizing had a
quantifier word such as 'every' instead of 'the'. Suppose the sentence is:
Anna likes the ferocious giraffe
and you want to know how to symbolize the ‘ferocious giraffe’ part of the description. In such a case
consider how you would symbolize a sentence containing 'every ferocious giraffe'. Your symbolization
would contain a form like this:
∀x(Fx∧Gx → □)
The formula to use for 'ferocious giraffe' in the definite description is what occurs in the antecedent,
'Fx∧Gx', giving '℩x(Fx∧Gx)'. Recall the recipe given earlier for reading formulas in stilted English:
Read any universal quantifier as "everything is such that", while reading any variable
that it binds as a pronoun which has that 'everything' as its antecedent.
Read any existential quantifier as "something is such that" while reading any
variable that it binds as a pronoun which has the 'something' as its antecedent.
We can add a provision for definite descriptions to the recipe given in the box above:
Read any prefix of the form '℩x' as "the thing such that", while reading any variable
that it binds as a pronoun which has 'thing' as its antecedent.
So this formula:
L(a ℩y(Fy∧Hy))
can be read as:
Anna likes the thing such that (it is ferocious and it is a hyena)
Some examples of sentences that can be symbolized with definite descriptions are:
The cat that Maria sees is larger than the dog that she sees.
L(℩x[Cx∧S(mx)] ℩x[Dx∧S(mx)])
Everyone parked in the space that they saw. Ex: x is a person Ay: y is a space
∀x[Ex → P(x ℩y[Ay∧S(xy)])] P(xy): x parked in y S(xy): x saw y
Definite descriptions are often used to symbolize possessive constructions, using 'have' to indicate
possession. For example, 'Fred's car' means 'the car that Fred has'. The sentence 'Maria saw Fred's car'
could be symbolized:
S(m ℩xH(fx))
Definite descriptions are also naturally used with superlative constructions. The strong reading of the
phrase 'the tallest coat rack' means something like 'the coat rack which is taller than every other coat
rack':
℩x[Cx ∧ ∀y[Cy∧y≠x → T(xy)]]
EXERCISES
1. Symbolize each of the following:
Anna dated the tallest spy.
The person who put a bug in my drink will pay.
Beatrice likes the man who bought her a ring.
Every giraffe loves the keeper who feeds it.
Every giraffe loves the tallest keeper who feeds it.
The woman who studied did better than the woman who didn't study.
Everybody honors the woman who gave birth to her/him.
The prize will be awarded to the person who spells the word correctly.
Every woman parked her own car.
2. Read each symbolized sentence in stilted English using the recipe given above.
3 DERIVATIONAL RULES FOR DEFINITE DESCRIPTIONS: PROPER DESCRIPTIONS
All theories that treat definite descriptions as terms agree on how to treat proper definite descriptions.
They obey the rule that if the description is proper, the descriptive part is true when its variables (the
occurrences bound by '℩') are replaced by the definite description itself. For example, if 'the book that
Betty wrote' is proper, then the book that Betty wrote is indeed a book, and Betty did write it:
∃z∀x[Bx∧W(bx) ↔ x=z] properness of 'book that Betty wrote'
∴ B℩x[Bx∧W(bx)] ∧ W(b ℩x[Bx∧W(bx)]) the book that Betty wrote is a book ∧ Betty wrote the
book that Betty wrote
The pattern is always the same: if there is a unique such-and-such, then the such-and-such is such-and-
such. Stated schematically, the rule for proper descriptions (rule prd) is:
∃z∀x[○ ↔ x=z] ∴ the result of putting '℩x○' for the free occurrences of 'x' in '○'
When 'such-and-such' is complex, so is the application of this rule.
<Constraints: This rule is subject to the restriction that 'z' is not free in '○', and that no variable that
is free in '℩x○' gets bound when '℩x○' is put in place of a free occurrence of 'x' in '○'.>
EXERCISES
1. What can be inferred from the statements that say that these definite descriptions are proper?
The spy who loved me
The tallest giraffe to fly to the moon
The number whose square root is the same as its cube root
The boy such that he and the girl who saw him both sang
The largest gift given to UCLA
The big blue tuba
5 DERIVATIONAL RULES FOR DEFINITE DESCRIPTIONS: IMPROPER DESCRIPTIONS
Our artificial technique for handling improper definite descriptions was suggested over a century ago by
the logician Gottlob Frege. The technique is to arbitrarily choose something for all improper descriptions
to stand for. We then assume that any improper definite description stands for this thing. This thing can
be anything -- the number zero, your pet dog, the tallest giraffe in the San Diego Zoo, the left front burner
of the stove on which I cooked oatmeal today. Since the thing is arbitrarily chosen, we will not identify it in
any further way, say by assigning a simple name to it, or applying a predicate to it. Any name that we are
using might actually name the artificially chosen thing, but nothing in our logic tells us so.
In spite of not knowing what it is, we can easily refer to this arbitrarily chosen thing with an appropriate
complex term. We just need to use a definite description that we know to be improper. A natural example
is to use the definite description:
℩x x≠x the thing that is not identical to itself
Since nothing can fail to be identical to itself, this definite description has to be improper. This is a logical
truth, for the statement that the description is improper is:
∴ ~∃z∀x[x≠x ↔ x=z]
and we can easily produce a derivation to show that this is a theorem of logic:
1. Show ~∃z∀x[x≠x ↔ x=z]
2. ∃z∀x[x≠x ↔ x=z] ass id
3. ℩x x≠x ≠ ℩x x≠x 2 prd putting '℩x x≠x' in for both occurrences
4. ℩x x≠x = ℩x x≠x sid of 'x' in 'x≠x'.
5. 3 4 id
Our single rule for improper definite descriptions (rule imd) says that if a definite description is improper, it
refers to whatever '℩x x≠x' refers to:
~∃z∀x[○ ↔ x=z] ∴ ℩x○ = ℩x x≠x
A description must either be proper or improper, although based on information given to us we may not
know which. We do know this much, however: if the definite description does not refer to the chosen
object, it must be proper:
℩x○ ≠ ℩x x≠x ∴ ∃z∀x[○ ↔ x=z]
This is a trivial consequence of the rule for improper descriptions:
1. Show ∃z∀x[○ ↔ x=z]
2. ~∃z∀x[○ ↔ x=z] ass id
3. ℩x○ = ℩x x≠x 2 imd
4. ℩x○ ≠ ℩x x≠x pr1
5. 3 4 id
One must be careful not to make a similar but invalid inference. Given that
℩x○ = ℩x x≠x
one may not infer from this that '℩x○' is improper. Since the chosen object may be anything at all, it might
be the dog that Cynthia bought. In this case the definite description 'the dog that Cynthia bought' refers
properly to the chosen object. That is, we have:
℩x[Dx∧B(cx)] = ℩x x≠x
where the definite description '℩x[Dx∧B(cx)]' is proper:
∃z∀x[Dx∧B(cx) ↔ x=z]
So the chosen object can be referred to by proper descriptions, in addition to improper ones.
Some applications of the rule for improper descriptions are relatively straightforward. An example is:
∃x∃y[x≠y∧Fx∧Fy]
~∃xGx
∴ ℩xFx = ℩xGx
1. Show ℩xFx = ℩xGx
2. Show ~∃z∀x[Fx ↔ x=z]
3. ∃z∀x[Fx ↔ x=z] ass id
4. ∀x[Fx ↔ x=i] 3 ei
5. u≠v∧Fu∧Fv pr1 ei ei
6. Fu ↔ u=i 4 ui
7. u=i 5 s s 6 bp
8. Fv ↔ v=i 4 ui
9. v=i 5 s 8 bp
10. u=v 7 9 LL
11. u≠v 5 s s 10 id
12. Show ~∃z∀x[Gx ↔ x=z]
13. ∃z∀x[Gx ↔ x=z] ass id
14. ∀x[Gx ↔ x=j] 13 ei
15. Gj ↔ j=j 14 ui
16. Gj sid 15 bp
17. ∃xGx 16 eg
18. ~∃xGx pr2 17 id
19. ℩xFx = ℩x x≠x 2 imd rule imd
20. ℩xGx = ℩x x≠x 12 imd rule imd
21. ℩xFx = ℩xGx 19 20 LL dd
Often you will be given an argument whose premises contain definite descriptions that may be either
proper or improper. If you can prove that a definite description is proper, you can often use that to prove
other desired things. Likewise, if you can prove that a definite description is improper, you can often use
that to prove other desired things. But sometimes you cannot prove either of these things, because not
enough information is given to decide. You may still be able to use both strategies just described: (i) infer
what you want to infer using the assumption that the definite description is proper, and also (ii) infer the
same thing using the assumption that the definite description is improper. If you
can do this, you can use the rule for separation of cases to get the desired conclusion.
∀x[Hx → Gx]
F℩xFx → G℩xFx
~H℩xFx → ℩xFx ≠ ℩xx≠x
∴ ∃xGx
1. Show ∃xGx
2. Show ∃z∀x[Fx ↔ x=z] → ∃xGx
3. ∃z∀x[Fx ↔ x=z] ass cd
4. F℩xFx 3 prd
5. G℩xFx pr2 4 mp
6. ∃xGx 5 eg cd
7. Show ~∃z∀x[Fx ↔ x=z] → ∃xGx
8. ~∃z∀x[Fx ↔ x=z] ass cd
9. ℩xFx = ℩xx≠x 8 imd
10. H℩xFx 9 dn pr3 mt
11. G℩xFx pr1 ui 10 mp
12. ∃xGx 11 eg cd
13. ∃xGx 2 7 sc dd rule sc
[You may be tempted to use separation of cases with the cases being identity with the chosen object
("℩xFx=℩xx≠x") and non-identity with the chosen object ("℩xFx≠℩xx≠x"). But this is not often useful,
because if a definite description is identical to the chosen object, you still don't know whether it is proper or
not, and so you can't use either rule prd or rule imd. It is usually better to take the cases to be the
statements that the definite description is proper, and that it is improper.]
EXERCISES
Produce derivations for the following arguments.
1. ∃x[Fx∧Gx∧Hx]
∃x[Fx∧Gx∧~Hx]
∴ ℩xFx = ℩xGx
2. Fa ∧ Fb ∧ a≠b
∀x[Fx ∧ x≠a ↔ Gx]
℩xGx≠℩xFx
∴ ∀x[Gx → x=b]
6 INVALIDITIES WITH DEFINITE DESCRIPTIONS
Here is an example of an invalid argument:
~∀x[Fx ↔ Gx]
∴ ℩xFx ≠ ℩xGx
Universe: {0, 1, 2}
F: {}
G: {1,2}
℩x x≠x: 0
The premise is true because F and G don't agree everywhere. (In fact, they agree nowhere.) The
conclusion is false because both '℩xFx' and '℩xGx' are improper -- 'Fx' is true of nothing, and 'Gx' is
true of two things -- so both definite descriptions stand for the chosen object, 0.
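This explanation can be confirmed mechanically, reusing the 'denotation' helper sketched earlier (the encoding is ours):

    def denotation(universe, satisfies, chosen):
        witnesses = [x for x in universe if satisfies(x)]
        return witnesses[0] if len(witnesses) == 1 else chosen

    U = [0, 1, 2]
    F = set()
    G = {1, 2}
    premise = not all((x in F) == (x in G) for x in U)
    conclusion = denotation(U, lambda x: x in F, 0) != denotation(U, lambda x: x in G, 0)
    print(premise, conclusion)    # True False: premise true, conclusion false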
EXERCISES
Give counter-examples to show that these arguments are invalid:
1. A℩xAx
∴ ∀x∀y(Ax∧Ay→x=y)
2. ∀x∃yR(xy)
∃x∀y[R(xy)→y≠a]
∴ ℩xR(xx) = ℩x x≠x
3. ℩xAx≠℩xBx
℩xBx≠℩xCx
d‹a›= ℩xBx
∀x d‹d‹x››=d‹x›
∴ ℩xAx≠℩xCx
8 COUNTER-EXAMPLES WITH INFINITE UNIVERSES
There are invalid arguments containing definite descriptions whose counter-examples require infinite
universes. No new techniques are involved in giving such counter-examples.
Here is an example of an invalid argument that can't be given a counter-example with a finite universe:
∀y∃z∀x[R(yx) ↔ x=z]
∀x∀y[x≠y → ℩zR(xz)≠℩zR(yz)]
∴ ∀x∃yR(yx)
A counter-example that shows it to be invalid is:
Universe: {0, 1, 2, . . . }
R(xy): y=x+1
The first premise is true here because for whatever you pick for y, there is something -- namely, y's
successor -- such that everything is one greater than y iff it is y's successor. The second premise is true
because whenever there are two different things, the unique thing that is one greater than the first is
something other than the unique thing that is one greater than the second. And the conclusion is false
because 0 is such that there is nothing such that it plus 1 is 0.
EXERCISES
Give counter-examples with infinite universes to show that these arguments are invalid:
1. ∀x∀y[x≠y → d‹x›≠d‹y›]
∴ ℩x~∃y x=d‹y› = ℩x x≠x
2. ∀xR(x c‹x›)
∀x∀y∀z[R(xy)∧R(yz)→R(xz)]
∴ ∀x∃yx=℩zR(yz)
3. ∀x∀y[b‹x›=b‹y›→x=y]
∃y∀x~y=℩z[z=b‹x›]
∴ ∃x∃y[x≠y∧∀z[b‹z›≠x∧b‹z›≠y]]
CHAPTER 6 Answers to Exercises
1. What can be inferred from the statements that say that these definite descriptions are proper?
The spy who loved me
There is one and only one spy who loved me.
The tallest giraffe to fly to the moon
There is one and only one giraffe which flew to the moon and is taller than every other giraffe
which flew to the moon.
The number whose square root is the same as its cube root
There is one and only one number such that its square root is its cube root.
<Note that this is false, since the condition is met by both 0 and 1>
The boy such that he and the girl who saw him both sang
There is one and only one thing such that it is a boy and there is one and only one girl who saw it
and they both sang.
The largest gift given to UCLA
There is one and only one gift given to UCLA which is larger than any other gift given to UCLA.
The big blue tuba
There is one and only one thing which is big and blue and is a tuba.
(i) F℩x[Hx∧∃y[Iy∧C(yx)]]
∃z∀x[Hx∧∃y[Iy∧C(yx)] ↔ x=z]
∴ ∃x[Hx∧Fx]
1. Show ∃x[Hx∧Fx]
2. H℩x[Hx∧∃y[Iy∧C(yx)]]∧∃y[Iy∧C(y℩x[Hx∧∃y[Iy∧C(yx)]])] pr2 prd
3. H℩x[Hx∧∃y[Iy∧C(yx)]] ∧ F℩x[Hx∧∃y[Iy∧C(yx)]] 2 s pr1 adj
4. ∃x[Hx∧Fx] 3 eg dd
(ii) F℩x[Hx∧∃y[Iy∧C(yx)]]
∴ ∃x[Hx∧Fx]
INVALID
Universe: {0,1}
F: {0}
H: {}
I: {}
C: {}
chosen object: 0
The premise is true because the description is improper, so it refers to 0, which 'F' is true of.
The conclusion is false because nothing is both H and F.
3. The cat that Maria owns chased a mouse that ate the fig.
∴ A cat that Maria owns chased a mouse that ate a fig.
2. ∀x∃yR(xy)
∃x∀y[R(xy)→y≠a]
∴ ℩xR(xx) = ℩xx≠x
Universe: {0, 1, 2}
R: {<0,1>,<1,1>,<2,0>}
a: 1
chosen object: 0
The description '℩xR(xx)' is proper, since 'R' holds of exactly one pair of identical things: <1,1>. The definite
description, '℩xR(xx)', then stands for 1, which is not the chosen object, 0, so the conclusion is false. The
first premise is clearly true, and the second is true when 'x' is taken to be 2.
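A brute-force check of this model, again with the 'denotation' helper (the encoding is ours):

    def denotation(universe, satisfies, chosen):
        witnesses = [x for x in universe if satisfies(x)]
        return witnesses[0] if len(witnesses) == 1 else chosen

    U = [0, 1, 2]
    R = {(0, 1), (1, 1), (2, 0)}
    a = 1
    pr1 = all(any((x, y) in R for y in U) for x in U)
    pr2 = any(all(y != a for y in U if (x, y) in R) for x in U)
    concl = denotation(U, lambda x: (x, x) in R, 0) == 0    # the conclusion's identity
    print(pr1, pr2, concl)    # True True False: premises true, conclusion false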
3. ℩xAx≠℩xBx
℩xBx≠℩xCx
d‹a›= ℩xBx
∀x d‹d‹x››=d‹x›
∴ ℩xAx≠℩xCx
Universe: {0,1}
A: {1}
C: {1}
B: {0}
a: 0
d‹0›=0 d‹1›=1
(chosen object: 0, though this is not important since all descriptions are proper)
Give counter-examples with infinite universes to show that these arguments are invalid:
1. ∀x∀y[x≠y → d‹x›≠d‹y›]
∴ ℩x~∃y x=d‹y› = ℩x x≠x
Universe: {-1, 0, 1, 2, . . . }
d‹x›: x+1
chosen object: 0
The description '~∃y x=d‹y›' in this universe is uniquely true of -1, so '℩x~∃y x=d‹y›' refers to -1, which is
distinct from 0 (the chosen object), so the conclusion is false. The premise is true since whenever x and y
are different, so are x+1 and y+1.
2. ∀xR(x c‹x›)
∀x∀y∀z[R(xy)∧R(yz)→R(xz)]
∴ ∀x∃yx=℩zR(yz)
Universe: {0, 1, 2, . . . }
R(xy): x<y
c‹x›: x+1
chosen object: 0
The first premise is true because everything is less than it plus one. The second premise is true because
< is transitive. For the conclusion, note that no matter what y is, '℩zR(yz)' is improper, because there are
things that y is less than. So '℩zR(yz)' always refers to the chosen object, 0, and not everything is identical
to 0.
3. ∀x∀y[b‹x›=b‹y› → x=y]
∃y∀x~y=℩z[z=b‹x›]
∴ ∃x∃y[x≠y∧∀z[b‹z›≠x∧b‹z›≠y]]
Universe: {0, 1, 2, . . . }
b‹x›: x+1
The first premise, with b‹x›=x+1, is clearly true. Notice that for any value of 'x', the description '℩z[z=b‹x›]'
is proper, because there is always a unique thing got from x by adding 1 to it. And the description never
stands for 0, since there is nothing in the universe that makes 0 when you add 1 to it. So the second
premise is true, taking 'y' to be 0. The conclusion is false since there aren't two things in the universe that
cannot be gotten by adding 1 to something.