
2.3: Types of Thinking - Mathematics


Thinking Skill | What It Involves
1. Remembering and Recalling | Retrieving or repeating information or ideas from memory. This is the first and most basic thinking skill you develop (starting as a toddler with learning numbers, letters, and colors).
2. Understanding | Interpreting, constructing meaning, inferring, or explaining material from written, spoken, or graphic sources. Reading is the most common understanding skill; these skills are developed starting with early education.
3. Applying | Using learned material or implementing material in new situations. This skill is commonly used starting in middle school (in some cases earlier).
4. Analyzing | Breaking material or concepts into key elements and determining how the parts relate to one another or to an overall structure or purpose. Mental actions included in this skill are examining, contrasting or differentiating, separating, categorizing, experimenting, and deducing. You most likely started developing this skill in high school (particularly in science courses) and will continue to practice it in college.
5. Evaluating | Assessing, making judgments, and drawing conclusions from ideas, information, or data. Critiquing the value and usefulness of material. This skill encompasses most of what is commonly referred to as critical thinking; it will be called on frequently during your college years and beyond. Critical thinking is the first focus of this chapter.
6. Creating | Putting parts together or reorganizing them in a new way, form, or product. This process is the most difficult mental function. This skill will make you stand out in college and is in very high demand in the workforce. Creative thinking is the second focus of this chapter.
Skill Set | How You Used It in the Past Three Weeks | Comments
Remembering and Recalling
Understanding
Applying
Analyzing
Evaluating
Creating
Skill Set | Verbs
1. Remembering and Recalling | Bookmark, count, describe, draw, enumerate, find, google, identify, label, list, match, name, quote, recall, recite, search, select, sequence, tell, write
2. Understanding | Blog, conclude, describe, discuss, explain, generalize, identify, illustrate, interpret, paraphrase, predict, report, restate, review, summarize, tell, tweet
3. Applying | Apply, articulate, change, chart, choose, collect, compute, control, demonstrate, determine, do, download, dramatize, imitate, implement, interview, install (as in software), participate, prepare, produce, provide, report, role-play, run (software), select, share, show, solve, transfer, use
4. Analyzing | Analyze, break down, characterize, classify, compare, contrast, debate, deduce, diagram, differentiate, discriminate, distinguish, examine, infer, link, outline, relate, research, reverse-engineer, separate, subdivide, tag
5. Evaluating | Appraise, argue, assess, beta test, choose, collaborate, compare, contrast, conclude, critique, criticize, decide, defend, “friend/de-friend,” evaluate, judge, justify, network, post, predict, prioritize, prove, rank, rate, review, select, support
6. Creating | Adapt, animate, blog, combine, compose, construct, create, design, develop, devise, film, formulate, integrate, invent, make, model, modify, organize, perform, plan, podcast, produce, program, propose, rearrange, remix, revise, rewrite, structure

Math skills at different ages

Kids start learning math the moment they start exploring the world. Each skill — from identifying shapes to counting to finding patterns — builds on what they already know.

There are certain math milestones most kids hit at roughly the same age. But keep in mind that kids develop math skills at different rates. If kids don’t yet have all the skills listed for their age group, that’s OK.

Here’s how math skills typically develop as kids get older.



History

Hamilton offers a history of the three traditional laws that begins with Plato, proceeds through Aristotle, and ends with the schoolmen of the Middle Ages; in addition, he offers a fourth law (see entry below, under Hamilton):

The principles of Contradiction and Excluded Middle can be traced back to Plato: "The principles of Contradiction and of Excluded Middle can both be traced back to Plato, by whom they were enounced and frequently applied; though it was not till long after, that either of them obtained a distinctive appellation. To take the principle of Contradiction first. This law Plato frequently employs, but the most remarkable passages are found in the Phœdo, in the Sophista, and in the fourth and seventh books of the Republic." [Hamilton LECT. V. LOGIC. 62]

Law of Excluded Middle: "The law of Excluded Middle between two contradictories remounts, as I have said, also to Plato, though the Second Alcibiades, the dialogue in which it is most clearly expressed, must be admitted to be spurious. It is also in the fragments of Pseudo-Archytas, to be found in Stobæus." [Hamilton LECT. V. LOGIC. 65] Hamilton further observes that "It is explicitly and emphatically enounced by Aristotle in many passages both of his Metaphysics (l. iii. (iv.) c.7.) and of his Analytics, both Prior (l. i. c. 2) and Posterior (1. i. c. 4). In the first of these, he says: 'It is impossible that there should exist any medium between contradictory opposites, but it is necessary either to affirm or to deny everything of everything.'" [Hamilton LECT. V. LOGIC. 65]

Law of Identity [Hamilton also calls this "The principle of all logical affirmation and definition"]: "The law of Identity, I stated, was not explicated as a coordinate principle till a comparatively recent period. The earliest author in whom I have found this done, is Antonius Andreas, a scholar of Scotus, who flourished at the end of the thirteenth and beginning of the fourteenth century. The schoolman, in the fourth book of his Commentary of Aristotle's Metaphysics – a commentary which is full of the most ingenious and original views – not only asserts to the law of Identity a coordinate dignity with the law of Contradiction, but, against Aristotle, he maintains that the principle of Identity, and not the principle of Contradiction, is the one absolutely first. The formula in which Andreas expressed it was Ens est ens. Subsequently to this author, the question concerning the relative priority of the two laws of Identity and of Contradiction became one much agitated in the schools; though there were also found some who asserted to the law of Excluded Middle this supreme rank." [From Hamilton LECT. V. LOGIC. 65–66]

Three traditional laws: identity, non-contradiction, excluded middle

The following will state the three traditional "laws" in the words of Bertrand Russell (1912):

The law of identity

Regarding this law, Aristotle wrote:

First then this at least is obviously true, that the word "be" or "not be" has a definite meaning, so that not everything will be "so and not so". Again, if "man" has one meaning, let this be "two-footed animal"; by having one meaning I understand this: if "man" means "X", then if A is a man, "X" will be what "being a man" means for him. (It makes no difference even if one were to say a word has several meanings, if only they are limited in number; for to each definition there might be assigned a different word. For instance, we might say that "man" has not one meaning but several, one of which would have one definition, viz. "two-footed animal", while there might be also several other definitions if only they were limited in number; for a peculiar name might be assigned to each of the definitions. If, however, they were not limited but one were to say that the word has an infinite number of meanings, obviously reasoning would be impossible; for not to have one meaning is to have no meaning, and if words have no meaning our reasoning with one another, and indeed with ourselves, has been annihilated; for it is impossible to think of anything if we do not think of one thing; but if this is possible, one name might be assigned to this thing.)

More than two millennia later, George Boole alluded to the very same principle as did Aristotle when Boole made the following observation with respect to the nature of language and those principles that must naturally inhere within it:

There exist, indeed, certain general principles founded in the very nature of language, by which the use of symbols, which are but the elements of scientific language, is determined. To a certain extent these elements are arbitrary. Their interpretation is purely conventional: we are permitted to employ them in whatever sense we please. But this permission is limited by two indispensable conditions: first, that from the sense once conventionally established we never, in the same process of reasoning, depart; secondly, that the laws by which the process is conducted be founded exclusively upon the above fixed sense or meaning of the symbols employed.

The law of non-contradiction

The law of non-contradiction (alternately the 'law of contradiction' [4] ): 'Nothing can both be and not be.' [2]

In other words: "two or more contradictory statements cannot both be true in the same sense at the same time": ¬(A∧¬A).

In the words of Aristotle, "one cannot say of something that it is and that it is not in the same respect and at the same time". As an illustration of this law, he wrote:

It is impossible, then, that "being a man" should mean precisely not being a man, if "man" not only signifies something about one subject but also has one significance. And it will not be possible to be and not to be the same thing, except in virtue of ambiguity, just as if one whom we call "man", others were to call "not-man"; but the point in question is not this, whether the same thing can at the same time be and not be a man in name, but whether it can be in fact.

The law of excluded middle

The law of excluded middle: 'Everything must either be or not be.' [2]

In accordance with the law of excluded middle or excluded third, for every proposition, either its positive or negative form is true: A∨¬A.

Regarding the law of excluded middle, Aristotle wrote:

But on the other hand there cannot be an intermediate between contradictories, but of one subject we must either affirm or deny any one predicate. This is clear, in the first place, if we define what the true and the false are. To say of what is that it is not, or of what is not that it is, is false, while to say of what is that it is, and of what is not that it is not, is true; so that he who says of anything that it is, or that it is not, will say either what is true or what is false.
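Read in modern propositional terms, the two laws quoted in this and the preceding section are the tautologies ¬(A∧¬A) and A∨¬A. A minimal Python check (an illustration in modern notation, not part of the historical texts) simply enumerates the two classical truth values:

```python
# Enumerate both classical truth values of A and confirm that
# non-contradiction and excluded middle hold in every case.
for A in (True, False):
    assert not (A and not A)  # law of non-contradiction: ¬(A ∧ ¬A)
    assert A or not A         # law of excluded middle:   A ∨ ¬A

print("Both laws hold for A in {True, False}")
```

The check is exhaustive only because classical propositional logic admits exactly the two truth values shown.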

Rationale

As the quotations from Hamilton above indicate, in particular the "law of identity" entry, the rationale for and expression of the "laws of thought" have been fertile ground for philosophic debate since Plato. Today the debate about how we "come to know" the world of things and our thoughts continues; for examples of rationales, see the entries below.

In one of Plato's Socratic dialogues, Socrates described three principles derived from introspection:

First, that nothing can become greater or less, either in number or magnitude, while remaining equal to itself ... Secondly, that without addition or subtraction there is no increase or diminution of anything, but only equality ... Thirdly, that what was not before cannot be afterwards, without becoming and having become.

The law of non-contradiction is found in ancient Indian logic as a meta-rule in the Shrauta Sutras, the grammar of Pāṇini, [6] and the Brahma Sutras attributed to Vyasa. It was later elaborated on by medieval commentators such as Madhvacharya. [7]

John Locke claimed that the principles of identity and contradiction (i.e. the law of identity and the law of non-contradiction) were general ideas and only occurred to people after considerable abstract, philosophical thought. He characterized the principle of identity as "Whatsoever is, is." He stated the principle of contradiction as "It is impossible for the same thing to be and not to be." To Locke, these were not innate or a priori principles. [8]

Gottfried Leibniz formulated two additional principles, either or both of which may sometimes be counted as a law of thought: the principle of sufficient reason and the identity of indiscernibles.

In Leibniz's thought, as well as generally in the approach of rationalism, the latter two principles are regarded as clear and incontestable axioms. They were widely recognized in European thought of the 17th, 18th, and 19th centuries, although they were subject to greater debate in the 19th century. As turned out to be the case with the law of continuity, these two laws involve matters which, in contemporary terms, are subject to much debate and analysis (respectively on determinism and extensionality). Leibniz's principles were particularly influential in German thought. In France, the Port-Royal Logic was less swayed by them. Hegel quarrelled with the identity of indiscernibles in his Science of Logic (1812–1816).

Four laws

"The primary laws of thought, or the conditions of the thinkable, are four: – 1. The law of identity [A is A]. 2. The law of contradiction. 3. The law of exclusion or excluded middle. 4. The law of sufficient reason." (Thomas Hughes, The Ideal Theory of Berkeley and the Real World, Part II, Section XV, Footnote, p. 38)

Arthur Schopenhauer discussed the laws of thought and tried to demonstrate that they are the basis of reason. He listed them in the following way in his On the Fourfold Root of the Principle of Sufficient Reason, §33:

  1. Everything that is, exists.
  2. Nothing can simultaneously be and not be.
  3. Each and every thing either is or is not.
  4. Of everything that is, it can be found why it is.

To show that they are the foundation of reason, he gave the following explanation:

Through a reflection, which I might call a self-examination of the faculty of reason, we know that these judgments are the expression of the conditions of all thought and therefore have these as their ground. Thus by making vain attempts to think in opposition to these laws, the faculty of reason recognizes them as the conditions of the possibility of all thought. We then find that it is just as impossible to think in opposition to them as it is to move our limbs in a direction contrary to their joints. If the subject could know itself, we should know those laws immediately, and not first through experiments on objects, that is, representations (mental images).

Schopenhauer's four laws can be schematically presented in the following manner:

  1. A subject is equal to the sum of its predicates, or a = a.
  2. No predicate can be simultaneously attributed and denied to a subject, or a ≠ ~a.
  3. Of every two contradictorily opposite predicates, one must belong to every subject.
  4. Truth is the reference of a judgment to something outside it as its sufficient reason or ground.

Two laws

Later, in 1844, Schopenhauer claimed that the four laws of thought could be reduced to two. In the ninth chapter of the second volume of The World as Will and Representation, he wrote:

It seems to me that the doctrine of the laws of thought could be simplified if we were to set up only two, the law of excluded middle and that of sufficient reason. The former thus: "Every predicate can be either confirmed or denied of every subject." Here it is already contained in the "either, or" that both cannot occur simultaneously, and consequently just what is expressed by the laws of identity and contradiction. Thus these would be added as corollaries of that principle which really says that every two concept-spheres must be thought either as united or as separated, but never as both at once; and therefore, even although words are joined together which express the latter, these words assert a process of thought which cannot be carried out. The consciousness of this infeasibility is the feeling of contradiction. The second law of thought, the principle of sufficient reason, would affirm that the above attributing or refuting must be determined by something different from the judgment itself, which may be a (pure or empirical) perception, or merely another judgment. This other and different thing is then called the ground or reason of the judgment. So far as a judgement satisfies the first law of thought, it is thinkable; so far as it satisfies the second, it is true, or at least in the case in which the ground of a judgement is only another judgement it is logically or formally true. [9]

The title of George Boole's 1854 treatise on logic, An Investigation of the Laws of Thought, indicates an alternate path. The laws are now incorporated into an algebraic representation of his "laws of the mind", honed over the years into modern Boolean algebra.

Rationale: How the "laws of the mind" are to be distinguished

Boole begins his chapter I "Nature and design of this Work" with a discussion of what characteristic distinguishes, generally, "laws of the mind" from "laws of nature":

"The general laws of Nature are not, for the most part, immediate objects of perception. They are either inductive inferences from a large body of facts, the common truth in which they express, or, in their origin at least, physical hypotheses of a causal nature. ... They are in all cases, and in the strictest sense of the term, probable conclusions, approaching, indeed, ever and ever nearer to certainty, as they receive more and more of the confirmation of experience. ..."

Contrasted with this are what he calls "laws of the mind": Boole asserts these are known in their first instance, without need of repetition:

"On the other hand, the knowledge of the laws of the mind does not require as its basis any extensive collection of observations. The general truth is seen in the particular instance, and it is not confirmed by the repetition of instances. ... we not only see in the particular example the general truth, but we see it also as a certain truth – a truth, our confidence in which will not continue to increase with increasing experience of its practical verification." (Boole 1854:4)

Boole's signs and their laws

Boole begins with the notion of "signs" representing "classes", "operations" and "identity":

"All the operations of Language, as an instrument of reasoning, may be conducted by a system of signs composed of the following elements:
"1st. Literal symbols, as x, y, etc., representing things as subjects of our conceptions;
"2nd. Signs of operation, as +, −, ×, standing for those operations of the mind by which conceptions of things are combined or resolved so as to form new conceptions involving the same elements;
"3rd. The sign of identity, =.
And these symbols of Logic are in their use subject to definite laws, partly agreeing with and partly differing from the laws of the corresponding symbols in the science of Algebra." (Boole 1854:27)

Boole then clarifies what a "literal symbol", e.g. x, y, z, represents: a name applied to a collection of instances, a "class". For example, "bird" represents the entire class of feathered, winged, warm-blooded creatures. For his purposes he extends the notion of class to cover membership of "one", or "nothing", or "the universe", i.e. the totality of all individuals:

"Let us then agree to represent the class of individuals to which a particular name or description is applicable, by a single letter, as z. ... By a class is usually meant a collection of individuals, to each of which a particular name or description may be applied; but in this work the meaning of the term will be extended so as to include the case in which but a single individual exists, answering to the required name or description, as well as the cases denoted by the terms "nothing" and "universe," which as "classes" should be understood to comprise respectively 'no beings,' 'all beings.'" (Boole 1854:28)

He then defines what a string of symbols, e.g. xy, means [modern logical &, conjunction]:

"Let it further be agreed, that by the combination xy shall be represented that class of things to which the names or descriptions represented by x and y are simultaneously applicable. Thus, if x alone stands for "white things," and y for "sheep," let xy stand for 'white sheep.'" (Boole 1854:28)

Given these definitions he now lists his laws with their justification plus examples (derived from Boole):

  • (1) xy = yx [commutative law]
  • (2) xx = x, alternately x² = x [Absolute identity of meaning, Boole's "fundamental law of thought", cf. page 49]

Logical OR: Boole defines the "collecting of parts into a whole or separat[ing] a whole into its parts" (Boole 1854:32). Here the connective "and" is used disjunctively, as is "or"; he presents a commutative law (3) and a distributive law (4) for the notion of "collecting". The notion of separating a part from the whole he symbolizes with the "−" operation; he defines a commutative law (5) and a distributive law (6) for this notion:

  • (3) y + x = x + y [commutative law]
  • (4) z(x + y) = zx + zy [distributive law]
  • (5) x − y = −y + x [commutative law]
  • (6) z(x − y) = zx − zy [distributive law]
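Boole's laws can be sanity-checked by reading his class symbols as the numbers 0 and 1 with ordinary arithmetic, an interpretation he himself develops later in the book. The following Python sketch is my illustration, not Boole's: intermediate sums such as 1 + 1 may leave {0, 1}, which is why Boole restricts "+" to disjoint classes, but the identities nonetheless hold as plain arithmetic:

```python
# Interpret Boole's class symbols as 0/1 and check his laws
# as ordinary arithmetic identities over that domain.
domain = (0, 1)
for x in domain:
    for y in domain:
        assert x * y == y * x          # (1) commutative law for xy
        assert x * x == x              # (2) idempotence: x^2 = x
        assert y + x == x + y          # (3) commutative law for +
        for z in domain:
            assert z * (x + y) == z * x + z * y  # (4) distributive law
            assert z * (x - y) == z * x - z * y  # (6) distributive law

print("Boole's laws hold over {0, 1}")
```

Law (2) is the one that separates this algebra from ordinary algebra, where x² = x holds only for the numbers 0 and 1; Boole exploits exactly that fact below.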

Nothing "0" and Universe "1": He observes that the only two numbers that satisfy xx = x are 0 and 1. He then observes that 0 represents "Nothing" while "1" represents the "Universe" (of discourse).

The logical NOT: Boole defines the contrary (logical NOT) as follows (his Proposition III):

"If x represent any class of objects, then will 1 − x represent the contrary or supplementary class of objects, i.e. the class including all objects which are not comprehended in the class x" (Boole 1854:48). If x = "men" then "1 − x" represents the "universe" less "men", i.e. "not-men".

The notion of a particular as opposed to a universal: to represent the notion of "some men", Boole writes the small letter "v" before the predicate-symbol: "vx" represents "some men".

Exclusive and inclusive OR: Boole does not use these modern names, but he defines them as follows: x(1 − y) + y(1 − x) and x + y(1 − x), respectively; these agree with the formulas derived by means of the modern Boolean algebra. [10]
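Over 0/1 values Boole's two formulas coincide with the modern bitwise operators, which makes the agreement easy to confirm. A short Python comparison (my illustration, not Boole's notation):

```python
# Compare Boole's exclusive- and inclusive-OR formulas with the
# modern operators ^ (XOR) and | (OR) over the domain {0, 1}.
for x in (0, 1):
    for y in (0, 1):
        assert x * (1 - y) + y * (1 - x) == (x ^ y)  # exclusive OR
        assert x + y * (1 - x) == (x | y)            # inclusive OR

print("Boole's formulas agree with ^ and |")
```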

Boole derives the law of contradiction

Armed with his "system" he derives the "principle of [non]contradiction" starting with his law of identity: x² = x. Rewriting this as x − x² = 0 (his axiom 2 permits the subtraction) and factoring out the x gives x(1 − x) = 0. For example, if x = "men" then 1 − x represents NOT-men. So we have an example of the "Law of Contradiction":

"Hence: x(1 − x) will represent the class whose members are at once "men" and "not men," and the equation [x(1 − x) = 0] thus expresses the principle, that a class whose members are at the same time men and not men does not exist. In other words, that it is impossible for the same individual to be at the same time a man and not a man. ... this is identically that "principle of contradiction" which Aristotle has described as the fundamental axiom of all philosophy. ... what has been commonly regarded as the fundamental axiom of metaphysics is but the consequence of a law of thought, mathematical in its form." (with more explanation about how this "dichotomy" comes about, cf. Boole 1854:49ff)
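The arithmetic content of the derivation is easy to replay: the only numbers satisfying the law of identity x² = x are 0 and 1, and for each of them the "contradiction class" x(1 − x) equals 0. A quick Python check (my illustration):

```python
# The idempotent solutions of x*x == x among small integers are
# exactly 0 and 1, and each satisfies Boole's x*(1 - x) == 0.
solutions = [x for x in range(-5, 6) if x * x == x]
assert solutions == [0, 1]
for x in solutions:
    assert x * (1 - x) == 0

print("x(1 - x) = 0 whenever x^2 = x")
```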

Boole defines the notion "domain (universe) of discourse"

This notion is found throughout Boole's "Laws of Thought" e.g. 1854:28, where the symbol "1" (the integer 1) is used to represent "Universe" and "0" to represent "Nothing", and in far more detail later (pages 42ff):

"Now, whatever may be the extent of the field within which all the objects of our discourse are found, that field may properly be termed the universe of discourse. ... Furthermore, this universe of discourse is in the strictest sense the ultimate subject of the discourse."

In his chapter "The Predicate Calculus" Kleene observes that the specification of the "domain" of discourse "is not a trivial assumption, since it is not always clearly satisfied in ordinary discourse ... in mathematics likewise, logic can become pretty slippery when no D [domain] has been specified explicitly or implicitly, or the specification of a D [domain] is too vague" (Kleene 1967:84).

As noted above, Hamilton specifies four laws—the three traditional plus the fourth "Law of Reason and Consequent"—as follows:

"XIII. The Fundamental Laws of Thought, or the conditions of the thinkable, as commonly received, are four: – 1. The Law of Identity; 2. The Law of Contradiction; 3. The Law of Exclusion or of Excluded Middle; and, 4. The Law of Reason and Consequent, or of Sufficient Reason." [11]

Rationale: "Logic is the science of the Laws of Thought as Thought"

Hamilton opines that thought comes in two forms: "necessary" and "contingent" (Hamilton 1860:17). With regard to the "necessary" form he defines its study as "logic": "Logic is the science of the necessary forms of thought" (Hamilton 1860:17). To define "necessary" he asserts that it implies the following four "qualities": [12]

(1) "determined or necessitated by the nature of the thinking subject itself ... it is subjectively, not objectively, determined"; (2) "original and not acquired"; (3) "universal; that is, it cannot be that it necessitates on some occasions, and does not necessitate on others"; (4) "it must be a law; for a law is that which applies to all cases without exception, and from which a deviation is ever, and everywhere, impossible, or, at least, unallowed. ... This last condition, likewise, enables us to give the most explicit enunciation of the object-matter of Logic, in saying that Logic is the science of the Laws of Thought as Thought, or the science of the Formal Laws of Thought, or the science of the Laws of the Form of Thought; for all these are merely various expressions of the same thing."

Hamilton's 4th law: "Infer nothing without ground or reason"

Here is Hamilton's fourth law, from his LECT. V. LOGIC. 60–61:

"I now go on to the fourth law.
"Par. XVII. Law of Sufficient Reason, or of Reason and Consequent:
"XVII. The thinking of an object, as actually characterized by positive or by negative attributes, is not left to the caprice of Understanding – the faculty of thought; but that faculty must be necessitated to this or that determinate act of thinking by a knowledge of something different from, and independent of, the process of thinking itself. This condition of our understanding is expressed by the law, as it is called, of Sufficient Reason (principium Rationis Sufficientis); but it is more properly denominated the law of Reason and Consequent (principium Rationis et Consecutionis). That knowledge by which the mind is necessitated to affirm or posit something else, is called the logical reason, ground, or antecedent; that something else which the mind is necessitated to affirm or posit, is called the logical consequent; and the relation between the reason and consequent, is called the logical connection or consequence. This law is expressed in the formula – Infer nothing without a ground or reason.
Relations between Reason and Consequent: The relations between Reason and Consequent, when comprehended in a pure thought, are the following: 1. When a reason is explicitly or implicitly given, then there must exist a consequent; and, vice versa, when a consequent is given, there must also exist a reason. 2. Where there is no reason there can be no consequent; and, vice versa, where there is no consequent (either implicitly or explicitly) there can be no reason. That is, the concepts of reason and of consequent, as reciprocally relative, involve and suppose each other. [See Schulze, Logik, §19, and Krug, Logik, §20. – ED.]
The logical significance of this law: The logical significance of the law of Reason and Consequent lies in this – that in virtue of it, thought is constituted into a series of acts all indissolubly connected, each necessarily inferring the other. Thus it is that the distinction and opposition of possible, actual and necessary matter, which has been introduced into Logic, is a doctrine wholly extraneous to this science."

In the 19th century, the Aristotelian laws of thought, as well as sometimes the Leibnizian laws of thought, were standard material in logic textbooks, and J. Welton described them in this way:

The Laws of Thought, Regulative Principles of Thought, or Postulates of Knowledge, are those fundamental, necessary, formal and a priori mental laws in agreement with which all valid thought must be carried on. They are a priori, that is, they result directly from the processes of reason exercised upon the facts of the real world. They are formal; for as the necessary laws of all thinking, they cannot, at the same time, ascertain the definite properties of any particular class of things, for it is optional whether we think of that class of things or not. They are necessary, for no one ever does, or can, conceive them reversed, or really violate them, because no one ever accepts a contradiction which presents itself to his mind as such.

The sequel to Bertrand Russell's 1903 "The Principles of Mathematics" became the three-volume work named Principia Mathematica (hereafter PM), written jointly with Alfred North Whitehead. Immediately after he and Whitehead published PM, he wrote his 1912 "The Problems of Philosophy". His "Problems" reflects "the central ideas of Russell's logic". [13]

The Principles of Mathematics (1903)

In his 1903 "Principles" Russell defines Symbolic or Formal Logic (he uses the terms synonymously) as "the study of the various general types of deduction" (Russell 1903:11). He asserts that "Symbolic Logic is essentially concerned with inference in general" (Russell 1903:12) and in a footnote indicates that he does not distinguish between inference and deduction; moreover he considers induction "to be either disguised deduction or a mere method of making plausible guesses" (Russell 1903:11). This opinion would change by 1912, when he deemed his "principle of induction" to be on a par with the various "logical principles" that include the "Laws of Thought".

In his Part I "The Indefinables of Mathematics" Chapter II "Symbolic Logic" Part A "The Propositional Calculus" Russell reduces deduction ("propositional calculus") to 2 "indefinables" and 10 axioms:

"17. We require, then, in the propositional calculus, no indefinables except the two kinds of implication [simple aka "material" [14] and "formal"] – remembering, however, that formal implication is a complex notion, whose analysis remains to be undertaken. As regards our two indefinables, we require certain indemonstrable propositions, which hitherto I have not succeeded in reducing to less than ten" (Russell 1903:15).

From these he claims to be able to derive the law of excluded middle and the law of contradiction but does not exhibit his derivations (Russell 1903:17). Subsequently, he and Whitehead honed these "primitive principles" and axioms into the nine found in PM, and here Russell actually exhibits these two derivations at ❋1.71 and ❋3.24, respectively.

The Problems of Philosophy (1912)

By 1912 Russell in his "Problems" pays close attention to "induction" (inductive reasoning) as well as "deduction" (inference), both of which represent just two examples of "self-evident logical principles" that include the "Laws of Thought." [4]

Induction principle: Russell devotes a chapter to his "induction principle". He describes it as coming in two parts: firstly, as a repeated collection of evidence (with no failures of association known) and therefore increasing probability that whenever A happens B follows; secondly, in a fresh instance when indeed A happens, B will indeed follow: i.e. "a sufficient number of cases of association will make the probability of a fresh association nearly a certainty, and will make it approach certainty without limit." [15]

He then collects all the cases (instances) of the induction principle (e.g. case 1: A1 = "the rising sun", B1 = "the eastern sky"; case 2: A2 = "the setting sun", B2 = "the western sky"; case 3: etc.) into a "general" law of induction which he expresses as follows:

"(a) The greater the number of cases in which a thing of the sort A has been found associated with a thing of the sort B, the more probable it is (if no cases of failure of association are known) that A is always associated with B;
"(b) Under the same circumstances, a sufficient number of cases of the association of A with B will make it nearly certain that A is always associated with B, and will make this general law approach certainty without limit." [16]

He makes an argument that this induction principle can neither be disproved nor proved by experience, [17] the failure of disproof occurring because the law deals with probability of success rather than certainty; the failure of proof occurring because of unexamined cases that are yet to be experienced, i.e. they will occur (or not) in the future. "Thus we must either accept the inductive principle on the ground of its intrinsic evidence, or forgo all justification of our expectations about the future". [18]
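Russell's claim that repeated confirmations "make the probability of a fresh association nearly a certainty, and ... approach certainty without limit" can be illustrated with a toy probabilistic model. The sketch below uses Laplace's rule of succession, which is one standard way to model this; it is an illustration chosen here, not Russell's own formalism.

```python
from fractions import Fraction

def rule_of_succession(successes: int, trials: int) -> Fraction:
    """Laplace's rule of succession: estimated probability that the next
    instance of A will be followed by B, after `successes` confirmations
    in `trials` observed cases. With no known failures of association,
    successes == trials."""
    return Fraction(successes + 1, trials + 2)

# With no failures ever observed, the estimate climbs toward 1
# "without limit" as cases accumulate, yet never reaches certainty.
for n in (1, 10, 100, 10_000):
    print(n, rule_of_succession(n, n))
```

Note that the estimate is strictly less than 1 for every finite n, which matches Russell's point that induction yields probability rather than certainty.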

In his next chapter ("On Our Knowledge of General Principles") Russell offers other principles that have this similar property: "which cannot be proved or disproved by experience, but are used in arguments which start from what is experienced." He asserts that these "have even greater evidence than the principle of induction ... the knowledge of them has the same degree of certainty as the knowledge of the existence of sense-data. They constitute the means of drawing inferences from what is given in sensation". [19]

Inference principle: Russell then offers an example that he calls a "logical" principle. Twice previously he has asserted this principle, first as the 4th axiom in his 1903 [20] and then as the first "primitive proposition" of PM: "❋1.1 Anything implied by a true elementary proposition is true". [21] Now he repeats it in his 1912 in a refined form: "Thus our principle states that if this implies that, and this is true, then that is true. In other words, 'anything implied by a true proposition is true', or 'whatever follows from a true proposition is true'". [22] This principle he places great stress upon, stating that "this principle is really involved – at least, concrete instances of it are involved – in all demonstrations". [4]

He does not call his inference principle modus ponens, but his formal, symbolic expression of it in PM (2nd edition 1927) is that of modus ponens; modern logic calls this a "rule" as opposed to a "law". [23] In the quotation that follows, the symbol "⊦" is the "assertion-sign" (cf PM:92); "⊦" means "it is true that", therefore "⊦p" where "p" is "the sun is rising" means "it is true that the sun is rising", alternately "The statement 'The sun is rising' is true". The "implication" symbol "⊃" is commonly read "if p then q", or "p implies q" (cf PM:7). Embedded in this notion of "implication" are two "primitive ideas", "the Contradictory Function" (symbolized by NOT, "~") and "the Logical Sum or Disjunction" (symbolized by OR, "⋁"); these appear as "primitive propositions" ❋1.7 and ❋1.71 in PM (PM:97). With these two "primitive propositions" Russell defines "p ⊃ q" to have the formal logical equivalence "NOT-p OR q" symbolized by "~p ⋁ q":

"Inference. The process of inference is as follows: a proposition "p" is asserted, and a proposition "p implies q" is asserted, and then as a sequel the proposition "q" is asserted. The trust in inference is the belief that if the two former assertions are not in error, the final assertion is not in error. Accordingly, whenever, in symbols, where p and q have of course special determination " "⊦p" and "⊦(p ⊃ q)" " have occurred, then "⊦q" will occur if it is desired to put it on record. The process of the inference cannot be reduced to symbols. Its sole record is the occurrence of "⊦q". . An inference is the dropping of a true premiss it is the dissolution of an implication". [24]

In other words, in a long "string" of inferences, after each inference we can detach the "consequent" "⊦q" from the symbol string "⊦p, ⊦(p⊃q)" and not carry these symbols forward in an ever-lengthening string of symbols.
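The "detachment" just described can be sketched as a small procedure: repeatedly, whenever a premiss and a matching implication are both on record, put the consequent on record and drop the implication. The representation of implications as tuples below is an illustrative choice, not PM's notation.

```python
def detach(assertions):
    """Repeatedly apply modus ponens: from an asserted p and an asserted
    (p -> q), assert q and dissolve the implication. Implications are
    modeled as ('->', p, q) tuples; everything else is an atomic assertion."""
    facts = {a for a in assertions
             if not (isinstance(a, tuple) and a[0] == '->')}
    rules = [a for a in assertions if isinstance(a, tuple) and a[0] == '->']
    changed = True
    while changed:
        changed = False
        for rule in rules[:]:
            _, p, q = rule
            if p in facts:        # the premiss is on record:
                facts.add(q)      # detach the consequent "⊦q"...
                rules.remove(rule)  # ...and drop the implication
                changed = True
    return facts

# From ⊦p, ⊦(p ⊃ q), ⊦(q ⊃ r), detachment puts q and r on record.
print(detach({'p', ('->', 'p', 'q'), ('->', 'q', 'r')}))
```

The returned set contains only atomic assertions; the ever-lengthening string of implications has been dissolved, which is exactly the point of the rule.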

The three traditional "laws" (principles) of thought: Russell goes on to assert other principles, of which the above logical principle is "only one". He asserts that "some of these must be granted before any argument or proof becomes possible. When some of them have been granted, others can be proved." Of these various "laws" he asserts that "for no very good reason, three of these principles have been singled out by tradition under the name of 'Laws of Thought'." [25] And these he lists as follows:

"(1) The law of identity: 'Whatever is, is.' (2) The law of contradiction: 'Nothing can both be and not be.' (3) The law of excluded middle: 'Everything must either be or not be.'" [25]

Rationale: Russell opines that "the name 'laws of thought' is ... misleading, for what is important is not the fact that we think in accordance with these laws, but the fact that things behave in accordance with them; in other words, the fact that when we think in accordance with them we think truly." [26] But he rates this a "large question" and expands it in two following chapters where he begins with an investigation of the notion of "a priori" (innate, built-in) knowledge, and ultimately arrives at his acceptance of the Platonic "world of universals". In his investigation he comes back now and then to the three traditional laws of thought, singling out the law of contradiction in particular: "The conclusion that the law of contradiction is a law of thought is nevertheless erroneous ... [rather], the law of contradiction is about things, and not merely about thoughts ... a fact concerning the things in the world." [27]

His argument begins with the statement that the three traditional laws of thought are "samples of self-evident principles". For Russell the matter of "self-evident" [28] merely introduces the larger question of how we derive our knowledge of the world. He cites the "historic controversy ... between the two schools called respectively 'empiricists' [Locke, Berkeley, and Hume] and 'rationalists' [Descartes and Leibniz]" (these philosophers are his examples). [29] Russell asserts that the rationalists "maintained that, in addition to what we know by experience, there are certain 'innate ideas' and 'innate principles', which we know independently of experience"; [29] to eliminate the possibility of babies having innate knowledge of the "laws of thought", Russell renames this sort of knowledge a priori. And while Russell agrees with the empiricists that "Nothing can be known to exist except by the help of experience", [30] he also agrees with the rationalists that some knowledge is a priori, specifically "the propositions of logic and pure mathematics, as well as the fundamental propositions of ethics". [31]

This question of how such a priori knowledge can exist directs Russell to an investigation into the philosophy of Immanuel Kant, which after careful consideration he rejects as follows:

". there is one main objection which seems fatal to any attempt to deal with the problem of a priori knowledge by his method. The thing to be accounted for is our certainty that the facts must always conform to logic and arithmetic. . Thus Kant's solution unduly limits the scope of a priori propositions, in addition to failing in the attempt at explaining their certainty". [32]

His objections to Kant then lead Russell to accept the 'theory of ideas' of Plato, "in my opinion ... one of the most successful attempts hitherto made"; [33] he asserts that "... we must examine our knowledge of universals ... where we shall find that [this consideration] solves the problem of a priori knowledge". [33]

Principia Mathematica (Part I: 1910 first edition, 1927 2nd edition)

Unfortunately, Russell's "Problems" does not offer an example of a "minimum set" of principles that would apply to human reasoning, both inductive and deductive. But PM does at least provide an example set (but not the minimum; see Post below) that is sufficient for deductive reasoning by means of the propositional calculus (as opposed to reasoning by means of the more-complicated predicate calculus)—a total of 8 principles at the start of "Part I: Mathematical Logic". Each of the formulas ❋1.2 to ❋1.6 is a tautology (true no matter what the truth-value of p, q, r ... is). What is missing in PM's treatment is a formal rule of substitution; [34] in his 1921 PhD thesis Emil Post fixes this deficiency (see Post below). In what follows the formulas are written in a more modern format than that used in PM; the names are given in PM.

❋1.1 Anything implied by a true elementary proposition is true.
❋1.2 Principle of Tautology: (p ⋁ p) ⊃ p
❋1.3 Principle of [logical] Addition: q ⊃ (p ⋁ q)
❋1.4 Principle of Permutation: (p ⋁ q) ⊃ (q ⋁ p)
❋1.5 Associative Principle: p ⋁ (q ⋁ r) ⊃ q ⋁ (p ⋁ r) [redundant]
❋1.6 Principle of [logical] Summation: (q ⊃ r) ⊃ ((p ⋁ q) ⊃ (p ⋁ r))
❋1.7 [logical NOT]: If p is an elementary proposition, ~p is an elementary proposition.
❋1.71 [logical inclusive OR]: If p and q are elementary propositions, (p ⋁ q) is an elementary proposition.

Russell sums up these principles with "This completes the list of primitive propositions required for the theory of deduction as applied to elementary propositions" (PM:97).
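The claim that ❋1.2 through ❋1.6 are tautologies can be checked mechanically by exhausting all truth-value assignments, which is exactly what a truth table does. A minimal sketch, with Python's `or` standing in for ⋁ and a helper for ⊃:

```python
from itertools import product

def implies(a, b):
    """Material implication: a ⊃ b is false only when a is true and b false."""
    return (not a) or b

# PM's five tautologous primitive propositions, ❋1.2 – ❋1.6.
axioms = {
    "❋1.2 Tautology":  lambda p, q, r: implies(p or p, p),
    "❋1.3 Addition":   lambda p, q, r: implies(q, p or q),
    "❋1.4 Permutation": lambda p, q, r: implies(p or q, q or p),
    "❋1.5 Association": lambda p, q, r: implies(p or (q or r), q or (p or r)),
    "❋1.6 Summation":  lambda p, q, r: implies(implies(q, r),
                                               implies(p or q, p or r)),
}

# A formula is a tautology iff it is true under every assignment.
for name, formula in axioms.items():
    assert all(formula(p, q, r)
               for p, q, r in product([True, False], repeat=3))
    print(name, "is a tautology")
```

This brute-force check over 8 rows is the model-theoretic counterpart of PM's axiomatic derivations.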

Starting from these eight tautologies and a tacit use of the "rule" of substitution, PM then derives over a hundred different formulas, among which are the Law of Excluded Middle ❋2.1 and the Law of Contradiction ❋3.24 (this latter requiring a definition of logical AND, symbolized by the modern ⋀: (p ⋀ q) =def ~(~p ⋁ ~q); PM uses the "dot" symbol for logical AND).

At about the same time (1912) that Russell and Whitehead were finishing the last volume of their Principia Mathematica and Russell was publishing "The Problems of Philosophy", at least two logicians (Louis Couturat, Christine Ladd-Franklin) were asserting that the two "laws" (principles) of "contradiction" and "excluded middle" are necessary to specify "contradictories"; Ladd-Franklin renamed them the principles of exclusion and exhaustion. The following appears as a footnote on page 23 of Couturat 1914:

"As Mrs. LADD·FRANKLlN has truly remarked (BALDWIN, Dictionary of Philosophy and Psychology, article "Laws of Thought"), the principle of contradiction is not sufficient to define contradictories the principle of excluded middle must be added which equally deserves the name of principle of contradiction. This is why Mrs. LADD-FRANKLIN proposes to call them respectively the principle of exclusion and the principle of exhaustion, inasmuch as, according to the first, two contradictory terms are exclusive (the one of the other) and, according to the second, they are exhaustive (of the universe of discourse)."

In other words, the creation of "contradictories" represents a dichotomy, i.e. the "splitting" of a universe of discourse into two classes (collections) that have the following two properties: they are (i) mutually exclusive and (ii) (collectively) exhaustive. [35] In other words, no one thing (drawn from the universe of discourse) can simultaneously be a member of both classes (law of non-contradiction), but [and] every single thing (in the universe of discourse) must be a member of one class or the other (law of excluded middle).
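The two defining properties of a dichotomy can be stated as a pair of set-theoretic checks. A minimal sketch (the even/odd split of a small universe is an illustrative example):

```python
def is_dichotomy(universe, cls1, cls2):
    """True iff cls1 and cls2 split `universe` into contradictories:
    (i) mutually exclusive, (ii) collectively exhaustive."""
    exclusive = cls1.isdisjoint(cls2)        # law of non-contradiction
    exhaustive = (cls1 | cls2) == universe   # law of excluded middle
    return exclusive and exhaustive

universe = set(range(10))
evens = {x for x in universe if x % 2 == 0}
odds = universe - evens                      # the contradictory class

print(is_dichotomy(universe, evens, odds))           # True
print(is_dichotomy(universe, evens, {1, 3}))         # False: not exhaustive
```

Dropping either check breaks the dichotomy, which mirrors Ladd-Franklin's point that exclusion alone (or exhaustion alone) does not suffice to define contradictories.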

As part of his PhD thesis "Introduction to a general theory of elementary propositions" Emil Post proved "the system of elementary propositions of Principia [PM]", i.e. its "propositional calculus" [36] described by PM's first 8 "primitive propositions", to be consistent. The definition of "consistent" is this: that by means of the deductive "system" at hand (its stated axioms, laws, rules) it is impossible to derive (display) both a formula S and its contradictory ~S (i.e. its logical negation) (Nagel and Newman 1958:50). To demonstrate this formally, Post had to add a primitive proposition to the 8 primitive propositions of PM, a "rule" that specified the notion of "substitution" that was missing in the original PM of 1910. [37]

Given PM's tiny set of "primitive propositions" and the proof of their consistency, Post then proves that this system ("propositional calculus" of PM) is complete, meaning every possible truth table can be generated in the "system":

". every truth system has a representation in the system of Principia while every complete system, that is one having all possible truth tables, is equivalent to it. . We thus see that complete systems are equivalent to the system of Principia not only in the truth table development but also postulationally. As other systems are in a sense degenerate forms of complete systems we can conclude that no new logical systems are introduced." [38]

A minimum set of axioms? The matter of their independence

Then there is the matter of "independence" of the axioms. In his commentary before Post 1921, van Heijenoort states that Paul Bernays solved the matter in 1918 (but published in 1926) – the formula ❋1.5 Associative Principle: p ⋁ (q ⋁ r) ⊃ q ⋁ (p ⋁ r) can be proved from the other four. As to which system of "primitive propositions" is the minimum, van Heijenoort states that the matter was "investigated by Zylinski (1925), Post himself (1941), and Wernick (1942)", but van Heijenoort does not answer the question. [39]

Model theory versus proof theory: Post's proof

Kleene (1967:33) observes that "logic" can be "founded" in two ways, first as a "model theory", or second by a formal "proof" or "axiomatic theory"; "the two formulations, that of model theory and that of proof theory, give equivalent results" (Kleene 1967:33). This foundational choice, and their equivalence, also applies to predicate logic (Kleene 1967:318).

In his introduction to Post 1921, van Heijenoort observes that both the "truth-table and the axiomatic approaches are clearly presented". [40] This matter of a proof of consistency both ways (by a model theory, by axiomatic proof theory) comes up in the more-congenial version of Post's consistency proof that can be found in Nagel and Newman 1958 in their chapter V, "An Example of a Successful Absolute Proof of Consistency". In the main body of the text they use a model to achieve their consistency proof (they also state that the system is complete but do not offer a proof) (Nagel & Newman 1958:45–56). But their text promises the reader a proof that is axiomatic rather than relying on a model, and in the Appendix they deliver this proof based on the notion of a division of formulas into two classes K1 and K2 that are mutually exclusive and exhaustive (Nagel & Newman 1958:109–113).

The (restricted) "first-order predicate calculus" is the "system of logic" that adds to the propositional logic (cf Post, above) the notion of "subject-predicate" i.e. the subject x is drawn from a domain (universe) of discourse and the predicate is a logical function f(x): x as subject and f(x) as predicate (Kleene 1967:74). Although Gödel's proof involves the same notion of "completeness" as does the proof of Post, Gödel's proof is far more difficult what follows is a discussion of the axiom set.

Completeness

Kurt Gödel in his 1930 doctoral dissertation "The completeness of the axioms of the functional calculus of logic" proved that in this "calculus" (i.e. restricted predicate logic with or without equality) every valid formula is "either refutable or satisfiable" [41] or, what amounts to the same thing: every valid formula is provable and therefore the logic is complete. Here is Gödel's definition of whether or not the "restricted functional calculus" is "complete":

". whether it actually suffices for the derivation of every logico-mathematical proposition, or where, perhaps, it is conceivable that there are true propositions (which may be provable by means of other principles) that cannot be derived in the system under consideration." [42]

The first-order predicate calculus

This particular predicate calculus is "restricted to the first order". To the propositional calculus it adds two special symbols that symbolize the generalizations "for all" and "there exists (at least one)" that extend over the domain of discourse. The calculus requires only the first notion "for all", but typically includes both: (1) the notion "for all x" or "for every x", variously symbolized in the literature as (x), ∀x, ∏x, etc., and (2) the notion "there exists (at least one x)", variously symbolized as Ex, ∃x.

The restriction is that the generalization "for all" applies only to the variables (objects x, y, z, etc. drawn from the domain of discourse) and not to functions; in other words the calculus will permit ∀xf(x) ("for all creatures x, x is a bird") but not ∀f∀x(f(x)) [but if "equality" is added to the calculus it will permit ∀f:f(x); see below under Tarski]. Example:

Let the predicate "function" f(x) be "x is a mammal", and let the subject-domain (or universe of discourse) (cf Kleene 1967:84) be the category "bats": the formula ∀xf(x) yields the truth value "truth" (read: "For all instances x of objects 'bats', 'x is a mammal'" is a truth, i.e. "All bats are mammals"). But if the instances of x are drawn from a domain "winged creatures", then ∀xf(x) yields the truth value "falsity" (i.e. "For all instances x of 'winged creatures', 'x is a mammal'" has a truth value of "falsity"; "Flying insects are mammals" is false). However, over the broad domain of discourse "all winged creatures" (e.g. "birds" + "flying insects" + "flying squirrels" + "bats") we can assert ∃xf(x) (read: "There exists at least one winged creature that is a mammal"); it yields a truth value of "truth" because the objects x can come from the category "bats" and perhaps "flying squirrels" (depending on how we define "winged"). But the formula yields "falsity" when the domain of discourse is restricted to "flying insects" or "birds" or both "insects" and "birds".
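Over a finite domain of discourse, ∀ and ∃ correspond directly to Python's built-ins `all()` and `any()`. The sketch below replays the bats/winged-creatures example; the membership test standing in for the predicate "x is a mammal" is, of course, a simplification for illustration.

```python
def is_mammal(x):
    """The predicate-function f(x): "x is a mammal"."""
    return x in {"bat", "flying squirrel"}

# Different domains (universes) of discourse:
bats = ["bat"]
winged_creatures = ["bird", "flying insect", "flying squirrel", "bat"]
flying_insects = ["flying insect"]

print(all(is_mammal(x) for x in bats))              # ∀x f(x) over "bats": True
print(all(is_mammal(x) for x in winged_creatures))  # ∀x f(x) over "winged creatures": False
print(any(is_mammal(x) for x in winged_creatures))  # ∃x f(x) over "winged creatures": True
print(any(is_mammal(x) for x in flying_insects))    # ∃x f(x) over "flying insects": False
```

Note how the truth value of the very same formula ∀xf(x) or ∃xf(x) depends entirely on the chosen domain, which is the point of the example in the text.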

Kleene remarks that "the predicate calculus (without or with equality) fully accomplishes (for first order theories) what has been conceived to be the role of logic" (Kleene 1967:322).

A new axiom: Aristotle's dictum – "the maxim of all and none"

The first half of this axiom – "the maxim of all" – will appear as the first of two additional axioms in Gödel's axiom set. The "dictum of Aristotle" (dictum de omni et nullo) is sometimes called "the maxim of all and none", but it is really two "maxims" that assert: "What is true of all (members of the domain) is true of some (members of the domain)", and "What is not true of all (members of the domain) is true of none (of the members of the domain)".

The "dictum" appears in Boole 1854 a couple places:

"It may be a question whether that formula of reasoning, which is called the dictum of Aristotle, de Omni et nullo, expresses a primary law of human reasoning or not but it is no question that it expresses a general truth in Logic" (1854:4)

But later he seems to argue against it: [43]

"[Some principles of] general principle of an axiomatic nature, such as the "dictum of Aristotle:" Whatsoever is affirmed or denied of the genus may in the same sense be affirmed or denied of any species included under that genus. . either state directly, but in an abstract form, the argument which they are supposed to elucidate, and, so stating that argument, affirm its validity or involve in their expression technical terms which, after definition, conduct us again to the same point, viz. the abstract statement of the supposed allowable forms of inference."

But the first half of this "dictum" (dictum de omni) is taken up by Russell and Whitehead in PM, and by Hilbert in his version (1927) of the "first order predicate logic"; his system includes a principle that Hilbert calls "Aristotle's dictum". [44]

This axiom also appears in the modern axiom set offered by Kleene (Kleene 1967:387), as his "∀-schema", one of two axioms (he calls them "postulates") required for the predicate calculus; the other is the "∃-schema" f(y) ⊃ ∃xf(x), which reasons from a particular f(y) to the existence of at least one subject x that satisfies the predicate f(x). Both of these require adherence to a defined domain (universe) of discourse.

Gödel's restricted predicate calculus

To supplement the four (down from five; see the discussion of independence above) axioms of the propositional calculus, Gödel 1930 adds the dictum de omni as the first of two additional axioms. Both this "dictum" and the second axiom, he claims in a footnote, derive from Principia Mathematica. Indeed, PM includes both as:

❋10.1 ⊦ ∀xf(x) ⊃ f(y) ["i.e. what is true in all cases is true in any one case" [45] ("Aristotle's dictum", rewritten in more-modern symbols)]
❋10.2 ⊦ ∀x(p ⋁ f(x)) ⊃ (p ⋁ ∀xf(x)) [rewritten in more-modern symbols]

The latter asserts that the logical sum (i.e. ⋁, OR) of a simple proposition p and a predicate ∀xf(x) implies the logical sum of each separately. But PM derives both of these from six primitive propositions of ❋9, which in the second edition of PM is discarded and replaced with four new "Pp" (primitive principles) of ❋8 (see in particular ❋8.2), and Hilbert derives the first from his "logical ε-axiom" in his 1927 and does not mention the second. How Hilbert and Gödel came to adopt these two as axioms is unclear.
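The validity of ❋10.1 and ❋10.2 can be checked exhaustively over a small finite domain, where a predicate is just an assignment of a truth value to each element. This is a finite-domain illustration only, not a proof for arbitrary domains.

```python
from itertools import product

domain = [0, 1, 2, 3]

def implies(a, b):
    """Material implication a ⊃ b."""
    return (not a) or b

def star_10_1(f):
    """❋10.1: ∀x f(x) ⊃ f(y), checked for every y in the domain."""
    return all(implies(all(f(x) for x in domain), f(y)) for y in domain)

def star_10_2(p, f):
    """❋10.2: ∀x (p ⋁ f(x)) ⊃ (p ⋁ ∀x f(x))."""
    return implies(all(p or f(x) for x in domain),
                   p or all(f(x) for x in domain))

# A predicate over a 4-element domain is just a row of 4 booleans,
# so we can enumerate every possible predicate and every p.
for row in product([True, False], repeat=len(domain)):
    f = lambda x, row=row: row[x]
    assert star_10_1(f)
    for p in (True, False):
        assert star_10_2(p, f)
print("❋10.1 and ❋10.2 hold for every predicate over the finite domain")
```

The exhaustive loop over all 16 predicates and both values of p finds no counterexample, as expected for valid formulas.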

Also required are two more "rules" of detachment ("modus ponens") applicable to predicates.

Alfred Tarski in his 1946 (2nd edition) "Introduction to Logic and to the Methodology of the Deductive Sciences" cites a number of what he deems "universal laws" of the sentential calculus, three "rules" of inference, and one fundamental law of identity (from which he derives four more laws). The traditional "laws of thought" are included in his long listing of "laws" and "rules". His treatment is, as the title of his book suggests, limited to the "Methodology of the Deductive Sciences".

Rationale: In his introduction (2nd edition) he observes that what began with an application of logic to mathematics has been widened to "the whole of human knowledge":

"[I want to present] a clear idea of that powerful trend of contemporary thought which is concentrated about modern logic. This trend arose originally from the somewhat limited task of stabilizing the foundations of mathematics. In its present phase, however, it has much wider aims. For it seeks to create a unified conceptual apparatus which would supply a common basis for the whole of human knowledge.". [46]

Law of identity (Leibniz's law, equality)

To add the notion of "equality" to the "propositional calculus" (this new notion not to be confused with logical equivalence, symbolized by ↔, ⇄, "if and only if (iff)", "biconditional", etc.), Tarski (cf. pp. 54–57) symbolizes what he calls "Leibniz's law" with the symbol "=": if x has every property that y has, and vice versa, we can write "x = y", and this formula will have a truth value of "truth" or "falsity". Tarski states this Leibniz's law as follows:

  • I. Leibniz' Law: x = y, if, and only if, x has every property which y has, and y has every property which x has.

He then derives some other "laws" from this law:

  • II. Law of Reflexivity: Everything is equal to itself: x = x. [Proven at PM ❋13.15]
  • III. Law of Symmetry: If x = y, then y = x. [Proven at PM ❋13.16]
  • IV. Law of Transitivity: If x = y and y = z, then x = z. [Proven at PM ❋13.17]
  • V. If x = z and y = z, then x = y. [Proven at PM ❋13.172]
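Laws II through V all fall out of the Leibniz-style definition of "=". A minimal sketch, modeling an object as the set of properties it satisfies (the sample objects and property names are illustrative):

```python
from itertools import product

# An object is represented by the set of properties it satisfies;
# Leibniz identity x = y holds iff the two property-sets coincide.
objects = {
    "a": frozenset({"red", "round"}),
    "b": frozenset({"red", "round"}),  # indiscernible from "a"
    "c": frozenset({"red"}),
}

def eq(x, y):
    """I. Leibniz's Law, used as the definition of '='."""
    return objects[x] == objects[y]

names = list(objects)
# II. Reflexivity: x = x
assert all(eq(x, x) for x in names)
# III. Symmetry: if x = y then y = x
assert all((not eq(x, y)) or eq(y, x)
           for x, y in product(names, repeat=2))
# IV. Transitivity: if x = y and y = z then x = z
assert all((not (eq(x, y) and eq(y, z))) or eq(x, z)
           for x, y, z in product(names, repeat=3))
# V. If x = z and y = z then x = y
assert all((not (eq(x, z) and eq(y, z))) or eq(x, y)
           for x, y, z in product(names, repeat=3))
print("Laws II-V all hold under the Leibniz-style definition of '='")
```

Because `eq` is defined as coincidence of property-sets, each derived law reduces to the corresponding property of set equality, which is the informal content of Tarski's derivations.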

Principia Mathematica defines the notion of equality as follows (in modern symbols); note that the generalization "for all" extends over predicate-functions f( ):

❋13.01. x = y =def ∀f:(f(x) → f(y)) ("This definition states that x and y are to be called identical when every predicate function satisfied by x is satisfied by y"). [47]

Hilbert 1927:467 adds only two axioms of equality: the first is x = x; the second is (x = y) → (f(x) → f(y)); the "for all f" is missing (or implied). Gödel 1930 defines equality similarly to PM ❋13.01. Kleene 1967 adopts the two from Hilbert 1927 plus two more (Kleene 1967:387).

All of the above "systems of logic" are considered to be "classical", meaning propositions and predicate expressions are two-valued, with either the truth value "truth" or "falsity" but not both (Kleene 1967:8 and 83). While intuitionistic logic falls into the "classical" category, it objects to extending the "for all" operator to the Law of Excluded Middle; it allows instances of the "Law", but not its generalization to an infinite domain of discourse.

Intuitionistic logic

'Intuitionistic logic', sometimes more generally called constructive logic, is a paracomplete symbolic logic that differs from classical logic by replacing the traditional concept of truth with the concept of constructive provability.

The generalized law of the excluded middle is not part of the execution of intuitionistic logic, but neither is it negated. Intuitionistic logic merely forbids the use of the operation as part of what it defines as a "constructive proof", which is not the same as demonstrating it invalid (this is comparable to a building style in which screws are forbidden and only nails are allowed; it does not necessarily disprove or even question the existence or usefulness of screws, but merely demonstrates what can be built without them).

Paraconsistent logic

'Paraconsistent logic' refers to so-called contradiction-tolerant logical systems in which a contradiction does not necessarily result in trivialism. In other words, the principle of explosion is not valid in such logics. Some (namely the dialetheists) argue that the law of non-contradiction is denied by dialetheic logic. They are motivated by certain paradoxes which seem to imply a limit of the law of non-contradiction, namely the liar paradox. In order to avoid a trivial logical system and still allow certain contradictions to be true, dialetheists will employ a paraconsistent logic of some kind.

Three-valued logic

TBD; cf. the article Three-valued logic and "A Ternary Arithmetic and Logic". [48]

Modal propositional calculi

(cf Kleene 1967:49): These "calculi" include the symbols ⎕A, meaning "A is necessary" and ◊A meaning "A is possible". Kleene states that:

"These notions enter in domains of thinking where there are understood to be two different kinds of "truth", one more universal or compelling than the other . A zoologist might declare that it is impossible that salamanders or any other living creatures can survive fire but possible (though untrue) that unicorns exist, and possible (though improbable) that abominable snowmen exist."

Fuzzy logic

'Fuzzy logic' is a form of many-valued logic; it deals with reasoning that is approximate rather than fixed and exact.




Foundations of mathematics

Mathematics is the science of quantity. Traditionally there were two branches of mathematics, arithmetic and geometry, dealing with two kinds of quantities: numbers and shapes. Modern mathematics is richer and deals with a wider variety of objects, but arithmetic and geometry are still of central importance.

Foundations of mathematics is the study of the most basic concepts and logical structure of mathematics, with an eye to the unity of human knowledge. Among the most basic mathematical concepts are: number, shape, set, function, algorithm, mathematical axiom, mathematical definition, mathematical proof.

The reader may reasonably ask why mathematics appears at all in this volume. Isn't mathematics too narrow a subject? Isn't the philosophy of mathematics of rather specialized interest, all the more so in comparison to the broad humanistic issues of philosophy proper, issues such as the good, the true, and the beautiful?

There are three reasons for discussing mathematics in a volume on general philosophy:

  1. Mathematics has always played a special role in scientific thought. The abstract nature of mathematical objects presents philosophical challenges that are unusual and unique.
  2. Foundations of mathematics is a subject that has always exhibited an unusually high level of technical sophistication. For this reason, many thinkers have conjectured that foundations of mathematics can serve as a model or pattern for foundations of other sciences.
  3. The philosophy of mathematics has served as a highly articulated test-bed where mathematicians and philosophers alike can explore how various general philosophical doctrines play out in a specific scientific context.

The purpose of this section is to indicate the role of logic in the foundations of mathematics. We begin with a few remarks on the geometry of Euclid. We then describe some modern formal theories for mathematics.

The geometry of Euclid

Above the gateway to Plato's academy appeared a famous inscription: "Let no one who is ignorant of geometry enter here."

In the Posterior Analytics [13], Aristotle laid down the basics of the scientific method. The essence of the method is to organize a field of knowledge logically by means of primitive concepts, axioms, postulates, definitions, and theorems. The majority of Aristotle's examples of this method are drawn from arithmetic and geometry [1,7,9].

The methodological ideas of Aristotle decisively influenced the structure and organization of Euclid's monumental treatise on geometry, the Elements [8]. Euclid begins with 21 definitions, five postulates, and five common notions. After that, the rest of the Elements are an elaborate deductive structure consisting of hundreds of propositions. Each proposition is justified by its own demonstration. The demonstrations are in the form of chains of syllogisms. In each syllogism, the premises are identified as coming from among the definitions, postulates, common notions, and previously demonstrated propositions. For example, in Book I of the Elements, the demonstration of Proposition 16 (``in any triangle, if one of the sides be produced, the exterior angle is greater than either of the interior and opposite angles'') is a chain of syllogisms with Postulate 2, Common Notion 5, and Propositions 3, 4 and 15 (``if two straight lines cut one another, they make the vertical angles equal to one another'') occurring as premises. It is true that the syllogisms of Euclid do not always conform strictly to Aristotelean templates. However, the standards of rigor are very high, and Aristotle's influence is readily apparent.

The logic of Aristotle and the geometry of Euclid are universally recognized as towering scientific achievements of ancient Greece.

Formal theories for mathematics

A formal theory for geometry

With the advent of calculus in the 17th and 18th centuries, mathematics developed very rapidly and with little attention to logical foundations. Euclid's geometry was still regarded as a model of logical rigor, a shining example of what a well-organized scientific discipline ideally ought to look like. But the prolific Enlightenment mathematicians such as Leonhard Euler showed almost no interest in trying to place calculus on a similarly firm foundation. Only in the last half of the 19th century did scientists begin to deal with this foundational problem in earnest. The resulting crisis had far-reaching consequences. Even Euclid's geometry itself came under critical scrutiny. Geometers such as Moritz Pasch discovered what they regarded as gaps or inaccuracies in the Elements. Great mathematicians such as David Hilbert entered the fray.

An outcome of all this foundational activity was a thorough reworking of geometry, this time as a collection of formal theories within the predicate calculus. Decisive insights were obtained by Alfred Tarski. We shall sketch Tarski's formal theory for Euclidean plane geometry.

As his primitive predicates, Tarski takes P (``point''), B (``between''), D (``distance''), and I (``identity''). The atomic formulas Pa, Babc, Dabcd, and Iab mean ``a is a point'', ``b lies between a and c'', ``the distance from a to b is equal to the distance from c to d'', and ``a is identical to b'', respectively. Geometrical objects other than points, such as line segments, angles, triangles, circles, etc., are handled by means of the primitives. For example, the circle with center c and radius ab consists of all points x such that Dabcx holds.

In geometry, two points a and b are considered identical if the distance between them is zero. Tarski expresses this by means of an axiom: for all a, b, and c, if Dabcc holds (the distance from a to b equals the distance from c to c, which is zero), then Iab holds.

Altogether Tarski presents twelve axioms, plus an additional collection of axioms expressing the idea that a line is continuous. The full statement of Tarski's axioms for Euclidean plane geometry is given at [10, pages 19-20]. Let G be the formal theory based on Tarski's axioms.

Remarkably, Tarski has demonstrated that G is complete. This means that, for any purely geometrical statement S, either S or the negation of S is a theorem of G. Thus we see that the axioms of G suffice to answer all yes/no questions of Euclidean plane geometry. Combining this with the completeness theorem of Gödel, we find that G is decidable: there is an algorithm which accepts as input an arbitrary statement of plane Euclidean geometry, and outputs ``true'' if the statement is true, and ``false'' if it is false. This is a triumph of modern foundational research.

A formal theory for arithmetic

By arithmetic we mean elementary school arithmetic, i.e., the study of the positive whole numbers 1, 2, 3, ... along with the familiar operations of addition (+) and multiplication (×). This part of mathematics is obviously fundamental, yet it turns out to be surprisingly complicated. Below we write down some of the axioms which go into a formal theory of arithmetic.

Our primitive predicates for arithmetic are N (``number''), A (``addition''), M (``multiplication''), and I (``identity''). The atomic formulas Na, Aabc, Mabc, and Iab mean ``a is a number'', ``a + b = c'', ``a × b = c'', and ``a is identical to b'', respectively. Our axioms will use these predicates to assert that for any given numbers a and b, the numbers a + b and a × b always exist and are unique. We shall also have axioms expressing some well known arithmetical laws:


substitution laws: if a is identical to b and a is a number, then b is a number, etc.
commutative laws: a + b = b + a and a × b = b × a.
associative laws: (a + b) + c = a + (b + c) and (a × b) × c = a × (b × c).
distributive law: a × (b + c) = (a × b) + (a × c).
comparison law: a is distinct from b if and only if, for some c, a + c = b or b + c = a.
unit law: a × 1 = a.
Let T be the formal theory specified by the above primitives and axioms.
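These laws can be spot-checked in code. The sketch below is an illustration only, not the formal theory itself: it models the predicates as Python functions over a small range, using the letters N, A, M, and I for number, addition, multiplication, and identity.

```python
# Illustration only: model the arithmetic primitives as predicates
# (relations) over a small range of positive whole numbers.
NUMBERS = range(1, 50)

def N(a):       return a in NUMBERS   # "a is a number"
def A(a, b, c): return a + b == c     # "a + b = c"
def M(a, b, c): return a * b == c     # "a * b = c"
def I(a, b):    return a == b         # "a is identical to b"

for a in range(1, 6):
    assert M(a, 1, a)                             # unit law: a * 1 = a
    for b in range(1, 6):
        assert A(a, b, a + b) and A(b, a, a + b)  # commutative law for +
        assert M(a, b, a * b) and M(b, a, a * b)  # commutative law for *
        for c in range(1, 6):
            # distributive law: a * (b + c) = (a * b) + (a * c)
            assert M(a, b + c, a * b + a * c)
print("laws hold on the sample range")
```

Of course a finite check like this proves nothing about all numbers; that is exactly what the axioms and formal proofs are for.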

It is known that this formal theory suffices to derive many familiar arithmetical facts. For example, the fact that 2 + 2 = 4 may be expressed, awkwardly to be sure, as a formula built up from the primitive predicates alone.

On the other hand, the axioms above are by no means exhaustive. They can be supplemented with other axioms expressing the so-called mathematical induction or least number principle: if there exists a number having some well-defined property, then among all numbers having the property there is a smallest one. The resulting formal theory is remarkably powerful, in the sense that its theorems include virtually all known arithmetical facts. But it is not so powerful as one might wish. Indeed, any formal theory which includes this one is necessarily either inconsistent or incomplete. Thus there is no hope of writing down enough axioms or developing an algorithm to decide all arithmetical facts. This is a variant of the famous 1931 incompleteness theorem of Gödel [5,22]. There are several methods of coping with the incompleteness phenomenon, and this constitutes a currently active area of research in foundations of mathematics.

The contrast between the completeness of formal geometry and the incompleteness of formal arithmetic is striking. Both sides of this dichotomy are of evident philosophical interest.

A formal theory of sets

One of the aims of modern logical research is to devise a single formal theory which will unify all of mathematics. Such a theory will necessarily be subject to the Gödel incompleteness phenomenon, because it will incorporate not only geometry but also arithmetic.

One approach to a unified mathematics is to straightforwardly embed arithmetic into geometry, by identifying whole numbers with evenly spaced points on a line. This idea was familiar to the ancient Greeks. Another approach is to explain geometry in terms of arithmetic and algebra, by means of coordinate systems, like latitude and longitude on a map. This idea goes back to the 17th century mathematician and philosopher René Descartes and the 19th century mathematician Karl Weierstrass. Both approaches give rise to essentially the same formal theory, known as second-order arithmetic. This theory includes both arithmetic and geometry and is adequate for the bulk of modern mathematics. Thus the decision about whether to make geometry more fundamental than arithmetic or vice versa seems to be mostly a matter of taste.

A very different approach to a unified mathematics is via set theory . This is a peculiarly 20th century approach. It is based on one very simple-looking concept: sets. Remarkably, this one concept leads directly to a vast structure which encompasses all of modern mathematics.

A set is a collection of objects called the elements of the set. We sometimes use informal notations such as x = {a, b, c} to indicate that x is a set consisting of elements a, b, c. The number of elements in a set can be arbitrarily large or even infinite. A basic principle of set theory is that a set is determined by its elements. Thus two sets are identical if and only if they have the same elements. This principle is known as extensionality. For example, the set {1, 2, 3} is considered to be the same set as {2, 3, 1} because the elements are the same, even though written in a different order.

Much of the complexity of set theory arises from the fact that sets may be elements of other sets. For instance, the set {1, 2} is an element of the set {{1, 2}, 3}, and this latter set is distinct from the set {1, 2, 3}.
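Python's built-in sets behave extensionally, so both points can be illustrated directly; the particular sets below are chosen only for illustration (frozenset is needed because ordinary Python sets cannot contain other mutable sets):

```python
# Extensionality: a set is determined by its elements, so the order of
# listing does not matter.
assert {1, 2, 3} == {2, 3, 1}

# Sets may be elements of other sets.  Python requires the inner set
# to be a frozenset so that it is hashable.
inner = frozenset({1, 2})
outer = {inner, 3}            # the set {{1, 2}, 3}
assert inner in outer         # {1, 2} is an element of {{1, 2}, 3}
assert outer != {1, 2, 3}     # ...but {{1, 2}, 3} is not {1, 2, 3}
print("set examples check out")
```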

For a formal theory of sets, we use three primitives: S (``set''), I (``identity''), and E (``element''). The atomic formulas Sa, Iab, and Eab mean ``a is a set'', ``a is identical to b'', and ``a is an element of b'', respectively. One of the ground rules of set theory is that only sets may have elements. This is expressed as an axiom: for all a and b, if Eab then Sb. In addition there is an axiom of extensionality: if two sets have the same elements, then they are identical.

The set theory approach to arithmetic is in terms of the non-negative whole numbers 0, 1, 2, 3, .... These numbers are identified with specific sets. Namely, we identify 0 with the empty set {}, 1 with {0}, 2 with {0, 1}, and 3 with {0, 1, 2}, etc. In general, we identify the number n with the set {0, 1, ..., n−1} of smaller numbers. Among the axioms of set theory is an axiom of infinity asserting the existence of the infinite set N = {0, 1, 2, ...}. One can use the set N to show that set theory includes a theory equivalent to the arithmetic discussed above. After that, one can follow the ideas of Descartes and Weierstrass to see that set theory also includes a theory equivalent to geometry. It turns out that the rest of modern mathematics can also be emulated within set theory. This includes an elaborate theory of infinite sets which are much larger than N.
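The identification of each number with the set of smaller numbers can be sketched in a few lines of Python, again using frozenset so that sets may contain sets:

```python
# Sketch: each number n is represented by the set {0, 1, ..., n-1}.

def numeral(n):
    """Return the set-theoretic representation of n."""
    out = frozenset()          # 0 is the empty set
    for _ in range(n):
        out = out | {out}      # successor: the set n+1 is n together with {n}
    return out

zero, one, two, three = (numeral(k) for k in range(4))
assert zero == frozenset()                   # 0 = {}
assert one == frozenset({zero})              # 1 = {0}
assert two == frozenset({zero, one})         # 2 = {0, 1}
assert three == frozenset({zero, one, two})  # 3 = {0, 1, 2}
print("numbers built from the empty set alone")
```

Notice that each representation has exactly n elements, so ordinary counting is recovered from pure sets.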

The set-theoretical approach to arithmetic and geometry is admittedly somewhat artificial. However, the idea of basing all of mathematics on one simple concept, sets, has exerted a powerful attraction. The implications of this idea are not yet fully understood and are a topic of current research.


Shapes and inductive reasoning

Look carefully at the following figures. Then, use inductive reasoning to make a conjecture about the next figure in the pattern.

Look at the pattern below. Can you draw the next figure or next set of dots using inductive reasoning?

The trick is to see that one dot is always placed between and above two dots. Also, the next figure always has one more dot at the very bottom row.

Keeping this in mind, your next figure should look like this:
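Assuming the figures are triangles of dots with rows of 1, 2, 3, ... dots (as the rule above suggests), a short sketch can print such a figure and count its dots; the counts are the triangular numbers:

```python
# Assumption: each figure is a triangle of dots whose bottom row grows
# by one dot each time, so figure n has rows of 1, 2, ..., n dots.

def draw_triangle(rows):
    """Print a centered triangle of dots with the given number of rows."""
    for r in range(1, rows + 1):
        print(" " * (rows - r) + ". " * r)

def dot_count(rows):
    """Total dots in the figure: the rows-th triangular number."""
    return rows * (rows + 1) // 2

assert [dot_count(n) for n in range(1, 6)] == [1, 3, 6, 10, 15]
draw_triangle(4)   # the conjectured next figure, if three figures came before
```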


Only a small number of kids who learn and think differently receive accommodations or specialized instruction.

1 in 16 public school students have IEPs for LD or for other health impairments (OHI). (These are two of the 13 disability categories covered under special education law. LD covers kids with dyslexia, dyscalculia, and other learning differences. When kids qualify for special education because of ADHD, they’re classified under OHI.)

1 in 42 public school students have 504 plans. The percentage of kids with 504 plans has more than doubled in the past decade. Like IEPs, these plans provide accommodations for kids with disabilities. But unlike IEPs, they don’t provide specialized instruction. And schools don’t have to classify kids with 504 plans by disability type.

These groups combined don’t come anywhere close to 1 in 5. This means millions of kids who learn and think differently aren’t being identified by schools as needing support.
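The arithmetic behind that claim is easy to check: even assuming the two groups do not overlap at all, their combined share falls well below 1 in 5.

```python
# Even if no student had both an IEP and a 504 plan, the combined share
# (1/16 + 1/42) is still far below 1 in 5.
from fractions import Fraction

iep = Fraction(1, 16)         # students with IEPs for LD or OHI
plan504 = Fraction(1, 42)     # students with 504 plans
combined = iep + plan504      # = 29/336, about 0.086

assert combined < Fraction(1, 5)
print(f"combined share is about {float(combined):.3f}, versus 0.200")
```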


Unwarranted Generalizations

A formula or notation may work properly in one context, but some students try to apply it in a wider context, where it may not work properly at all. Robin Chapman also calls this type of error "crass formalism." Here is one example that he has mentioned:

Every positive number has two square roots: one positive, the other negative. The notation √b generally is only used when b is a nonnegative real number; it means "the nonnegative square root of b," and not just "the square root of b." The notation √b probably should not be used at all in the context of complex numbers. Every nonzero complex number b has two square roots, but in general there is no natural way to say which one should be associated with the expression √b. The formula √(ab) = √a √b is correct when a and b are positive real numbers, but it leads to errors when generalized indiscriminately to other kinds of numbers. Beginners in the use of complex numbers are prone to errors such as −1 = √(−1) √(−1) = √((−1)(−1)) = √1 = 1. In fact, the great mathematician Leonhard Euler published a computation similar to this in a book in 1770, when the theory of complex numbers was still young.
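The failure of the rule for negative numbers is easy to reproduce with Python's cmath module, whose sqrt function always returns the principal square root; with a = b = −1 the two sides of the "rule" disagree:

```python
# With a = b = -1, the rule sqrt(a*b) = sqrt(a)*sqrt(b) breaks down.
import cmath

a = b = -1
lhs = cmath.sqrt(a * b)               # sqrt(1)  -> 1
rhs = cmath.sqrt(a) * cmath.sqrt(b)   # 1j * 1j  -> -1
assert lhs == 1 and rhs == -1         # the two sides disagree
print(lhs, rhs)
```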

Here is another example, from my own teaching experience: What is the derivative of x^x? If you ask this during the first week of calculus, a correct answer is "we haven't covered that yet." But many students will very confidently tell you that the answer is x·x^(x−1). Some of them may even simplify that expression -- it reduces to x^x -- and a few students will even remark: "Say, that's interesting -- x^x is its own derivative!" Of course, all these students are wrong. The correct answer, covered after about a semester of calculus, is (x^x)′ = x^x (1 + ln x).
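A numerical sketch makes the contrast concrete: a central-difference approximation of the derivative of x^x agrees with x^x (1 + ln x), and not with the naive power-rule answer x·x^(x−1), except at x = 1 where the two formulas happen to coincide:

```python
# Compare the correct derivative of x**x with the naive power-rule
# answer, using a central-difference approximation as the referee.
import math

def numeric_derivative(f, x, h=1e-6):
    return (f(x + h) - f(x - h)) / (2 * h)

for x in (0.5, 1.0, 2.0, 3.0):
    correct = x**x * (1 + math.log(x))   # x^x (1 + ln x)
    naive = x * x**(x - 1)               # reduces to x^x: the wrong answer
    approx = numeric_derivative(lambda t: t**t, x)
    assert abs(approx - correct) < 1e-4
    # the naive answer agrees only where 1 + ln x == 1, i.e. at x = 1
    assert x == 1.0 or abs(approx - naive) > 1e-3
print("x**x is not its own derivative (except by accident at x = 1)")
```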

The difficulty is that, in high school or shortly after they arrive at college, the students have learned that

(x^k)′ = kx^(k−1).

That formula is actually WRONG, but in a very subtle way. The correct formula is

(x^k)′ = kx^(k−1) (for all x where the right side is defined), if k is any constant.

The equation is unchanged, but it's now accompanied by some words telling us when the equation is applicable. I've thrown in the parenthetical "for all x where the right side is defined," in order to avoid discussing the complications that arise when x is zero or negative. But the part that I really want to discuss here is the other part -- i.e., the phrase "if k is any constant."

To most teachers, that additional phrase doesn't seem important, because in the teacher's mind "x" usually means a variable and "k" usually means a constant. The letters x and k are used in different ways here, a little like the difference between bound and free variables in logic: Fix any constant k; then the equation states a relationship between two functions of the variable x. So the language suggests to us that x is probably not supposed to equal k.

But the math teacher is already fluent in this language, whereas mathematics is a foreign language to most students. To most students, the distinction between the two boxed formulas is one which doesn't seem important at first, because the only examples shown to the student at first are those in which k actually is a constant. Why bother to mention that k must be a constant, when there are no other conceivable meanings for k? So the student memorizes the first (incorrect) formula, rather than the second (correct) formula.

Every mathematical formula should be accompanied by a few words of English (or your natural language, whatever it is). The words in English tell when the formula can or can't be applied. But frequently we neglect the words, because they seem to be clear from the context. When the context changes, the words that we've omitted may become crucial.

Students have difficulty with this. Here is an experiment that I have tried a few times: At the beginning of the semester, I tell the students that the correct answer to the derivative-of-x^x problem is not x·x^(x−1) but rather x^x (1 + ln x), and I tell them that this problem will be on their final exam at the end of the semester. I repeat these statements once or twice during the semester, and I repeat them again at the very end of the semester, just before classes end. Nevertheless, a large percentage (sometimes a third) of my students still get the problem wrong on the final exam! Their original, incorrect learning persists despite my efforts.

I have a couple of theories about why this happens: (i) For most students, mathematics is a foreign language, and the student focuses his or her attention on the part which seems most foreign -- i.e., the formulas. The words have the appearance of something familiar ("oh, that's just English, and I already know English"), and so the student doesn't pay a lot of attention to the words. (ii) Undergraduate students tend to focus on mechanical computations; they are not yet mathematically mature enough to be able to think easily about theoretical and abstract ideas.

A sort of footnote: Here is a common error among readers of this web page. Several people have written to me to ask, shouldn't that formula say "if k is any constant except 0", or "if k is any constant except −1", or something like that? They think some special note needs to be made about the logarithm case. Actually, my formula is correct as it stands -- i.e., for every constant real number k -- but if you want to tell the whole story, you'd have to append some additional formula(s). When k = 0, my formula just says the derivative of 1 is 0; that's true but not very enlightening. My formula doesn't mention, but also doesn't contradict, the fact that the derivative of ln x is x^(−1). You can always say more about any subject, but I just wanted to contrast the formulas (x^k)′ = kx^(k−1) and (x^x)′ = x^x (1 + ln x) as simply as possible. And of course, for simplicity's sake, I haven't mentioned the complications you run into when x is zero or a negative number; I'm only considering those values of x for which x^k and x^x are easy to define.


Algebraic Thinking

Number tricks are fun for children. The fun, all by itself, is valuable, but is not mathematics. But understanding how the trick works is good mathematical, often algebraic, learning. And understanding a trick well enough lets children make up their own tricks.

“Think-of-a-number” tricks

These tricks come in two types:

  1. Think of a number (but don’t tell me), do some arithmetic with that number, and I can predict your result.
  2. Think of a number, do some arithmetic, tell me your result, and I can instantly say what number you started with.

Fourth graders love these tricks! Most of them (and even some younger children) are also ready to understand how they work and even to learn to make up their own tricks! Without the high-school notation, what they are learning is the beginnings of algebra!

The trick

An example of predicting the answer:

  • Think of a number.
  • Add 3.
  • Double that.
  • Subtract 4.
  • Cut that in half.
  • Subtract your original number.
  • Your result is 1!

How it works

I say Think of a number. I don’t know what number you are thinking of, so I just imagine a bag with that number of marbles or candies in it.

The bag is closed, and tied, so I can’t see in, but it doesn’t matter. Your number is in there.

I know that if I tell you to add 3, I can picture that bag and three extras, this way:

When I tell you to double that, I double the quantities in my picture like this:

Then, when I say subtract 4, I mentally erase 4 of the extras:

From the picture, itself, I can be sure that you can cut that in half, and I picture this:

The last instruction, subtract your original number, gets rid of the bag!

That’s why I can predict your result without knowing what number you thought of. Your answer must be 1.

We can summarize this in a table.

Words for each step ——— Pictures of the results
Think of a number.
Add 3.
Double that.
Subtract 4.
Cut that in half.
Subtract your original number.
Now it is easy to see… The result is 1!
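The table can also be replayed in code: the bag picture is just a linear expression, with the number of bags as the coefficient of the unknown number and the loose marbles as the constant term. Here is a small sketch (the class and method names are mine, chosen to match the steps in the text):

```python
# Bag-and-marble bookkeeping as a linear expression: `bags` counts bags
# (copies of the secret number) and `marbles` counts loose marbles.
from fractions import Fraction

class Trick:
    def __init__(self):                  # "Think of a number."
        self.bags, self.marbles = Fraction(1), Fraction(0)
    def add(self, n):
        self.marbles += n; return self
    def subtract(self, n):
        self.marbles -= n; return self
    def double(self):
        self.bags *= 2; self.marbles *= 2; return self
    def halve(self):
        self.bags /= 2; self.marbles /= 2; return self
    def subtract_original(self):         # take away one whole bag
        self.bags -= 1; return self

t = Trick().add(3).double().subtract(4).halve().subtract_original()
assert t.bags == 0      # no bags left, so the secret number cancels out
assert t.marbles == 1   # the result is always 1, whatever you thought of
```

Running any other trick through the same bookkeeping shows whether its answer depends on the secret number (some bags remain) or is fixed in advance (no bags remain).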

Inventing your own tricks

You can make up tricks on your own as long as your rules[1] allow you to:

  • draw the pictures,
  • use whole bags and whole marbles (no fair cutting bags in half!), and
  • subtract only the marbles you can see. (No fair taking marbles out of the bag. The bag might have been empty!)

Drew, age 9, kept asking for new tricks, and then started inventing tricks of his own. Here are two to practice on. Draw the pictures yourself, to figure out what the “magician” should predict the result will be. Then make up your own tricks.

Trick 1: words for each step——— Trick 1: Pictures
Think of a number.
Double it.
Add 10.
Divide by 2.
Subtract your original number.
Triple the result.
Aha! Your result is…

Trick 2: words for each step——— Trick 2: Pictures
Think of a number.
Add two.
Multiply by 3.
Add 2.
Subtract your original number.
Divide by 2.
Subtract your original number again.
Aha! Your result is…

Drew’s inventions

Here are three tricks Drew invented. Draw the pictures to figure out how he can easily figure out your starting number!

Drew’s trick: words for each step——— Trick 1: Pictures
Think of a number.
Add 20.
Quadruple that!
Divide by 2.
Subtract your original number.
Tell me your result…
Aha! You started with…

Drew’s second trick: words for each step——— Trick 2: Pictures
Think of a number.
Add 2.
Double that!
Subtract 4.
Subtract your original number.
Aha! Your answer is your original number!

  1. It is certainly possible to make up tricks without the restrictions given here, but they are not suitable for most students in elementary school. The algebra is not harder, but the pictures and arithmetic can be harder.

Making the Most of Constructive Play

There are so many ways to play! Here are thirteen simple ways to invite (or extend) construction play experiences in your home, childcare centre or classroom.

1. Add a range of open ended loose materials to your child’s block play (or to other construction sets) – pieces of vinyl, pieces of fabric, balls of wool, small tiles, shells, bottle tops, lengths of ribbon, planks of wood, stones.

2. Add lengths of PVC pipes, clean tin cans, and measuring tapes to your block play area.

3. Add a variety of figurines and vehicles to your construction area, to be used with construction sets.

4. Look for interesting block shapes to stretch the child’s constructing abilities.

5. Add clipboards, paper and pencils to your construction area so that children can draw their building ideas.

6. Buy a bag of wood off cuts and some strong glue as an introduction to woodworking. Over time, add a small handsaw, nails and small hammer.

7. Build cubbies or blanket forts from sheets, chairs, milk crates, large boxes, paint, hay bales, tyres, lengths of bamboo or dowel.

8. Add lengths of plastic pipe and guttering to your sandbox or mud pit.

9. Add creative materials when constructing with boxes and other recycled materials – popsticks, buttons, googly eyes, string, sequins, felt tipped pens, tape, stapler, cotton wool, paint.

10. Add open ended materials to clay or playdough – matchsticks, popsticks, patty pans, lengths of curling ribbon, googly eyes, buttons, sequins.

11. Set preschool aged children building challenges which require them to work together to develop co-operative and language skills.

12. Teach preschoolers and primary school aged children to weave, finger knit or sew.

13. Choose engaging construction challenges for school aged kids.

What constructive play experiences do your children enjoy? Is there a new way of playing constructively you would like to try?


Christie Burnett is a teacher, presenter, writer and the mother of two. She created Childhood 101 as a place for teachers and parents to access engaging, high quality learning ideas.