Mathematical arguments can be very persuasive. They lead inexorably toward their conclusion; barring any mistakes in the argument, to argue is to argue with the foundations of logic itself. That is why it is particularly disconcerting when a mathematical argument leads you down an unexpected path and leaves you face to face with a bewildering conclusion. Naturally you run back and retrace the path, looking, often in vain, for the wrong turn where things went off track. People often don’t deal well with challenges to their world-view. When a winding mountain path leads around a corner to present a view of a new and strange landscape, you realise that the world may be much larger, and much stranger, than you had ever imagined. When faced with such a realisation, some people flee in horror and pretend that such a place doesn’t exist; the true challenge is to accept it, and try to understand the vast new world. It is time for us to round a corner and glimpse new and strange landscapes; I invite you to follow me down, in the coming entries, and explore the strange hidden valley.

We begin with a mere glimpse of what is to come along this road. Still, even this glimpse has been enough to frighten some. Indeed the (potentially apocryphal) tale of the first man to tread this road, a member of the Pythagorean Brotherhood, makes this very clear. The story goes that the insight came to him while travelling by ship on the Aegean. Excited, he explained his cunning proof to the fellow members of the Brotherhood aboard the boat. They were so horrified by the implications that they immediately pitched him overboard, and he drowned. For the secretive Pythagorean Brotherhood, who believed that reality was simply numbers, mathematics was worth killing over.

So what was this truth that the Brotherhood was willing to kill to keep secret? The fact that the square root of 2 is not expressible as a fraction. The proof of this is surprisingly simple, and runs roughly as follows. Let’s presume that √2 can be expressed as a fraction, and so we have numbers n and m such that √2 = n/m. As you may recall from A Fraction of Algebra, a particular fraction is really just a chosen representative of an infinite number of ways of expressing the same idea — we can choose whichever representative we wish. For the purposes of the proof we will assume that we have chosen n/m to be as simple as possible (i.e. there is no common factor that divides both n and m); you may want to verify for yourself that such a thing is always possible (it’s not too hard). Now, using the allowable manipulations of algebra we have:

√2 = n/m

⇒ 2 = n^{2}/m^{2}

⇒ 2m^{2}= n^{2}

Now 2m^{2} is an even number no matter what number m is, so n^{2} must be an even number as well. However, an odd number squared is always odd (again, this is worth verifying yourself if you’re uncertain; it isn’t hard). That means the only way n^{2} can be even is if n itself is even. That means there must be some number x such that 2x = n. But then

2m^{2}= n^{2}

⇒ 2m^{2}= (2x)^{2}

⇒ 2m^{2}= 4x^{2}

⇒ m^{2}= 2x^{2}

and so m^{2}, and hence m, must also be even. If both n and m are even then they have a common factor: 2; yet we specifically chose n and m so that wouldn’t be the case. Clearly, then, no such n and m exist, and we simply can’t express √2 as a fraction!*
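For readers who like to experiment, the impossibility can be probed (though of course never proved) computationally. The following Python sketch — the function name is my own invention — searches every fraction with denominator up to a given bound and confirms that none of them squares to exactly 2:

```python
from fractions import Fraction

def fraction_with_square_two(max_denominator):
    """Search all fractions n/m with m up to max_denominator for one
    whose square is exactly 2.  The proof above guarantees the search
    always comes up empty; code can only check finitely many cases."""
    for m in range(1, max_denominator + 1):
        # sqrt(2) < 2, so n/m <= 2 is a safe upper bound for n
        for n in range(1, 2 * m + 1):
            if Fraction(n, m) ** 2 == 2:
                return Fraction(n, m)
    return None

print(fraction_with_square_two(500))  # None
```

Note that exact `Fraction` arithmetic matters here; floating point would introduce rounding, and with it spurious near-misses.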

This result (if not necessarily the proof) is well known these days; sufficiently so that many people take it for granted. It is therefore worth probing a little deeper to see what it actually means, and perhaps gain a better understanding of why it so incensed the Pythagorean Brotherhood. The first point to note is that √2 does crop up in geometry: if you draw a square with sides of unit length (and we can always choose our units such that this is so) then, by Pythagoras’ Theorem, the diagonal of the square has length √2. That, by itself, is not necessarily troubling; but consider that we’ve just seen that √2 is not expressible as a fraction. Recall that a fraction can be considered a re-interpretation of the basic unit, and you see that what we’re really saying is that there simply doesn’t exist a unit of length such that the diagonal of the square can be measured with respect to it. If you were measuring a length in feet and found that it was between 2 and 3 feet then you could simply change your units and work in inches — the distance is hopefully an integer number of inches. If inches aren’t accurate enough we can just use a smaller unit again (eighths of an inch for example). What we are saying when we say that √2 cannot be expressed as a fraction is that, no matter how small a unit we choose, we can still *never* accurately measure the diagonal of the square. Because we can simply keep dividing indefinitely to get smaller and smaller units, that means we need *infinitely* small units. And note the difference here: unlike in Part I, arbitrarily small is not good enough; we need to go past arbitrarily small to actually infinitely small. For the Pythagoreans infinity was unreachable — something that could never be completed or achieved — and thus an infinitely small unit could never be realised. Therefore, in their world-view, the diagonal of a square couldn’t exist since its length was an unreachable, unattainable distance**. That, as you can imagine, caused quite a bit of cognitive dissonance! Hence their desire to pretend such a thing never happened.

As you can see, it turns out (even though it may not have looked that way at first) that we are really butting our heads up against infinity again, just from a different direction this time. Things get worse however: if we had a line of length 2 then there surely exists a point somewhere along that line that is a distance of √2 away from the origin. We have just seen, however, that such a distance is not one we can deal with in terms of fractions. If we were to put points at every possible fractional distance between 0 and 2 we would have a hole at √2, and continuous lines don’t have holes in them. A new problem starts to raise its head.

If we wish to have a continuum we have to fill in all the holes. The question is how we can do that — where exactly are the holes? And, for that matter, how many holes are there? The first of these questions turns out to be rather easier than the second (which we will address next time we venture down this fork of the road). The trick to finding holes is to note that, since fractions allow us the arbitrarily (if not infinitely) small, we can get arbitrarily close to any point in the continuum, holes included. That is, while we can’t actually express a hole in terms of fractions, we can sidle up as close beside it as we like using only fractions. And that means we must reach again for the useful tools of *distance* and *convergence* to determine that we are getting closer and closer to, and hence converging to, a hole.

For our current purposes the definition of distance between numbers defined in Part I will be sufficient. What we want to do is figure out a way to ensure that a sequence of fractions converges — that is, that it gets closer and closer to *something*, without necessarily knowing what the something is. The trick to this is to require that the distance between different terms in the sequence gets smaller and smaller. In this way we can slowly but surely squeeze tighter and tighter about a limit point, without necessarily knowing what it is that we are netting. More formally, if we have an infinite sequence S_{1}, S_{2}, S_{3}, … then we require that for any ε > 0 there exists an integer N≥1 such that, for all m, n>N ,

|S_{m}− S_{n}|<ε

(recalling that |x − y| gives the distance between numbers x and y). Such a sequence is called a *Cauchy sequence*. Now, since any Cauchy sequence converges to something, we can identify (consider equivalent) the sequence and the point at its limit. Furthermore, since we know that using fractions we can get arbitrarily close to any point on the continuum, there must be some sequence of fractions that converges to that point, and so if we consider all the possible infinite Cauchy sequences of fractions, we can cover all the points on the continuum — we are assured that no holes or gaps can slip in this time. We’ve caught all the holes — without even having to find them!
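As a concrete (and necessarily finite) illustration of the definition — the helper name below is mine, and checking finitely many terms can only suggest, never establish, the Cauchy condition — here is how one might hunt for the N in question for a particular sequence of fractions:

```python
from fractions import Fraction

def find_cauchy_N(seq, eps):
    """Look for an N such that every pair of terms after the Nth is
    within eps of each other -- the Cauchy condition, checked over the
    finitely many terms we happen to have.  seq[0] plays the role of S_1."""
    for N in range(1, len(seq) + 1):
        tail = seq[N:]  # the terms S_{N+1}, S_{N+2}, ...
        if all(abs(a - b) < eps for a in tail for b in tail):
            return N
    return None

# Partial sums 1/2 + 1/4 + ... + 1/2^n: a Cauchy sequence of fractions
# squeezing in on 1.
sums, total = [], Fraction(0)
for n in range(1, 21):
    total += Fraction(1, 2 ** n)
    sums.append(total)

print(find_cauchy_N(sums, Fraction(1, 1000)))  # 9
```

For ε = 1/1000 the search settles on N = 9: beyond the ninth term the partial sums all sit within 1/1000 of one another, even though the code never mentions the limit 1.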

It is worth looking at an example: can we find a sequence of fractions that converges to √2? Consider the decimal expansion of √2 which starts out 1.41421… and continues on without any discernible pattern; clearly the sequence 1, 1.4, 1.41, 1.414, 1.4142, 1.41421, … (where the nth term agrees with √2 for the first n−1 decimal places) converges to √2. More importantly each term can be rewritten as a fraction since each term has only finitely many non-zero decimal places; for example 1.4 = 14/10 and 1.4142 = 14142/10000 etc. Finally it is not hard to see that this sequence is a Cauchy sequence. We can do the same trick for any other decimal expansion, arriving at a Cauchy sequence that converges to the point in question. Of course there are many other Cauchy sequences of fractions that will converge to these values: we are dealing with something similar to our dilemma with fractions when we found that there were an infinite number of different pairs of natural numbers that all described the same fraction. In that case we simply selected a particular representative pair that was convenient (and could change between different pairs that represented the same fraction if it was later convenient to think of the fraction that way). We can do the same here: noting that a point is described by an infinite number of Cauchy sequences, we can simply select a convenient representative sequence to describe the point. For our purposes the sequence constructed via the decimal expansion will do nicely — in some sense you can think of the Cauchy sequence as an infinite decimal expansion.
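This construction is easy to sketch in a few lines of Python (the variable names are mine): build the truncated decimal expansions of √2 as exact fractions and watch consecutive terms crowd together.

```python
from fractions import Fraction

# The sequence 1, 1.4, 1.41, 1.414, ... as exact fractions: each term
# keeps one more decimal digit of sqrt(2).
digits = "141421356"  # the opening digits of sqrt(2)
terms = [Fraction(int(digits[:k + 1]), 10 ** k) for k in range(len(digits))]

print(terms[1], terms[4])  # 7/5 7071/5000, i.e. 14/10 and 14142/10000 reduced

# Consecutive terms differ by less than 1/10^k -- the squeezing that the
# Cauchy condition demands (at least for the terms we can look at):
gaps = [terms[k + 1] - terms[k] for k in range(len(terms) - 1)]
assert all(gap < Fraction(1, 10 ** k) for k, gap in enumerate(gaps))
```

Notice that `Fraction` automatically reduces 14/10 to 7/5 — each term is itself one representative of the infinitely many pairs that describe the same fraction.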

Now that we at least have some idea of what these sequences might look like, it is time to take a step back and consider what is actually going on here. Back in The Slow Road we constructed natural numbers as a property of collections of objects. Then, in A Fraction of Algebra, we created fractions to allow us to re-interpret an object within a collection. This was another layer of abstraction — fractions were not really numbers in the same way that natural numbers were — fractions were a way of re-interpreting collections, and we could describe those re-interpretations by pairs of natural numbers. Perhaps rather providentially it turned out that the rules of algebra, the rules of arithmetic that were true no matter what natural numbers we chose, also happened to be true no matter what fractions we chose. It is this stroke of good fortune, combined with the fact that certain fractions can take the role of the natural numbers, that allows us to treat what are really quite different things in principle (fractions and natural numbers) as the same thing in practice: for practical purposes we usually simply consider natural numbers and fractions as “numbers” and don’t notice that, at heart, they are fundamentally different concepts. Now we are about to add a new layer of abstraction, built atop fractions, to allow us to describe points in a continuum. While all that was required to describe the re-interpretation of objects that constituted a fraction was a pair of numbers, points in a continuum can only*** be described by an infinite Cauchy sequence of fractions. Thus, in the same way that natural numbers and fractions are actually very different objects, so fractions and points in a continuum are quite different. Again, however, we find that when we define arithmetic on sequences (which occurs in the obvious natural way) they all behave appropriately under our algebraic rules. When we consider that it is easy enough to find sequences that behave as fractions (any constant sequence for instance) it is clear that, again, for practical purposes, we can call these things numbers and assume we’re talking about the same thing regardless of whether we are actually dealing with natural numbers, fractions, or points in a continuum.

It should be pointed out that sometimes these distinctions are actually important. A simple example is computer programming, which does bother to distinguish floating point numbers (ultimately fractions) from integers. You can usually convert or cast from one to the other via a function (and at times that function can be implicit), but the distinction is important. Later we will start getting into mathematics where the distinction becomes important.
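In Python, for instance (most languages behave similarly), the distinction looks like this:

```python
# Integers and floating point numbers are distinct types, with
# conversion ("casting") between them -- sometimes explicit, sometimes
# implicit.
x = 7             # an int
y = float(x)      # explicit cast to floating point
z = x / 2         # implicit: true division of ints yields a float

print(type(x).__name__, type(y).__name__, type(z).__name__)  # int float float

# A float is ultimately a (binary) fraction of limited precision, so it
# can only approximate most points of the continuum:
print(0.1 + 0.2 == 0.3)  # False: none of these are exactly representable
print(int(2.9))          # 2: casting back to int truncates the fraction
```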

So, now that we have this construction, several layers of abstraction deep, that allows us to describe the continuum, does it resolve the problem the Pythagorean Brotherhood struggled with? Certainly within the continuum there is a point corresponding to √2, but even with our construction it is the *limit* of an infinite sequence — we still require a completed infinity. Of course accepting the idea of a completed infinite would get us out of this conundrum; what we require is a coherent theory of the completed infinite — were we to have that, then we needn’t fear the idea as the Pythagorean Brotherhood did. The next time we venture along this particular road we will discuss just such a theory, and explore the remarkable transfinite landscape that it leads to. We would be remiss to conclude here, however, without noting that there is some dissent on this topic. While the theory of the continuum based on completed infinites we will cover is remarkably widely accepted and used, there are still those who do not wish to have to deal with the completed infinite. So what is the alternative?

The idea is to construct a continuum using infinitesimals: a number ε such that we have ε^{2}=0, yet ε≠0. Using such a value we can create a continuum without holes as desired. If adding a seemingly arbitrary new element to the number system seems like cheating, remember that both fractions, and the infinite decimals via Cauchy sequences, are just as much artificial additions to the number system — they just happen to be ones we’re familiar with and now take for granted. The real dilemma is that, assuming the required properties of infinitesimals, we can deduce contradictions. As we noted at the start of this post, when a mathematical argument leads you somewhere you don’t wish to go you are left having to challenge the very foundations of logic itself. Surprisingly, that turns out to be the resolution: smooth infinitesimal analysis rejects the law of the excluded middle. The logic used for this alternative conception of the continuum rejects the idea that, given a proposition, either the proposition is true, or its negation is true. That means that saying that x=y is not true does *not* mean that x≠y. This sounds like nonsense at first, because we generally take the law of excluded middle for granted, and it is ingrained in our thinking. We have to remember, however, that this is a theory dealing in *potential*, but not completed, infinites, and it is that key word “potential” that helps clarify things. Consider two numbers x and y that have infinite decimal expansions; are they equal, or are they unequal? We can check the first decimal place, and they might agree; that does not mean they are equal, they might disagree further down; nor does it mean they are unequal, since they might indeed agree.
We can check the first billion decimal places, and they might still agree; that does not mean they are equal, since it might be the billion and first decimal place at which they disagree; and yet we still can’t conclude they are unequal — they’ve agreed so far and could continue to do so. We could even check the first 10^{20} decimal places, and still we can’t conclude either way whether x and y are equal or unequal. Because we can never complete the infinity and check *all* the decimal places, unless we have more information (such as that both numbers are integers****), it is not possible to conclude either way — we have an in-between state where the numbers are neither equal, nor unequal, and it is this in-between possibility that causes the law of excluded middle to fall apart. To say that x=y is not true simply means we have not yet concluded that x=y, but that does not require that we must have concluded x≠y since we may still be torn in between, unable to reach a conclusion. As strange as this sounds at first, it actually provides a surprisingly natural and intuitive model of the continuum — and a remarkably different one from the classical one we will be developing. Enough sidetracks, however; it is time to return to the path.
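(One last aside before we return to the path: the digit-by-digit comparison above can be sketched in code. This is a hedged illustration of the "potential infinity" problem — the generator and function names are my own — in which any finite prefix can prove two digit streams unequal, but can never prove them equal.)

```python
from itertools import islice

def compare_prefix(stream_a, stream_b, places):
    """Compare two infinite digit streams over finitely many places.
    A differing digit settles the matter; total agreement settles nothing."""
    for a, b in zip(islice(stream_a, places), islice(stream_b, places)):
        if a != b:
            return "unequal"
    return "unknown"  # agreed so far -- but they might still differ later

def thirds():             # the digits of 0.3333... = 1/3
    while True:
        yield 3

def almost_thirds(n):     # agrees with 1/3 for n places, then differs
    for _ in range(n):
        yield 3
    while True:
        yield 4

print(compare_prefix(thirds(), almost_thirds(100), 50))   # unknown
print(compare_prefix(thirds(), almost_thirds(100), 200))  # unequal
```

No matter how large we make `places`, agreement only ever yields "unknown" — the in-between state that undermines the law of excluded middle.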

We have rounded the bend, and can make out the rough expanse of the landscape below, but the land itself remains unexplored, and potentially quite alien. The next time we return to this road we’ll try and understand the implications of a continuum of completed infinites, including a variety of initially unsettling results. In the meantime, however, we will return to the study of patterns and symmetry, and try and build a robust theory from our simple examples.

* This is, of course, a precis version of the proof. The devil is always in the details, and many details here have been glossed over as obvious, or left for the reader to verify. If you are interested in the nitty-gritty however, I recommend you try the Metamath proof of the irrationality of √2. Each and every step in the proof is referenced and linked to an earlier theorem previously proved. By following the links you can drill all the way down to fundamental axioms of logic and set theory. If you don’t care to follow the details yourself, you might note that, in this (extremely) explicit form, the proof can be machine verified.

** If you think you can get out of this by just starting with the diagonal as your unit of measure you will simply find that now the sides of the square are unmeasurable distances. The sides and diagonal of the square are *incommensurable* — we can’t measure both with the same units, no matter how fine a unit of measure we choose.

*** Technically other methods of describing such points exist. Indeed a very common formal approach is Dedekind cuts. Ultimately, however, Dedekind cuts represent more detail than we need right now, and will serve more as a distraction than anything. The interested reader is, however, encouraged to investigate them, and puzzle out why I chose to go with Cauchy sequences here.

**** Why does knowing that both numbers are integers help? Because integers have fixed decimal expansions – the first decimal place is necessarily the same as all the rest: 0. As long as they agree up to the first decimal place, we are done. An exercise for the interested reader: what other knowledge about the numbers might allow us to conclude equality or inequality?

May 1, 2007 at 7:13 pm

For those who were anticipating Zeno’s third paradox, Cantorian transfinite theory, and the Continuum Hypothesis, I must apologise. Things took a slightly different direction than I had originally thought, and the result was a much longer entry here. I like to keep each entry to a manageable size, so that has meant that Cantorian theory and the CH have been deferred till next time. Zeno’s third paradox has slipped even further down the ladder, and will not make its re-appearance till quite a bit later on.

May 2, 2007 at 2:07 am

That’s the beauty of the path… even you never know what’s next until you go there yourself.

I need to go over previous ground again, and try to keep all these abstractions straight.

May 3, 2007 at 2:38 am

Isn’t the inability to check an infinite number of digits, and hence the inability to decide if two numbers are equal, a property of our own minds, our own lack of knowledge, rather than a property of the numbers themselves?

May 3, 2007 at 4:22 am

— Isn’t the inability to check an infinite number of digits, and hence the inability to decide if two numbers are equal, a property of our own minds, our own lack of knowledge, rather than a property of the numbers themselves?

Aren’t numbers (and hence all their properties) just creations/abstractions of our own minds? I don’t think that numbers have any properties or existence independent of our minds (though of course Pythagoras would have it otherwise). I think the inability mentioned above is a result of the rules we set up when we first talked about numbers. Since these seem inadequate for the task at hand, we might revise them.

May 3, 2007 at 11:12 pm

We are free to create axioms. We are free to create rules. After that, we have no freedom. Some things follow, some don’t; once the axioms & rules are created, the paths before us are fixed. In that sense I’d say there is some sort of independence from our minds.

But anyway, ok, I get that two numbers can be neither equal nor unequal by rules that’ve been laid down, but still I object to the post text I quoted. Or rather, I object to the wording. Maybe I’d feel better if it said “The Math can’t ever conclude” instead of “we can’t ever conclude”. Hmm.

May 4, 2007 at 12:35 am

Well this is a difficult point, because it represents a deep philosophical division in mathematics — is mathematics a human creation, or does it have some platonic existence? Those who hew to intuitionist math (and reject the law of excluded middle) tend to take the former rather than latter view. Ultimately I tend to take a pluralist view. You are right in that it is the rules that force our conclusion — in a world where there is only the *potentially* infinite the numbers remain *potentially* equal and also *potentially* unequal. In that sense they are neither. I may revise the wording along the lines suggested to make that clearer.

It is worth noting, btw, that while computers tend to use Boolean logic, and hence we tend to expect the bivalence of the law of excluded middle, rejecting it and allowing only the potentially infinite (and the constructable/calculatable) is important for much computer science. How do we know if two computed numbers are equal? If they have infinite precision then we don’t, at least not until the computation is finished — and who knows when that might be?

May 7, 2007 at 12:16 am

“since any Cauchy sequence converges to something”

This is incorrect according to your definition of convergence in Part I which requires knowing the limit first. With an unknown limit you cannot have a convergent sequence. Your new definition does not define a limit point but an arbitrarily small limit range, but choosing a limit from an arbitrarily small range still can allow an arbitrarily large number of “holes or gaps to slip in”. Then there is the issue of the relationship between your definitions.

“since we know that using fractions we can get arbitrarily close to any point on the continuum, there must be some sequence of fractions that converges to that point”

That doesn’t follow (unless you beg the question), at least not without additional non-obvious assumptions and arguments.

Undefined or ambiguous phrases such as “arbitrarily large” or “infinitesimally close” are fine in normal discourse, but not when you’re trying to avoid precisely the subtle errors this sort of language engenders.

May 9, 2007 at 3:27 am

Toby: Cauchy sequences have limit points. That requires a little work to prove, but I’m not in the habit of providing proofs for anything that isn’t simple: I am trying to express ideas, not write a textbook; I didn’t prove integers are commutative either for example, I simply stated it. Given the limit point, convergence falls out almost immediately. The trick is that you can test for the Cauchy condition without knowing what your limit point is, while you cannot test for convergence without a limit point already in mind.

As to the ability to express any point on the continuum as a limit point for a convergent sequence of fractions — a constructive proof would be hard, but we aren’t discussing a constructive continuum here (that is referred to later when we exclude LEM). Given the axiom of choice we can certainly do this — there is no problem here. You could claim AoC is non-trivial, but this is a top down approach, not a bottom up textbook. We’ll come to such things as difficulties get raised. For now intuitive ideas are close enough to express the concepts.

May 21, 2007 at 3:38 am

Damn this is good stuff.

Best,

Brad Neuberg

July 2, 2007 at 7:04 pm

[…] The classic example of something infinite and larger in size than the natural numbers is the continuum (at least as classically conceived; the constructivist/intuitionist continuum is a little more tricky on this front) as discussed in Paradoxes of the Continuum, Part II. In that post we determined that points on the continuum were able to be identified with Cauchy sequences, which were akin to (though a little more technical than) infinite decimal expansions. We’ll stick with infinite decimal expansions here as most people have a better intuitive grasp of decimals than they do of Cauchy sequences or Dedekind cuts. To make things simple we’ll consider the continuum ranging between 0 and 1; that is, all the possible infinite decimals between 0 and 1, such as 0.123123123… We do have to be a little bit careful here since, as you should recall from Paradoxes of the Continuum, Part II, in the same way that there are many fractions that represent the same ratio, there were many Cauchy sequences that represent the same point in the continuum, and in particular there are different decimal expansions that represent the same point, such as 0.49999999… and 0.50000000… ***. To be careful we have to make sure we always pick and deal with just one representative in all such cases; to do this we can simply only consider representations that have infinitely many non-zero places. Showing this is sufficient, and still covers all real numbers between 0 and 1 isn’t that hard, but amounts to some technical hoop jumping that is necessary for formal proofs, but not terribly elucidating for discussions such as this. Suffice to say that it all works out. […]

October 24, 2007 at 3:31 pm

[…] spent some time contemplating and discussing the intricacies of the infinite. We started off with a very natural abstraction, and quickly got […]

December 20, 2007 at 7:42 am

I would like to see a continuation of the topic
