Last time we managed to add an infinite series, starting with 1/2 then adding, one by one, all the higher powers of 1/2. The result was 1. As a consequence, we said that Achilles overtakes the turtle after one unit of time. For many mathematicians, that solves Zeno's paradox once and for all. There's a problem though: we "took a limit," and when we do that, as we shall see, we are working in the realm of infinity. This doesn't frighten mathematicians: they are used to it. Still, infinity presents us with many strange results and further paradoxes (results which are contrary to "common sense"), and that will be the theme of this lecture.

The first strange result we'll deal with is the following: suppose we have a line segment AB and another twice as long, CD. Which has more points? You may be tempted to say, "Of course the longer one has more points," but this is wrong. As far as I know, Galileo was the first to realize that the two segments have the same number of points (clearly an infinite number). To see that, we must first make sure we know what we mean by "having the same number of points," or "having more points." In other words, we must analyze what we do when we count things. Let us start with the most primitive form of counting, let's count with our fingers. Here's a group of people, call them Mary, John, Sue and Tim: let's count them. I look at Mary and put out my index finger, then I look at John and stick out my middle finger, then Sue and show my ring finger, finally Tim and my pinkie; then I say: there are the same number of people as there are fingers on my hand, excluding my thumb. A fact which you may not find surprising but which is pretty amazing if you think about it: no matter in which order I single out those people (and there are 24 different ways of doing it!), I'll arrive at the same count: all my fingers except my thumb. Never mind that we have a name for that number: four. Names vary from language to language. The important thing we must keep in mind is that we assigned, in quite an arbitrary fashion, a finger to each person, and that we were very careful, first, not to assign two different fingers to the same person, and second, to assign a finger to each and all of them. If we don't use fingers for counting, usually we use counting words: we assign the word "one" to John, say, the word "two" to Mary, "three" to Sue and "four" to Tim. Since we covered all people, we say there are four of them. 
In any case, it is as a result of this operation of assigning the elements of one set, be it fingers or counting words, to each element of the other set of people, that we can say, "There's the same number of people as fingers in my hand except for the thumb, or counting words from 'one' to 'four'." Let's generalize: whenever we have two sets of things, S and T, we will say that they have the same number of elements when we can set up a correspondence, to each element of S an element of T, in such way that two different elements of S never correspond to the same element of T, and so that any element of T corresponds to some element of S (we have covered all of T). We will call such a correspondence a one-to-one correspondence or an "equivalence" between the two sets S and T. So when two sets are equivalent we say they have the same number of elements.
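The two conditions in this definition can be checked mechanically. Here is a small sketch (the names and the dictionary are just illustrative) pairing our four people with the counting words:

```python
# A one-to-one correspondence between two finite sets, sketched as a
# Python dict: each person (element of S) is assigned a counting word
# (element of T).
correspondence = {"John": "one", "Mary": "two", "Sue": "three", "Tim": "four"}

people = {"Mary", "John", "Sue", "Tim"}
words = {"one", "two", "three", "four"}

# Condition 1: two different people never get the same word
# (the assigned words are all distinct).
assert len(set(correspondence.values())) == len(correspondence)

# Condition 2: every person got a word, and every word is assigned
# to someone (all of T is covered).
assert set(correspondence.keys()) == people
assert set(correspondence.values()) == words

print("S and T have the same number of elements:", len(people))
```

If either assertion fails, the assignment is not a one-to-one correspondence, and the two sets need not have the same number of elements.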

Now nothing stops us from generalizing what we did with finite sets to infinite sets, and that's what we are going to do. We will say that two infinite sets have the same number of elements if they are equivalent, that is, if we can find a one-to-one correspondence between them. And here's what Galileo realized: we can indeed do that with our two segments AB and CD. Let's construct any isosceles triangle over our longer segment CD, and call the other vertex E, so that CE = DE. Now place the shorter segment AB so that its ends A and B lie on the two sides CE and DE of our triangle, and so that AB is parallel to CD. Now any straight line passing through the point E and crossing the segment AB will also cross the segment CD, and thus a one-to-one correspondence will be established between the points of the segment AB and those of CD, proving that they have the same number of points. It is clear that the shorter segment AB can be considered a part (half) of the longer segment CD, so what we have done is to prove that the whole is NOT greater than the part! So Aristotle was wrong, and Euclid too, when they supposed that it was a self-evident truth that the whole must be greater than a part.
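Galileo's correspondence can be computed numerically. In this sketch the coordinates are chosen only for illustration: C = (0, 0), D = (2, 0), apex E = (1, 1), and AB is the midline of the triangle, so AB has length 1, half of CD.

```python
# A numerical sketch of Galileo's correspondence between the points of
# AB and the points of CD, using lines through the apex E.
def project_through_E(p):
    """Extend the line from E = (1, 1) through a point p on AB until it
    hits the base y = 0; return the intersection point, which lies on CD."""
    ex, ey = 1.0, 1.0
    px, py = p
    t = ey / (ey - py)          # parameter where the line reaches y = 0
    return (ex + t * (px - ex), 0.0)

# The point a fraction s of the way along AB lands exactly a fraction s
# of the way along CD: different points of AB go to different points of
# CD, and every point of CD is hit, so the correspondence is one-to-one.
for s in [0.0, 0.25, 0.5, 1.0]:
    a = (0.5 + s, 0.5)           # point on AB (A = (0.5, 0.5), B = (1.5, 0.5))
    c = project_through_E(a)     # corresponding point on CD
    print(s, "->", c)
```

The endpoints check out: A projects to C and B projects to D, and everything in between matches up proportionally.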

Let's give other examples. Take all natural numbers: 1, 2, 3, etc. and so on. It's an infinite set. Now look at a subset of it, the even numbers: 2, 4, 6, etc. It is also an infinite set, and it is part of the previous one. Which has more elements? Actually, they have the same number. For we can set up a one-to-one correspondence as follows: 1 goes to 2, 2 goes to 4, 3 goes to 6, and in general n corresponds to 2n. Two different numbers never go to the same one, and all the even numbers are covered, so this is indeed a one-to-one correspondence. There are as many even numbers as there are numbers period. The same argument can be applied to the multiples of 3, or to the prime numbers: all these sets have the same number of elements as the set of all the natural numbers.
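The pairing n → 2n can be tabulated for as many values as you like; here is a short sketch for the first ten:

```python
# The correspondence n -> 2n between the natural numbers and the even
# numbers, shown for the first few values.
pairs = [(n, 2 * n) for n in range(1, 11)]
for n, even in pairs:
    print(n, "<->", even)

# Two different numbers never go to the same even number...
evens = [e for _, e in pairs]
assert len(set(evens)) == len(evens)
# ...and every even number up to 20 is covered.
assert evens == [2, 4, 6, 8, 10, 12, 14, 16, 18, 20]
```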

At this point you may object that our definition of "having the same number of elements" is not the right one because it is counter-intuitive, and that's why we get these paradoxes. Well, try to give another one, which will agree with the usual one when we are dealing with finite sets. I think you'll find it very hard; and it's no good to say: "A set has the same number of elements as another set when it looks that way." For what "looks that way" lies in the eye of the beholder, and we may not be able to apply logical thought to it. Another objection you may raise is that with our definition, it seems that ALL infinite sets have the same number of elements. This is not so. Georg Cantor, a German mathematician (1845-1918), proved that the set of natural numbers (1, 2, 3, etc.) does not have the same number of elements as the set of all real numbers between 0 and 1. What is such a real number? It is a decimal expansion (possibly infinite): 0.abcd..., where the letters a, b, c, etc. after the decimal point are any digits whatsoever. Cantor proved that the infinite set of all such real numbers has MORE elements than the infinite set of all natural numbers. The proof is very famous and very ingenious, and it involves the logical principle of non-contradiction, but I will not give it here.

Let's move on to other paradoxes involving
the concept of infinity. Let us try to add more infinite series.
Suppose we start from 1/2 as before, but now we keep adding 1/2
all the time. Clearly these sums get larger and larger; in fact
they get as large as you want. For example, if we want to get
a million for the sum, it's enough to add up two million terms
(each equal to 1/2). So this sum is not any real number: we say
that this series __diverges__. The same will happen if you
keep adding the same number over and over again, unless the number
is 0. You may say: of course, this happens because our terms
are not getting smaller and smaller. But things are not so simple.
Let's look at the infinite series: 1/2 + 1/3 + 1/4 + 1/5 +...
Now our terms __are__ getting smaller and smaller. Yet this
series also diverges. To see it, let's add it up in the following
way: take the first term, 1/2. Now add to it the next __two__
terms, 1/3 and 1/4. Observe that since 1/3 is larger than 1/4,
these two terms together add up to more than 1/4 + 1/4 = 1/2.
So far, then, when we add the first three terms we get something
larger than 1/2 + 1/2. Next, take together the terms 1/5 + 1/6
+ 1/7 + 1/8: again, since each of them is larger or equal to 1/8,
our result will be larger than 1/8 + 1/8 + 1/8 + 1/8 = 1/2. So
these four terms together add up to something larger than 1/2.
Continue in this way: next take all the terms starting with 1/9
all the way up to 1/16: their sum is again larger than 1/2! Since
we can go on like this forever, our sum will be larger than if
we added up an infinite number of terms equal to 1/2, which, as
we saw already, diverges. So this series diverges too, even though
its terms are getting smaller all the time. So, you see, the
terms shrinking toward 0 is a necessary but not a sufficient condition
for a series to converge to a finite sum. This is not really
a paradox, and believe it or not, this result is quite old: it
goes back to the middle ages, to the 1300's.
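The grouping argument can be checked numerically. Each block of terms running from 1/(2^k + 1) through 1/2^(k+1) adds up to more than 1/2, so the partial sums grow without bound:

```python
# The medieval grouping argument for the divergence of
# 1/2 + 1/3 + 1/4 + ..., checked with exact fractions: every block
# 1/(2**k + 1) + ... + 1/2**(k+1) exceeds 1/2.
from fractions import Fraction

for k in range(1, 6):
    block = sum(Fraction(1, d) for d in range(2**k + 1, 2**(k + 1) + 1))
    print(f"terms 1/{2**k + 1} ... 1/{2**(k + 1)}: sum = {float(block):.4f}")
    assert block > Fraction(1, 2)
```

The first block is 1/3 + 1/4, the next is 1/5 + ... + 1/8, then 1/9 + ... + 1/16, exactly as in the argument above.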

Let's do something a little newer. Let's take positive and negative terms in our sum. We will take positive 1/2, then negative 1/3, then positive 1/4, and so on. So our new series is: 1/2 - 1/3 + 1/4 - 1/5 + 1/6 ... All even denominators correspond to a plus, all odd ones to a minus. It turns out (as you may see if you take Calculus II) that this new series has a finite sum: in case you're interested, this sum is 1 - ln 2, where ln 2 is the natural logarithm of 2 (you can find it on your calculator). As you see, the alternating signs make a tremendous difference, which is not surprising, since there's a lot of cancellation here. But here's the surprising, the paradoxical result: when you are adding this series, which is called the alternating harmonic series, you get different results for the sum if you change the order of the terms. Didn't we learn in elementary school that addition is commutative, that the order in which you add doesn't matter? Well, that's true for finite sums, but not for infinite ones. As a matter of fact, the full result may be called weird, more than simply paradoxical: give me any number whatsoever, positive or negative; then I can rearrange the order of the terms so that my new series will add up to that number! I can also, by rearranging the terms conveniently, make it diverge. So anything can happen, if you are allowed to change the order of the terms.
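The rearrangement trick can be sketched as a greedy procedure (a hypothetical little program, not how one would prove the theorem): keep taking positive terms until the running sum exceeds the target, then negative terms until it drops below, and repeat. Because the positive terms alone diverge, and so do the negative ones, the procedure never gets stuck, and the partial sums home in on whatever target you choose.

```python
# A sketch of rearranging 1/2 - 1/3 + 1/4 - 1/5 + ... to hit a chosen
# target: even denominators give positive terms, odd ones negative.
def rearranged_partial_sum(target, steps):
    pos = iter(range(2, 10**7, 2))   # denominators of positive terms: 2, 4, 6, ...
    neg = iter(range(3, 10**7, 2))   # denominators of negative terms: 3, 5, 7, ...
    total = 0.0
    for _ in range(steps):
        if total <= target:
            total += 1.0 / next(pos)   # below target: add a positive term
        else:
            total -= 1.0 / next(neg)   # above target: subtract a negative term
    return total

print(rearranged_partial_sum(1.0, 100_000))   # close to 1.0
```

After each crossing of the target, the overshoot is at most the size of the last term used, and those terms shrink toward 0, which is why the rearranged sums converge to the target.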

Now, just for fun, let's leave for a
moment the paradoxes of infinite sums and stick to finite ones.
When Gauss (1777-1855) was in grammar school, he was misbehaving,
so as punishment (and to keep him quiet) the teacher told him
to add up all the natural numbers up to 1,000. It took Gauss
just one minute to arrive at the answer: 500,500. The teacher
couldn't believe it, so she (or he) did the sum laboriously, one
number at a time: indeed, the answer was correct. How did little Gauss do
it? He had found the general formula: to get the sum of all numbers
starting from 1 and up to any number *n*, you just multiply *n* times
*n*+1 and divide by 2. So he just multiplied 500 times 1,001.
You should try to prove this formula; after all, if little Gauss
could... of course, he went on to become one of the greatest mathematicians.
Gauss was certainly not the first to discover this formula; it's
been known for a very long time. What kinds of numbers do you get out
of it? Let's see: if *n* is 1 you just get 1, if *n* is 2 you get
3, if *n* is 3 you get 6, and so on. Here's a list of the first
few: 1, 3, 6, 10, 15, 21, etc. These are called "triangular
numbers," and their general expression is, as I already said,
*n*(*n*+1)/2.
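Gauss's shortcut is easy to check against a brute-force sum:

```python
# Gauss's formula n(n+1)/2 versus adding the numbers one by one,
# plus the first few triangular numbers it generates.
def triangular(n):
    return n * (n + 1) // 2

assert triangular(1000) == sum(range(1, 1001)) == 500_500
print([triangular(n) for n in range(1, 8)])   # [1, 3, 6, 10, 15, 21, 28]
```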

Let's now get back to infinite sums.
In the heroic age of mathematics, famous scientists challenged
each other with unsolved problems; here's one which Christian
Huygens (1629-1695), a great Dutch mathematician and physicist
who made the first pendulum clock and created the wave theory
of light (about which we'll talk in a future lecture), proposed
to Leibniz: look at the reciprocals of the triangular numbers
and add them all up. In other words, we want to find the sum
of the infinite series: 1 + 1/3 + 1/6 + 1/10 + 1/15 +... And
here's how Leibniz solved it. He observed that the reciprocal
of the *n*th triangular number is 2/(n(n+1)), and then he observed
that this expression is equal to 2/n - 2/(n+1) (check it by taking
a common denominator). Therefore, if we add the first four terms,
say, the sum can be written as follows: (2 - 1) + (1 - 2/3) +
(2/3 - 2/4) + (2/4 - 2/5). Now, by re-grouping these terms
(without changing their order!), we get a lot of cancellations:
2 + (-1 + 1) + (-2/3 + 2/3) + (-2/4 + 2/4) - 2/5, and all those
parentheses are 0, so the only terms remaining are the first,
2, and the last one, -2/5. The same thing can be done with any
number of terms, say n of them, and the only two remaining terms
will be the first, 2, and the last, -2/(n+1). Next, Leibniz reasoned
that this last term, -2/(n+1), becomes smaller and smaller (tends
to 0) as n gets large. Therefore, the sum of the infinite series,
in other words, the limit of these sums, is just 2. End of proof.
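Leibniz's telescoping identity can be verified with exact fractions: the sum of the first n reciprocals of triangular numbers is exactly 2 - 2/(n+1), and those partial sums approach 2.

```python
# Leibniz's telescoping sum, checked exactly: the first n reciprocals
# of the triangular numbers add up to 2 - 2/(n+1).
from fractions import Fraction

def triangular_reciprocal_sum(n):
    # Reciprocal of the k-th triangular number k(k+1)/2 is 2/(k(k+1)).
    return sum(Fraction(2, k * (k + 1)) for k in range(1, n + 1))

for n in [4, 10, 100]:
    s = triangular_reciprocal_sum(n)
    assert s == 2 - Fraction(2, n + 1)   # the telescoped form
    print(n, float(s))                   # partial sums approach 2
```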

Leibniz added up a lot of infinite series of a similar kind. But adding infinite series is not an easy task (unless your series happens to be easy), and even now we don't know how to add a lot of series that appear quite natural. Mathematicians have lots of tricks for adding infinite series, but no general method. There's nothing paradoxical about that. Infinite series are not merely a mathematical curiosity: they play very important roles in scientific theory and practice.

We will now play a different game, involving
infinity, which will lead us to very paradoxical results. Imagine
we have infinitely many balls, each labeled with a natural number:
1, 2, 3, and so on. Imagine, too, that we have a very capacious
container or urn, into which we'll be putting balls. We'll do
it as follows: at one minute to 12, we'll place balls numbered
1 through 10 into the urn and delete ball number 10 (which we
just leave aside). Suppose this operation takes no time. Then,
at 1/2 minute to 12, we put balls 11 through 20 into the urn,
and delete ball 20. At 1/4 minute to 12 we place balls 21 through
30 and delete ball 30. And so on. The question is: what balls
remain inside the urn at 12 o'clock? As we have already seen,
the steps we take at 1 minute to 12, at 1/2 minute to 12, at 1/4,
1/8, 1/16, etc. minute to 12 are infinite in number, yet they
all take place before 12 o'clock. This is because the sum of
the infinite series 1/2 + 1/4 + 1/8 + 1/16 +... is equal to 1.
And regarding our question, it has a very easy answer: all balls
are inside the urn at 12 o'clock, except for those labeled with
a multiple of 10, since those are the only ones we deleted. So
far so good. Now let's change the game just a little bit. At
1 minute to 12 we will place inside the urn balls labeled 1 through
10 as before, but now we'll delete ball number 1 (instead of ball
10). At 1/2 minute to 12 we will put balls 11 to 20, and delete
ball number 2. At 1/4 minute to 12 we'll put balls 21 to 30,
and delete ball number 3, and so on. In general, at the nth step
we put balls numbered 10(n-1) + 1 through 10n, and delete ball
number n. Question: which balls remain inside the urn at 12?
Answer: none. Why is this? Well, suppose you say there must
be __some__ balls left inside the urn at 12; after all, we
are playing almost the same game as before, each time placing
10 balls inside the urn and deleting just one of them; last time
we had lots of balls left, actually infinitely many, so how come
now we have none? Okay, I say, then tell me ONE ball which is
left in the urn at 12. Suppose you answer, "Ball number
567." Then I'll say, "No, that ball was deleted at
step number 567, and left aside, never to come back." The
same argument goes for any ball you'd care to mention. So there
are no balls left inside the urn at 12. It all depends on WHICH
one ball you delete at each step: in one case there were infinitely
many balls left inside, in the other, there were none.
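A finite simulation makes the contrast vivid (a hypothetical little program; of course no simulation can run infinitely many steps). After n steps both urns hold exactly 9n balls, yet in game 1 the survivors are fixed once and for all, while in game 2 the smallest surviving ball keeps growing, so every individual ball eventually leaves:

```python
# The two urn games after a finite number of steps: at step n we add
# balls 10(n-1)+1 through 10n, then delete one ball according to a rule.
def urn_after(steps, delete_rule):
    urn = set()
    for n in range(1, steps + 1):
        urn.update(range(10 * (n - 1) + 1, 10 * n + 1))   # add ten balls
        urn.discard(delete_rule(n))                        # delete one
    return urn

game1 = urn_after(100, lambda n: 10 * n)   # game 1: delete ball 10n at step n
game2 = urn_after(100, lambda n: n)        # game 2: delete ball n at step n

print(len(game1), len(game2))   # both urns hold 900 balls after 100 steps
print(min(game2))               # but game 2's smallest remaining ball is 101
```

In game 1, ball 567 is in the urn forever (it is not a multiple of 10); in game 2 it is gone by step 567, and the same fate awaits any ball you name.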

I think this is quite a surprising result. To make it even more surprising, let's suppose we choose at each step the ball to be deleted in a random way. This is a technical expression meaning that at each step you'll choose the ball you'll delete by drawing lots from a hat, say, so that each of the balls already in the urn has the same likelihood of being chosen. Then we ask the same question as before: which balls are left in the urn at 12?

Clearly, this question doesn't have a straight answer, since we've seen before that if we happen to choose at each step, consistently, to delete balls which are multiples of 10, then there will be many left inside at 12, whereas if we choose, also consistently, to delete ball number n at step number n, then there will be no balls left at all. These consistent choices happen to be extremely unlikely if we are choosing at random, by drawing lots each time, but they are certainly possible. So the answer must involve probabilities. If you take a course in Probability Theory, you will be able to prove the following surprising result: if we play this new game, choosing the ball to be deleted at each step at random, then the probability that at 12 o'clock there are any balls left in the urn is 0. This, let me warn you, does not mean that it is impossible (we have seen, indeed, that it is possible to have balls left in the urn at 12): I can't tell you exactly what it means without getting into technical stuff, but let's just say that if you play this new game, you can safely bet that there's going to be not a single ball left inside the urn. Not a chance of you losing your money.

I have shown you a few paradoxes arising from the use of the concept of infinity, but there are many more. The important point is that infinity plays a fundamental role not only in adding series and playing with balls and urns, but in all of Calculus (the notion of "taking the limit" involves infinity) and much of mathematics, and this implies that this role is also fundamental as a basis of all our science. The moral of these stories is: at the basis of scientific thought--the most certain thought available to us--there's something, infinity, whose very nature is extremely paradoxical.