Computers and Us, a lecture by Prof. Ricardo Nirenberg for Project Renaissance, Spring 1997.

This is not going to be a sociological or economic survey of the effect of computers on our daily lives, nor will I be dealing with questions such as "Are computers good for us, are they bad, or sometimes good and sometimes bad?" Such questions are usually badly put and impossible to answer except with empty platitudes. We are going to concentrate on two very basic questions: first, What is a computer?, and second (once the first one is answered), Are we justified in saying that our brain is a computer? I'm not going to provide an answer to the second question, but only indicate some of the directions in which current scientific thought on it has been proceeding. You will note that these two questions, especially the second one, are essential in any discussion of our human identity and of technology.

Let's start with the first question: What is a computer? Obviously it is a machine, but this doesn't tell us much; we have to be more precise and specify what kind of machine. In a general sense, a computer is a machine that carries out algorithms. So we must define what we mean by this strange word, "algorithm." Actually, you are already familiar with many algorithms, from the time when you were in elementary school; the surprising fact is that only in this century have thinkers begun asking about the general meaning of the concept. The word itself comes from the name of a Persian mathematician, Mohammed ibn-Musa Al-Khwarizmi (died before 850 AD), who wrote a book titled Al-jabr wa'l muqabalah: from the first Arabic word, "Al-jabr," comes our word "algebra," whose original meaning is "putting or setting together" (indeed, in 16th-century Spanish, "algebrista" still meant one who set broken bones.) But, history apart, before we attempt a general definition of algorithm, let's look at some simple examples.

First, let us find an algorithm for computing the remainder in a division of two natural numbers, A divided by B. A simple algorithm is as follows. Take the two numbers A and B, with B not zero (this is called "the input"), and decide: is B larger than A? If yes, stop the process and print out the answer, A (that's your remainder, "the output".) If not, replace A by A-B and repeat the process: is B larger than A-B? If yes, print out the answer: A-B. If not, replace A-B by A-B-B = A-2B and do it again. At some point the process must stop, because after subtracting B enough times from A we will eventually get a number smaller than B. Once the process stops, we get our final answer, the remainder of dividing A by B. This can be summarized in a flow chart, which I'll show in the lecture. Let's say we are dividing 10 by 3: so here A=10, B=3. Since 3 is not larger than 10, we subtract: 10-3 = 7. Since 3 is not larger than 7, we subtract again: 7-3 = 4. Since 3 is not larger than 4, we subtract once more: 4-3 = 1. Now we have 1, which is smaller than 3, so we stop and print out the answer: 1. It took this "machine" three passes to arrive at the final answer or output. Of course, if the input numbers A and B were much larger, it could take many more steps, but it is always a finite number of steps, after which the "machine" will come to a stop and give us the output, and this is the important point.
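For readers who know a programming language, here is a minimal sketch of this algorithm in Python (the rendering is mine; the lecture itself works with flow charts):

def remainder(a: int, b: int) -> int:
    """Remainder of a divided by b, for natural numbers a and b, with b not zero."""
    while b <= a:          # "is B larger than A?" -- if not, keep subtracting
        a = a - b          # replace A by A - B and repeat
    return a               # once B is larger, what is left is the remainder

print(remainder(10, 3))    # 1, after three passes: 10 -> 7 -> 4 -> 1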

As a second example, we'll look at the so-called Euclidean algorithm for finding the greatest common divisor of two numbers A and B. Let us say that A is the larger one and B is not zero. As operation #1, we divide A by B and store the remainder C. Then, if C is zero we stop and print the answer or output: B (that's your greatest common divisor of A and B). If C is not zero, we replace A by B and B by C: those two new inputs are fed again into operation #1. Again, this process must come to an end, since the remainders are getting smaller and smaller, and eventually we must get remainder 0. Let's do it for the two numbers (chosen more or less at random) A=537 and B=243. (1) Divide 537 by 243 and get remainder C=51. (2) Divide 243 by 51 and get remainder 39. (3) Divide 51 by 39 and get remainder 12. (4) Divide 39 by 12 and get remainder 3. (5) Divide 12 by 3 and get remainder 0. That's it: our answer, the output, is the last remainder we got before we got 0, that is, 3. 3 is the greatest common divisor of 537 and 243 (3 is the largest number dividing both 537 and 243 exactly). Again, we can summarize all this in a flow chart, and notice that our operation #1, finding the remainder, is exactly what our first example was, so what we can do is inject that first algorithm or flow chart into this new one, making it what's called a sub-routine of the routine for finding the greatest common divisor.
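And here is the Euclidean algorithm as a sketch in Python, reusing the remainder function above as a sub-routine, just as the flow-chart description suggests:

def gcd(a: int, b: int) -> int:
    """Greatest common divisor of a and b (natural numbers, a >= b > 0)."""
    while True:
        c = remainder(a, b)    # operation #1: divide A by B, keep the remainder C
        if c == 0:
            return b           # remainder 0: B is the greatest common divisor
        a, b = b, c            # otherwise feed B and C back into operation #1

print(gcd(537, 243))           # 3, via the remainders 51, 39, 12, 3, 0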

These two examples of algorithms were chosen for their simplicity; the problem now is: can we define what we mean by algorithm in general? The question was first raised in 1900, when the German mathematician David Hilbert proposed a number of interesting problems to the mathematicians of the world meeting in Paris. Remember, incidentally, that Hilbert was mentioned in my first lecture last semester, in connection with Euclid's axiomatic method: Hilbert was the first to give a complete list of axioms for doing Euclidean geometry. Hilbert was the foremost champion of a mathematical (and philosophical) school known as Formalism. This means roughly the notion that all of math and all of logic are games with symbols, that we don't have to know what these symbols represent or stand for, but only how to put them together in strings (sentences or propositions), and how to operate with these strings so as to get new strings. In other words, all we need is symbols, a grammar, a set of rules for deducing true propositions from true propositions, and a set of axioms (propositions which we agree to consider true so we can start from somewhere). Think of the game of chess: what does the king stand for? Only for the set of rules that tell us how we are allowed to move it; obviously the king in chess need not be made of wood or have a crown, or resemble anything from real life. Once we are given those symbols, those rules and the axioms, it should be possible to play this game with a machine (think of how recently an IBM computer defeated the chess world champion!) This dream goes back some centuries, at least to Leibniz, who imagined (but did not construct) a universal language which would allow us to compute quite automatically whether any logical proposition is true or false. The only thing we have to be careful about, thought Hilbert, is that when playing with our game of symbols according to the rules we should not arrive at a contradiction, defined as "the statement that a proposition p is true and its negation, not-p, is also true". Well, one of the problems Hilbert posed (this one in its precise form in 1928) was the so-called Entscheidungsproblem, German for "Decision Problem": can we decide on the truth or falsity of any proposition belonging to a mathematical discipline, for example arithmetic (the part of math having to do with the properties of natural numbers), by means of a suitable machine or algorithm? We saw already that the proposition "Given any two natural numbers A and B there's a third number C which is the greatest common divisor of A and B" can be shown to be true by means of our Euclidean algorithm. On the other hand, there are arithmetical propositions, such as "Any even number bigger than 2 is the sum of two prime numbers," for which, still today, we do not know whether they are true or false.
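To see why such a proposition can be checked case by case and yet remain undecided in general, here is a small Python sketch (mine, not the lecture's) that finds, for any particular even number, two primes adding up to it; no amount of such checking, of course, settles the statement for all even numbers at once:

def is_prime(n: int) -> bool:
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def goldbach_witness(even: int):
    """Return a pair of primes summing to `even`, if one can be found."""
    for p in range(2, even // 2 + 1):
        if is_prime(p) and is_prime(even - p):
            return p, even - p
    return None

for n in range(4, 21, 2):
    print(n, goldbach_witness(n))   # e.g. 4 (2, 2), 6 (3, 3), 8 (3, 5), ...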

To try to solve Hilbert's problem, a British mathematician and code-breaker, Alan Turing (1912-1954), proposed in 1936 a way of defining precisely what is meant by algorithm and by computing machine. His work is the origin of modern computer science. In the following, I'll give and explain the definition of a Turing machine. We have a potentially infinite tape, or infinite amount of paper, and this tape is divided into compartments or cells. On each cell we have a symbol, of which we need at least two different ones: these could be a blank and a dash, or a zero and a one; we could also have more symbols, such as 0, 1, 2, up to 9, but we can play this game with just two symbols. The input will be a number, and this number is written on the tape according to a pre-determined coding. For example, we can agree to write a number simply with dashes or ones, so the number 4, for example, would be 1111, or ////. Before and after this input, the tape contains only blanks or zeros. Inputting the number 4 on the tape, then, would yield: ...000011110000... Or, if we use blanks (*) and dashes (/), it would look as follows: ...****////****... The next thing is, we have a reader or head, which performs the following functions: (1) it records, by a number 0, 1, 2, etc., which instruction it is following (this is called a "state" of the machine; the number of instructions, or states, must be finite; no infinite programs are allowed!), so the head knows, at each point, which part of the program it is performing; (2) the head is located at some cell, and it can replace the symbol on that cell by another, or leave it as it is; so for example it can replace 0 by 1, or 1 by 0 (or * by /, or / by *); (3) it can then move one cell to the right or one cell to the left. That is all. At some point in the program (at some instruction) we can order the machine to STOP. Once the machine stops, the tape contains the output, coded in the same pre-determined way. Notice how similar this is to a game such as chess. Also, with this kind of machine we can perform an amazing variety of computations. As an example, let's look at the following program or set of instructions. We will assume that we code with blanks (*) and dashes (/), that the head starts at some point to the right of the input, and that the input is not zero. Also, in order not to write too many words, we'll use abbreviations: instruction number 0 will be: "If the head finds * in the cell, it moves one cell to the left and goes back to instruction number 0"; we will write this as: 0 * L 0. The 0 on the left means this is instruction number 0, the * means that the head finds a * in that cell, L means move one cell to the left, and the final 0 means go back to instruction 0. Also, R means move one cell to the right, rep means replace one symbol by the other (* by / and / by *), and STOP, of course, means the machine must stop. Here's the program:

0 * L 0
0 / L 1
1 * STOP
1 / L 2
2 * STOP
2 / rep 3
3 * R 4
4 / rep 5
5 * R 6
6 / rep 0.

Even with this relatively simple program, you will notice that it is not easy to tell what the machine is actually doing, but if you are patient and diligent enough, you will find that it is giving us, as output, the remainder of dividing the input by 3 (strictly speaking, as written it halts only when that remainder is 1 or 2; handling an input that is an exact multiple of 3 would take a few extra instructions). Also, this is not the only program for performing that task; there may be others, even shorter: the art of the computer programmer is to devise programs which are efficient, in particular not too long. But, you may say, this machine only performs this particular task, finding the remainder upon division by 3; if we want to perform some other task, such as finding the remainder upon division by 7, we will need another Turing machine. As we will see next, we can invent a machine which will perform all kinds of tasks!
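If you would rather let a computer do the bookkeeping, here is a small Python simulator of exactly this machine; the dictionary-of-rules representation and the step limit are my own additions for illustration, not part of the lecture:

# Each rule maps (state, symbol read) to (action, next state).
PROGRAM = {
    (0, '*'): ('L', 0),
    (0, '/'): ('L', 1),
    (1, '*'): ('STOP', None),
    (1, '/'): ('L', 2),
    (2, '*'): ('STOP', None),
    (2, '/'): ('rep', 3),
    (3, '*'): ('R', 4),
    (4, '/'): ('rep', 5),
    (5, '*'): ('R', 6),
    (6, '/'): ('rep', 0),
}

def run(n, max_steps=10_000):
    """Write n in unary dashes, start the head to the right of them, and run."""
    tape = {i: '/' for i in range(n)}          # cells 0..n-1 hold the input
    head, state = n + 2, 0                     # head starts to the right of the input
    for _ in range(max_steps):
        symbol = tape.get(head, '*')           # unwritten cells are blanks
        action, nxt = PROGRAM[(state, symbol)]
        if action == 'STOP':
            return sum(1 for s in tape.values() if s == '/')   # dashes left = output
        if action == 'rep':
            tape[head] = '/' if symbol == '*' else '*'
        elif action == 'L':
            head -= 1
        else:                                   # 'R'
            head += 1
        state = nxt
    return None                                 # never halted within the step limit

print(run(10))   # 1, since 10 = 3*3 + 1
print(run(4))    # 1
print(run(6))    # None: as noted above, the program never halts when 3 divides the input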

To do that, we must refer to Kurt Gödel (1906-1978), born in what was then the Austro-Hungarian Empire, who studied and taught in Vienna until the late 1930s. He then moved to the US, to the Institute for Advanced Study in Princeton, NJ, where he was Einstein's colleague and friend. In the years 1930-33, Gödel revolutionized logic, proving, among other things, that Hilbert's formalism was doomed. But what concerns us at this point is that Gödel invented a method for coding logical or mathematical propositions as numbers (huge numbers, by the way). This means, in particular, that we can code each of the instructions in any program as a number, so we can input these numbers, constituting the whole program, into a Turing machine, which will then proceed to operate according to that program. Such a machine is called a Universal Turing Machine, and this is what computers are. So now we have answered the first question, "What is a computer?" And the answer is: a universal Turing machine. We have also answered the question, "What is an algorithm?" The answer is: anything that can be carried out by a Turing machine. That this is so, that Turing machines define what we mean by "algorithm," or "mechanical procedure," is called "the Church-Turing Thesis" (named also after Alonzo Church, a US logician).
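To make the idea of coding concrete, here is a toy illustration in Python; it is not Gödel's actual scheme (he used products of prime powers), but it shows how a whole program, being a finite string of symbols, can be turned into a single natural number and recovered from it:

# Any program is a finite string of symbols; reading its bytes as digits in
# base 256 turns the whole program into one (huge) natural number, and that
# number can be decoded back into the program.

program_text = "0 * L 0\n0 / L 1\n1 * STOP\n1 / L 2\n2 * STOP\n2 / rep 3"

def encode(text: str) -> int:
    return int.from_bytes(text.encode("utf-8"), "big")

def decode(number: int) -> str:
    length = (number.bit_length() + 7) // 8
    return number.to_bytes(length, "big").decode("utf-8")

code = encode(program_text)
assert decode(code) == program_text
print(code)   # one huge natural number standing for the whole program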

But there's a stronger form of that thesis, "the Strong Church-Turing Thesis," which asserts that ALL our mental operations are of that nature (i.e. capable of being performed by a Turing machine or computer), and this is what one means when one says, "The brain is a computer." This brings us to our second question: is that true? You may say, "This is ridiculous! Is my decision to marry my sweetheart something performed by a Turing machine?" But this objection is not very persuasive, for after all, it might turn out that your decision to marry is a mechanical procedure, and indeed, there are many cognitive scientists nowadays who would argue that it is. To object persuasively to the Strong Church-Turing Thesis, one must meet logicians on their own turf; one must argue on the basis of logical and mathematical propositions which cannot possibly be decided by a Turing machine. There is a host of results, going back to the 1930s, which are of relevance here. Let me mention two: first, Gödel proved (1931) that in any consistent axiomatic system rich enough to include arithmetic, we can find propositions which cannot be proved true or false within the system, that is, by mechanical procedures; and the amazing thing is, inspection of them from outside the system tells us that they are true (or false, as the case may be). This is called Gödel's Incompleteness Theorem (and it is this and similar theorems that doomed Hilbert's formalism). Second, following Gödel's proof, Turing himself proved that the following problem (called "the halting problem") cannot be solved by a Turing machine: "Given a Turing machine, will it ever stop or not?" (in other words, "Will we ever get an output out of this machine?") These two results are intimately related, for if we had a mechanical method for deciding whether or not a Turing machine will ever stop, then it would be possible, given any arithmetical proposition, to prove it true or false.

It's hard to exaggerate the importance of these results; we can say that in our century, for the first time, logic has found and studied the limits of its own resources. I cannot go into the details of these fundamental theorems (it would take a whole graduate course), but I can show you the principle on which they are all based. This principle goes back to Georg Cantor, whom I mentioned in my lecture on the paradoxes of infinity. I also mentioned the principle itself back then, but did not explain it. Now I will. Cantor (1845-1918) showed, as I said, that the real numbers are "more infinite" than the natural numbers. To do that he used a trick called "the diagonal procedure," and this is the principle on which all those more modern results are based. Let us first review the definition of real number. We will confine ourselves to real numbers between 0 and 1. What we mean by a real number between 0 and 1 is an infinite sequence of zeros and ones. You can imagine such a thing as, for example, 0.10011100110..., continuing forever. Of course, we could write real numbers using all 10 digits, but it is simpler in binary notation, with just zeros and ones. We will consider two such real numbers to be different if they differ in at least one position; if, let us say one of them has a 0 in position #47, and the other one has a 1 in position #47, then the two sequences are different (this presents a small technical problem because in binary notation the number 1/2, for example, can be written in two different ways: 0.10000... and 0.01111..., but we will skip it and agree that two sequences are different if they differ in some position). Now Cantor argued as follows: he assumed that the set of all such real numbers could be put into one-to-one correspondence with the set of natural numbers 1,2,3, etc., and from this he got a contradiction: this showed that the set of real numbers cannot be put into one-to-one correspondence with the natural numbers (in other words, the set of real numbers is "more infinite" than the set of natural numbers). Here's how he got the contradiction by means of the "diagonal procedure": suppose that there is a one-to-one correspondence between all real numbers and the numbers 1,2,3, etc. This means that we can arrange all real numbers in a list (an infinite list, of course, but still a list!). We write the first real number (a sequence of zeros and ones), then, below, we write the second real number (another such sequence), then, below, the third one, and so on. So we end up with a list whose beginning is something like this (we omit the 0. at the beginning of each row):

 
000111001101...
010100111000...
100101110001...
100010101100...
...............

You must understand that the dots mean etcetera: each row goes on forever, and the list continues down forever as well. Now that we have this list, let's define a real number (a sequence of zeros and ones) as follows: we go down the main diagonal of this list, and when we see a 0 we write (on a separate sheet of paper) a 1; if we see a 1 we write a 0. So, starting at the upper left corner, since in our example we find a 0, we write a 1 on the separate sheet; then we go to the second row, second position, and, finding a 1 there, we write a 0; in the third row, third position, there's a 0, so we write a 1, and so on forever. This way we get an infinite sequence, a real number, whose beginning looks as follows: 1011... Now let's ask: is this real number anywhere in this list, which was supposed to contain them all? The answer is NO. For if you say that this new sequence is somewhere in the list, it must be in some row, say row number k. But if we take a look at row #k, we observe that (by construction!) the symbol in position #k is different from the symbol in our new sequence (if there was a 0 now there's a 1, and vice versa). So this new sequence (or real number) is nowhere in this list, which contradicts the assertion that this list contained all real numbers. This is Cantor's diagonal procedure, showing that there cannot be a complete list of all real numbers. It is a fairly simple argument, yet it takes some thinking to understand it. Once I tried to explain it to a Professor of Literature at Harvard without success: in spite of all my efforts, he could not fathom it. I hope to do better with you.
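Here is the same construction carried out in Python on the finite corner of the list shown above; of course the real diagonal argument works on the whole infinite list, so this is only an illustration:

# The four rows are just the first digits of the first four numbers in the
# hypothetical list; a genuine list would continue forever in both directions.
rows = [
    "000111001101",
    "010100111000",
    "100101110001",
    "100010101100",
]

# Flip the k-th digit of the k-th row: the resulting number differs from
# row k in position k, so it can appear in no row of the list.
diagonal = "".join("1" if rows[k][k] == "0" else "0" for k in range(len(rows)))
print(diagonal)   # "1011", the beginning of the new real number 0.1011...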

What is the relation between Cantor's diagonal procedure and statements about Turing machines and computers? As an example, let us go back to the "halting problem": is there a Turing machine which will tell us whether or not any Turing machine will ever stop? Turing himself was interested in computing the digits of certain irrational numbers, for example π = 3.141592653... Whenever one can design a Turing machine which will yield, one by one, the digits in such a number, we'll say that the number is "computable." Now, of course we will never finish computing all the digits, since there are infinitely many; the point, however, is that our machine should produce the digit in a given position after a finite number of steps (a number of steps which will depend on the position). Well, it turns out that π is computable in this sense. But there might be other numbers which are not computable, i.e., for which it is not possible to design such a Turing machine. Indeed, we can see, thanks to Cantor, that this is so. For a Turing machine, or a computer, is defined by its program, and this program is a finite sequence of instructions, and each instruction is a finite sequence of words: we can code (as Gödel did) all these words into numbers (into zeros and ones), and so what we get is that any Turing machine or computer program is defined by a FINITE sequence of zeros and ones. How many objects of this kind (finite sequences of zeros and ones) are there? It turns out (and it is not hard to prove) that there are only as many as there are natural numbers, so we can produce an infinite list of such programs. Therefore, since each computable number corresponds to one such program, we could make an infinite list of all computable numbers (in fact, a number may be computable by means of different programs, but this doesn't affect the argument). By the way, this shows that most real numbers are not computable! (Why?) Imagine now we have such an infinite list of computable numbers. Now we carry out Cantor's diagonal procedure, changing zeros into ones and ones into zeros down the main diagonal of this list, and get a number N which is NOT in the list. How can that be? There's a paradox here, actually a contradiction. This is because, since the number N is not in the list, it cannot be computable, but on the other hand it is! How do we know this? As follows: if we want to compute the digit in the kth position, we just use the program that computes the number in the kth row (we can use the universal Turing machine running with that program), and once we get the output for the kth position, we simply change it: if it's a zero we write a one, and vice versa. So N is both computable and not computable: a contradiction (this is called Richard's paradox, after the French mathematician Jules Richard). We may accept contradictions in life, but not in logic, in math, or in computer science; the way logicians explain what's wrong with Richard's argument is that here we are confusing two different languages: the language of which our symbols are part, and the metalanguage (the language we use to speak of the first language). Incidentally, many paradoxes are explained in this way; the most ancient one, going back to the Greeks, is the Cretan paradox: a Cretan (a native of the island of Crete, not a cretin!) says, "I'm lying"; is he lying or telling the truth? In the one and only course I took in logic, back in Buenos Aires when I was 18, the professor was careful to write the language and the metalanguage with chalks of two different colors.
But now comes the interesting thing: if we did have a Turing machine which was capable of deciding which programs will eventually stop (and yield an output) and which will not, there would be no problem with metalanguage! For this super-machine would examine all programs, one by one, throw out those which do not stop, and write down (if instructed to do so) the symbol (0 or 1) in the kth position of the number computed by the kth good, terminating program. And Richard's argument would produce an unbeatable contradiction, which, in turn, must mean that there cannot be such a Turing machine. In other words, the halting problem has no solution: no computer can decide, for an arbitrary program, whether it will ever stop or not, and therefore, if by "mechanical" we mean carried out by a computer or Turing machine, Hilbert's decision problem is answered in the negative. This, by the way, was proved by Turing himself. The irony of the story is that (as far as I know) Turing was a believer in the Strong Church-Turing Thesis: our brains are like computers. But other scientists, like the British physicist and mathematician Roger Penrose, use this and other similar results to argue that our brains are NOT like computers. For the amazing thing is, even though a computer may not be able to come to a decision, in many instances the human being who's watching can!
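The modern way of presenting Turing's argument is as a program that is fed its own text. The following Python sketch is mine, and the function halts() in it is purely hypothetical; that it cannot exist is exactly the point:

def halts(program_source: str, argument: str) -> bool:
    """Hypothetical oracle: True if running program_source on argument ever stops."""
    raise NotImplementedError("no Turing machine can compute this")

# Suppose halts() existed. Then we could write the following program, which
# asks what it itself would do, and then does the opposite:
CONTRARY_SOURCE = '''
def contrary(source):
    if halts(source, source):
        while True:        # if told "it halts", loop forever
            pass
    return "done"          # if told "it loops", halt at once
'''

# Running contrary on its own source code yields a contradiction:
#  - if halts(CONTRARY_SOURCE, CONTRARY_SOURCE) is True, contrary loops forever;
#  - if it is False, contrary halts at once.
# Either way halts() gave the wrong answer, so no such machine can exist.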

This kind of abstract thought--logical thought--may be difficult, especially if you've never seen it before; yet it is in this realm that the question of whether we are computers (Turing machines) or not is being debated. The 20th century has seen greater advances in logic than any time since Aristotle, and these advances are related to basic questions about our human identity.


Suggested Readings:

  • Roger Penrose, The Emperor's New Mind, Oxford, 1989, chapters 1 and 2. This is a popular exposition, but not always easy to follow, by one of today's leading scientists.
  • Mark Kac and Stanislaw M. Ulam, Mathematics and Logic, Dover, 1992. This short book contains in chapter 2 some of the topics discussed here.

