Matter, Heat and Light, a lecture by Ricardo Nirenberg. U. at Albany Proj. Renaissance, Spring 1997.

Atoms and Molecules:

In the first two lectures of the semester I talked about Calculus, Newton's laws and paradoxes of infinity. In this lecture I'll try to give you some idea about how modern science works, in particular, how it explains the observable properties of matter, heat and light. Modern science is a complex method involving experimentation, logic and mathematics: the disciplines which analyze this method are Philosophy of Science and History of Science. To start with, and as an important illustration, we will concentrate on the following proposition: all matter is an aggregate of very small particles, called atoms, which move. As Prof. Isser and I have mentioned before, this is a very old proposition, going back to some ancient Greek thinkers, Leucippus, Democritus and Epicurus. It was called "the atomic hypothesis" until fairly recently, and this name indicates an attitude: the "practical" person could (and did) say, "It's nothing but a wild hypothesis, and those guys who like to think about abstract stuff may talk about it and speculate all they want, but have you ever seen or touched such a thing as an atom? All I see and touch is big chunks of matter, this table, this body, etc." The atomic hypothesis, therefore, had the same status as other hypotheses such as saying that the celestial bodies had to travel in circles because the circle is the most perfect curve.

How did the atomic hypothesis become a universally accepted scientific fact? The fathers of modern science, Francis Bacon, Descartes, Newton, all believed in it, yet it remained pure speculation until Daniel Bernoulli (Swiss, 1700-1782) for the first time deduced from this hypothesis a fact which could be experimentally checked: he showed mathematically that if a gas is composed of atoms moving at high speeds, whose size is small compared to their separation, then the pressure of a confined sample of the gas will be inversely proportional to its volume, something which had been shown experimentally by the English scientist Robert Boyle in the 17th century. This was the beginning of what's called the Kinetic Theory of Gases. But the decisive facts came from chemistry, a science which was developing rapidly by the end of the 18th century and the beginning of the 19th.

These decisive facts can be summarized as follows: when a substance, water for example, is decomposed into its component elements, these always appear in the same proportion by weight: water decomposes into 8 parts (by weight) of oxygen for each part of hydrogen, and if you perform the inverse operation, making oxygen and hydrogen combine into water, the same proportions obtain: any excess of either gas will just stay there and not combine. Furthermore, if two elements, for example carbon and oxygen, can combine in more than one way (we may get either CO, carbon monoxide, or CO2, carbon dioxide), then for a given weight of carbon, the weight of oxygen in CO2 is exactly twice as much as in CO. All these facts, verified for many other chemical reactions, led John Dalton in 1808 to propose his atomic theory as the simplest way of accounting for them. He stated that all matter consists of indivisible atoms; all atoms of a given element are identical in all respects; different elements have different kinds of atoms, and these are indestructible; chemical reactions are just recombinations of atoms, and the compound "atoms" (molecules, really) of each compound substance contain a definite and constant number of atoms of each element. One molecule of water consists of two atoms of hydrogen and one of oxygen (Dalton himself, going by a rule of greatest simplicity, actually guessed one atom of each; the correct formula came later).

The next decisive and surprising fact was discovered by the French chemist Gay-Lussac, also in 1808: if we combine a certain volume, say one liter, of oxygen with two liters of hydrogen, we get, as we expected, water vapor, but no oxygen or hydrogen is left over! The surprising thing is that we are looking at volumes here, not weights. The same thing happens with corresponding volumes of other gases, for instance nitrogen and hydrogen (they combine into ammonia). How can we explain that? The only plausible explanation is this: a given volume of ANY gas (well, of almost any gas), at a given temperature and pressure, contains the same number of atoms, and that's why there is nothing left: there are so many atoms in a liter of oxygen, and twice as many in two liters of hydrogen. The annoying thing, however, is that the result of combining one liter of oxygen with two liters of hydrogen is two (not three) liters of water vapor. This puzzle was solved by the Italian Amedeo Avogadro (1776-1856). He assumed that oxygen and hydrogen don't come in single atoms but rather in molecules, each molecule of these gases consisting of two atoms. Thus the reaction can be explained as follows: two molecules of hydrogen, each consisting of two atoms, combine with one molecule of oxygen, similarly consisting of two atoms, to give two molecules of water. Algebraically, 2H2 + O2 -> 2H2O. So, said Avogadro, the right statement is: equal volumes of any gas, at a given temperature and pressure, contain the same number of molecules (not of atoms), and that's why two liters of hydrogen combine with one liter of oxygen to give two liters of water vapor. It checks!
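
To see Avogadro's bookkeeping in action, here is a minimal sketch in Python. The number of molecules per liter is an arbitrary illustrative value; any fixed number makes the same point:

    # A toy check of Avogadro's explanation of Gay-Lussac's volume ratios.
    # Assumption (illustrative): equal volumes of any gas hold equal numbers
    # of molecules; here we let one liter hold n molecules.

    n = 1_000_000  # molecules per liter (any fixed value works for the argument)

    h2_molecules = 2 * n   # two liters of hydrogen; each molecule is H-H
    o2_molecules = 1 * n   # one liter of oxygen; each molecule is O-O

    # Reaction: 2 H2 + O2 -> 2 H2O. Each O2 molecule pairs with two H2 molecules.
    reactions = min(h2_molecules // 2, o2_molecules)  # limiting reagent
    h2o_molecules = 2 * reactions

    print(h2_molecules - 2 * reactions)   # leftover hydrogen: 0
    print(o2_molecules - reactions)       # leftover oxygen: 0
    print(h2o_molecules / n)              # liters of water vapor: 2.0

Whatever value we pick for n, nothing is left over and exactly two liters' worth of water vapor come out, just as in the experiment.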

By around 1860 what used to be called the atomic hypothesis, as modified by Avogadro, became an established fact. Why? Because there were just too many experiments confirming it. It has been found experimentally that 2 grams of hydrogen, or 32 grams of oxygen, or 28 grams of nitrogen, etc. (that is, a number of grams equal to the molecular weight of the gas) occupy the same volume no matter which gas, and that the number of molecules in that volume is about 6.02252 x 10^23. This huge number (roughly a trillion trillion) is called Avogadro's constant, and the remarkable thing is, it has been measured again and again using very different procedures, and we always get the same number to a very close approximation.
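
As a quick illustration, a few lines of Python compute how many molecules sit in one liter of any (nearly ideal) gas, using the standard molar volume of 22.414 liters at 0 degrees C and 1 atmosphere:

    # How many molecules in one liter of (nearly ideal) gas at 0 C and 1 atm?
    # One mole -- 2 g of H2, 32 g of O2, 28 g of N2 -- occupies about 22.414 L.

    N_A = 6.02252e23       # Avogadro's constant (value as quoted in the lecture)
    molar_volume_L = 22.414

    molecules_per_liter = N_A / molar_volume_L
    print(f"{molecules_per_liter:.3e}")   # about 2.687e+22, whatever the gas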

All these chemical facts involve a lot of experimentation, and a lot of logical ingenuity, but not much math except for counting. Let us now consider an airtight cylinder with a movable piston at one end, and inside the cylinder some gas. If what we have is 32 grams of oxygen, we have seen that there are about a trillion trillion molecules moving in there, at high speeds and in all directions. Some of them hit the movable piston, which moves only in one direction, say along the x-axis. We will also assume that on the other side of the piston (toward the positive x-axis) there is a vacuum, so that no molecules are hitting it from the outside. Suppose the area of the piston is A. The question is: what's the total force exerted by the gas molecules on the piston? I will not carry out the math, for which I refer you to chapter 39 of Feynman's book. I will only say that as a consequence ONLY of Newton's laws of motion which we saw in the first lecture (especially the law that says that force is equal to mass times acceleration), the net force on the piston is F = nAmvx^2, where n is the number of molecules per unit volume, A is the area of the piston, m is the mass of each molecule, and vx is the velocity of a molecule in the x direction. (Each molecule that bounces off the piston delivers momentum 2mvx, but at any moment only half the molecules are moving toward the piston; the factor 2 and the factor 1/2 cancel.)

There are two problems here. First of all, we must assume that the hits of the molecules against the piston are perfectly elastic, which means that the piston doesn't absorb any energy (doesn't heat up), and that the molecules don't absorb energy that goes into jiggling their component atoms: no energy is wasted. This can be arranged by using a gas such as helium, whose molecules have only one atom, and a piston of the right material; we will not dwell on this. Secondly, not all molecules have the same velocity: this is solved by taking the average over all molecules. Then, taking into account that the pressure on the piston is by definition the force divided by the area A, we get P = nm[vx^2], where the brackets indicate we have taken the average. Using a simple symmetry trick (no direction is preferred, so [vx^2] = [vy^2] = [vz^2] = [v^2]/3), we deduce that PV = (2/3)N[mv^2/2]. Here P is the pressure, V is the volume of gas, N is the total number of molecules (or atoms) in that gas, m is the mass of each molecule, and v is the speed of a molecule. Again, the brackets indicate average. Notice that, again according to Newton, what we have inside the brackets is the kinetic energy of a molecule, so that N multiplied by that average gives us the total energy of the gas, which we call U. Summarizing, we get PV = (2/3)U.

Now, to make a long story short, the temperature of a gas turns out to be proportional to the average kinetic energy of its molecules, so that calling the temperature T, we get the so-called law of ideal gases: PV = kNT. Here k is a universal constant (Boltzmann's constant), and P, V and N are as before. What do we deduce from here? That a given volume of ANY gas (as long as its molecules are well-behaved), at a given temperature and pressure, always has the same N, the same number of molecules. We have deduced Avogadro's law from Newton's laws of motion! And, of course, this experiment with the piston provides a way of finding Avogadro's number.
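
Here is a short numerical check of these formulas in Python. It assumes, beyond what we derived, that at equilibrium the x-velocities follow a Gaussian ("Maxwell") distribution with variance kT/m; the point is that the piston formula P = nm[vx^2] then reproduces the pressure given by PV = kNT:

    import random, statistics

    # Numerical check of P = n*m*[vx^2] against the ideal-gas law P = n*k*T.
    # Assumption (not derived in the lecture): at equilibrium the x-velocities
    # follow a Gaussian ("Maxwell") distribution with variance k*T/m.

    k = 1.380649e-23    # Boltzmann's constant, J/K
    T = 273.15          # temperature, K
    m = 6.6464731e-27   # mass of one helium atom, kg
    n = 2.687e25        # molecules per cubic meter at 0 C and 1 atm

    random.seed(1)
    sigma = (k * T / m) ** 0.5
    vx_squared = [random.gauss(0.0, sigma) ** 2 for _ in range(100_000)]

    P_kinetic = n * m * statistics.fmean(vx_squared)  # piston formula
    P_ideal = n * k * T                               # from PV = kNT
    print(f"{P_kinetic:.4e} Pa vs {P_ideal:.4e} Pa")

Both numbers come out close to 1.013 x 10^5 pascals, that is, one atmosphere.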

What we have been doing here is an example of the kinetic theory of gases, in which, given the enormous number of particles involved, probability and statistics play a major role. Of these two subjects, probability and statistics, I'll talk in the next two lectures; what I've been trying to show you is an illustration of how science works: using math we can connect very different facts, such as observations of chemical reactions on the one hand and, on the other, molecules hitting a surface and obeying the same laws we apply to billiard balls, and we get results that fit with each other and with new observations to a remarkable degree. And more remarkable yet, Avogadro's number has been computed using still other phenomena: Einstein did it by looking at Brownian motion. These marvelous fits are what make modern science the most certain and persuasive system of thought in the history of mankind. No system of thought, religious or metaphysical, has ever conquered the whole world, regardless of differences in culture, as completely as modern science has. No wonder that, as Prof. Isser has mentioned, social thinkers as diverse as Adam Smith and Karl Marx have claimed scientific status for their doctrines.

But there's a problem with modern science: no one, not even scientists, can keep up with the vast output of new scientific knowledge. And it has become terribly specialized. This, plus a general decay in the quality of science education, has caused in the last few years a very curious phenomenon. A large number of scholars in the humanities and social sciences have concluded that science does not have a higher level of certainty than other symbolic systems; they claim that scientific truth is "culturally determined," like religion, law, table manners, or criminal procedure. They are relativists in the sense that for them all truth, including scientific truth, is never absolute, never universal, but only valid for a given community. Not surprisingly, most if not all of them know very little about modern science. As homework, you should read the material on the recent Sokal controversy.

Thermodynamics

Historically, the study of the relations between heat and mechanical work started before these computations involving molecules hitting each other and hitting walls (before the kinetic theory of matter). Sadi Carnot (1796-1832), a French engineer, did the pioneering work in thermodynamics, the study of heat and work that does not go into the atomic or molecular structure of matter. The steam engine, the device at the base of the Industrial Revolution (as in our day the computer is at the base of the Post-Industrial Age), was the motivation for this study. Since the laws of thermodynamics are very important both from the physical and from the cultural and historical points of view, I'll say a few words about the first two (there are three).

The first law is called "the conservation of energy." An easy illustration is a pendulum. Suppose we hold the pendulum at a height h: its potential energy, due to the gravity of the earth at that point, is defined as mgh, the mass of the bob times the acceleration of gravity times the height. Then we let go, and the pendulum falls: as it moves, the original potential energy is changed into kinetic energy, (1/2)mv^2, one half of the mass of the bob times the square of its velocity. When the bob reaches the lowest point, the kinetic energy is at a maximum and the potential energy at a minimum; as it goes up again on the other side, the kinetic energy is transformed back into potential energy, and when it again reaches height h, all of it is again potential energy. The sum of the potential and the kinetic energy is constant throughout the whole process. But in the real world, things are not so simple. We all know that the pendulum slows down until eventually, unless an exterior force is applied to it (as in grandfather clocks), it will stop. This is because of friction against the molecules of air and at the suspension point, even if we use the best ball bearings. But friction always means heat: the molecules of the air and of the bearings are being heated. If we take heat into account, the law of conservation of energy says that the potential energy plus the kinetic energy plus the heat energy is constant! This is true provided we are not adding or removing energy from the outside, that is, provided the pendulum, the air and the bearings are all inside a perfectly insulated box, forming what's called "a closed system."
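
We can verify this bookkeeping with a few lines of Python (the mass and release height are illustrative): at any height y of the swing the speed is v = sqrt(2g(h - y)), and the sum mgy + (1/2)mv^2 never changes:

    import math

    # Energy bookkeeping for the ideal (frictionless) pendulum: at any height y
    # between 0 and h, the speed is v = sqrt(2*g*(h - y)), and the total energy
    # m*g*y + (1/2)*m*v**2 always comes out to m*g*h.

    m = 0.5    # mass of the bob, kg (illustrative)
    g = 9.81   # acceleration of gravity, m/s^2
    h = 0.2    # release height, m (illustrative)

    for y in [0.2, 0.15, 0.1, 0.05, 0.0]:
        v = math.sqrt(2 * g * (h - y))
        total = m * g * y + 0.5 * m * v ** 2
        print(f"y = {y:.2f} m   v = {v:.3f} m/s   total = {total:.4f} J")
    # Every line prints the same total: 0.9810 J = m*g*h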

The second law of thermodynamics can be roughly expressed as follows: whenever two bodies are in contact, heat will flow from the hotter to the colder body, from higher to lower temperature, and we will not be able to make it flow the other way unless we supply energy from the outside. Another way of stating this is to say that a quantity called "entropy" always increases in a closed system. I don't want to define entropy mathematically here, but it can be viewed as a measure of the disorder in the molecules: at the beginning the more energetic molecules were in the first body and the less energetic in the second body, so they were organized in a way; but if we let time go by, the molecules at all levels of energy tend to get mixed up, and the temperatures of the two bodies become equal. An interesting parallel notion of entropy plays a major role in Information Theory. These two laws of thermodynamics are extremely useful in chemistry and in engineering, and they have been used and abused by many thinkers who have drawn conclusions about economics, society, and the destiny of the universe. A commonplace of the last century and a half is that our universe is doomed to a cold death (the "heat death") because of the second law of thermodynamics.
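
A toy simulation can make this mixing visible. In the sketch below (Python; the energies and molecule counts are arbitrary, and this is a caricature of real collisions, not molecular dynamics), molecules of a "hot" body and a "cold" body collide at random, each collision splitting the combined energy of the pair at random; the average energies drift together and never spontaneously separate:

    import random, statistics

    # A toy model of the second law: a "hot" body and a "cold" body exchange
    # energy through random pairwise "collisions." Each collision takes one
    # molecule from each body and splits their combined energy at random.

    random.seed(1)
    hot = [10.0] * 500    # molecular energies in the hot body (arbitrary units)
    cold = [2.0] * 500    # molecular energies in the cold body

    for step in range(200_001):
        if step % 50_000 == 0:
            print(f"hot avg {statistics.fmean(hot):.2f}   "
                  f"cold avg {statistics.fmean(cold):.2f}")
        i, j = random.randrange(len(hot)), random.randrange(len(cold))
        total = hot[i] + cold[j]
        split = random.random()
        hot[i], cold[j] = split * total, (1 - split) * total
    # The averages converge toward the common value 6.0; total energy is conserved.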

Electricity and Light

Let's now leave the subject of heat, atoms and molecules, and say something about the phenomena of light. For a long time scientists had believed that light consisted of very small particles, but some observations seemed to contradict this view. For one thing, it was noticed that when two rays of light cross each other, there seems to be no collision of particles, no obstacle, but rather a reinforcement of the intensity. On the other hand, if we divide a ray of light by means of a glass that lets part of the ray go through but reflects the rest of it (a semi-reflecting mirror), and if we carefully choose the distances traveled by the two resulting rays, when they meet again they annihilate each other: this is called (destructive) interference. This, plus other observed phenomena, led physicists to the wave theory of light. The first scientists to propose such a theory were the Englishman Robert Hooke (1635-1703) and the Dutchman Christiaan Huygens (1629-1695).
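
A small numerical sketch of interference (Python; the disturbances are modeled as sine waves, and the path lengths are illustrative): when the two halves of the split ray travel paths differing by half a wavelength, they cancel at every moment; differing by a whole wavelength, they reinforce:

    import math

    # Two halves of a split ray, modeled as sinusoidal disturbances.
    # A path difference of half a wavelength gives cancellation;
    # a whole wavelength gives reinforcement.

    wavelength = 1.0  # work in units of one wavelength

    def disturbance(path_length, t):
        # t is measured in periods of the wave
        return math.sin(2 * math.pi * (path_length / wavelength - t))

    for extra, label in [(0.5, "half a wavelength"), (1.0, "one wavelength")]:
        peak = max(abs(disturbance(10.2, t) + disturbance(10.2 + extra, t))
                   for t in [i / 20 for i in range(20)])
        print(f"extra path = {label}: peak combined amplitude = {peak:.3f}")
    # half a wavelength -> 0.000 (annihilation); one wavelength -> 2.000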

The wave theory says that light consists of periodic disturbances or changes. Let's first define what we mean by periodic. It means two things: first, if we look at a given point P in space at a given time t and observe a disturbance D, then, staying at the same point and letting a certain time T (the period) go by, we will observe the same disturbance. Secondly, if we move from P a certain distance (the wavelength) in a certain direction, at the same time t we will also observe the same disturbance. Thus, periodic means that the disturbance is repeated regularly both in time and in space. We are familiar with this kind of phenomenon, for we have all seen the ripples that occur in water when we disturb it by throwing an object in it. And sound is another example: periodic disturbances in the pressure of air. But in these two examples we have something, some substance, that's being disturbed: water and air. Now, what is disturbed when we look at light? Certainly not water or air, for the light from the stars travels through vast regions containing neither. To find out what is disturbed in the case of light, we must talk about electricity and its forces.
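
Here is a minimal sketch of such a periodic disturbance, using the standard traveling-wave form D(x, t) = sin(2*pi*(x/L - t/T)) as an assumed model, where L is the spatial period (wavelength) and T the temporal one:

    import math

    # A disturbance that is periodic in the sense just described: it repeats
    # when we wait one period T at a fixed point, or step one wavelength L
    # at a fixed time.

    L, T = 2.0, 0.5   # wavelength and period, illustrative units

    def D(x, t):
        return math.sin(2 * math.pi * (x / L - t / T))

    x0, t0 = 1.3, 0.7  # an arbitrary point and moment
    print(abs(D(x0, t0) - D(x0, t0 + T)) < 1e-12)   # True: same after one period
    print(abs(D(x0, t0) - D(x0 + L, t0)) < 1e-12)   # True: same one wavelength away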

When we studied gravitation we saw that two point masses attract each other, without contact and through empty space, with a force proportional to each of the two masses and inversely proportional to the square of the distance in between. Electric charges behave similarly. But there are two important differences. One, electric charges, unlike Newtonian mass, come in two types, positive and negative: two electric charges of the same sign repel each other, and they attract each other when they have opposite signs. And second, although the formula for the electrical force has the same form as the formula for the gravitational force, F = Cq1q2/r^2, where C is a constant, the q's are the electric charges and r is the distance in between, the electric force is enormously stronger than the gravitational one. How much stronger? For two electrons, the electric repulsion is about 4.17 x 10^42 times their gravitational attraction: a number with 43 digits!
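
The ratio is easy to compute from the modern (CODATA) values of the constants. This sketch compares the electric repulsion and the gravitational attraction between two electrons; the distance r^2 cancels, so the ratio is the same at any separation:

    # Ratio of the electric repulsion to the gravitational attraction
    # between two electrons, using CODATA values for the constants.

    C = 8.9875517923e9      # Coulomb constant, N*m^2/C^2
    e = 1.602176634e-19     # electron charge, C
    G = 6.67430e-11         # gravitational constant, N*m^2/kg^2
    m_e = 9.1093837015e-31  # electron mass, kg

    ratio = (C * e**2) / (G * m_e**2)   # r^2 cancels in the quotient
    print(f"{ratio:.3e}")               # about 4.166e+42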

If an electric charge, say an electron, is located at point P, we can imagine a force pointing towards P at each point Q in space: the intensity of that force will depend on the distance between P and Q, and it will be inversely proportional to the square of that distance; this is exactly what happens if we have a mass at P without any electric charge, except that in the latter case the force will be much smaller. We can picture the force at Q as an arrow pointing toward P (or away from P), its length representing the intensity of the force: in math we call these arrows "vectors." So what we have is a "vector field," a vector at each point Q of space (except for P itself).

Now comes an interesting fact. Suppose the electron at P starts moving. As a result, the vector field of forces will be changing: both the direction and the length of those arrows will vary. But not only that: a new vector field will appear, new forces, which are called magnetic forces. This is a fact of nature: every time a charge moves, we have not only electric but also magnetic forces. And vice versa: when a magnetic field changes, we automatically have an electric field as well. On those two principles electric motors and generators are based, as well as telephones, loudspeakers and countless other gadgets. Furthermore, the magnetic forces are perpendicular to the electric ones. And if we move the electron back and forth, we will have two vector fields, one electric and the other magnetic, which are changing, moving in space periodically (in the sense explained above). We call this "electromagnetic radiation." A great British physicist, James Clerk Maxwell (1831-1879), discovered the equations governing the changes of electromagnetic fields, and it so happens that light is precisely that: an electromagnetic field in motion. As I've said before, no substance such as air or water is involved here; so what is it that's disturbed? Just fields of forces, arrows in space! Again, a "practical person" might doubt that math means much in "the real world"; yet here we have light explained as varying fields of forces, a mathematical concept.

And as you may have heard, it was Maxwell's equations that allowed Hertz and others to produce still other electromagnetic fields: radio waves, and later TV waves, X-rays, gamma rays, etc. The only difference between light and radio waves is their frequency, the number of oscillations per second, in other words how many times per second the fields return to the same value. The frequency of a radio or TV broadcast is between 10^5 and 10^8 oscillations per second. The frequency of light is between 10^14 and 10^15, depending on the color. Infrared has a slightly lower frequency than visible light, and X-rays have a higher one. One way of characterizing the technological revolution of the 20th century is to say that it has widened the spectrum of electromagnetic radiation we can control: it used to be just visible light, and now it is much wider.
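
Since the wavelength of any of these waves is the speed of light divided by the frequency, a few lines of Python translate the frequencies above into wavelengths (the sample frequencies are illustrative values within the ranges mentioned):

    # Wavelength = speed of light / frequency, for a few of the
    # electromagnetic waves mentioned above.

    c = 2.99792458e8  # speed of light, m/s

    for name, freq_hz in [("AM radio", 1.0e6),
                          ("FM radio/TV", 1.0e8),
                          ("red light", 4.3e14),
                          ("violet light", 7.5e14)]:
        print(f"{name:>12}: {c / freq_hz:.3e} m")
    # AM radio ~300 m, FM/TV ~3 m, visible light a few times 10^-7 m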

Bibliography:

Richard Feynman, The Feynman Lectures on Physics, vol. I: chapters 1, 2, 3, 26, 28, 29, 39, 44.

Obviously you are not expected to read all those chapters in full (it's too much), but look through them to better understand the notions presented in this lecture.

