
TEN
 
ANTIMATTER MATTERS
 

Particle physics gets my vote as the subject with the most comical jargon in the physical sciences. Where else could a neutral vector boson be exchanged between a negative muon and a muon neutrino? Or how about the gluon that gets exchanged between a strange quark and a charmed quark? Alongside these seemingly countless particles with peculiar names is a parallel universe of antiparticles that are collectively known as antimatter. In spite of its continued appearance in science fiction stories, antimatter is decidedly nonfiction. And yes, it does tend to annihilate on contact with ordinary matter.

The universe reveals a peculiar romance between antiparticles and particles. They can be born together out of pure energy, and they can die together (annihilate) as their combined mass gets reconverted back to energy. In 1932, the American physicist Carl David Anderson discovered the antielectron, the positively charged antimatter counterpart to the negatively charged electron. Since then, antiparticles of all varieties have been routinely made in the world’s particle accelerators, but only recently have antiparticles been assembled into whole atoms. An international group led by Walter Oelert of the Institute for Nuclear Physics Research in Jülich, Germany, has created atoms where an antielectron was happily bound to an antiproton. Meet antihydrogen. These first anti-atoms were created in the particle accelerator of the European Organization for Nuclear Research (better known by its French acronym CERN) in Geneva, Switzerland, where many modern contributions to particle physics have occurred.

The method is simple: create a bunch of antielectrons and a bunch of antiprotons, bring them together at a suitable temperature and density, and hope that they combine to make atoms. In the first round of experiments, Oelert’s team produced nine atoms of antihydrogen. But in a world dominated by ordinary matter, life as an antimatter atom can be precarious. The antihydrogen survived for less than 40 nanoseconds (40 billionths of a second) before annihilating with ordinary atoms.

The discovery of the antielectron was one of the great triumphs of theoretical physics, for its existence had been predicted just a few years earlier by the British-born physicist Paul A. M. Dirac. In his equation for the energy of an electron, Dirac noticed two sets of solutions: one positive and one negative. The positive solution accounted for the observed properties of the ordinary electron, but the negative solution initially defied interpretation—it had no obvious correspondence to the real world.

Equations with double solutions are not unusual. One of the simplest examples is the answer to the question, “What number times itself equals nine?” Is it 3 or −3? Of course, the answer is both, because 3 × 3 = 9 and −3 × −3 = 9. Equations carry no guarantee that their solutions correspond to events in the real world, but if a mathematical model of a physical phenomenon is correct, then manipulating its equations can be as useful as (and much easier than) manipulating the entire universe. As in the case of Dirac and antimatter, such steps often lead to verifiable predictions, and if the predictions cannot be verified, then the theory must be discarded. Regardless of the physical outcome, a mathematical model ensures that the conclusions you might draw are logical and internally consistent.

 

 

QUANTUM THEORY, also known as quantum physics, was developed in the 1920s and is the subfield of physics that describes matter on the scale of atomic and subatomic particles. Using the newly established quantum rules, Dirac postulated that occasionally a phantom electron from the “other side” might pop into this world as an ordinary electron, thus leaving behind a hole in the sea of negative energies. The hole, Dirac suggested, would experimentally reveal itself as a positively charged antielectron, or what has come to be known as a positron.

Subatomic particles have many measurable features. If a particular property can have an opposite value, then the antiparticle version will have the opposite value but will otherwise be identical. The most obvious example is electric charge: the positron resembles the electron except that the positron has a positive charge while the electron has a negative one. Similarly, the antiproton is the oppositely charged antiparticle of the proton.

Believe it or not, the chargeless neutron also has an antiparticle. It’s called—you guessed it—the antineutron. The antineutron is endowed with an opposite zero charge to the ordinary neutron. This arithmetic magic derives from the particular triplet of fractionally charged particles (quarks) that compose neutrons. The quarks that compose the neutron have charges −1/3, −1/3, +2/3, while those in the antineutron have +1/3, +1/3, −2/3. Each set of three adds to zero net charge yet, as you can see, the corresponding components have opposite charges.
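As a quick check of that charge bookkeeping, here is a minimal Python sketch; the quark content and charges are the standard ones cited above, and the variable names are just illustrative:

```python
from fractions import Fraction

# Neutron: two down quarks (-1/3 each) and one up quark (+2/3)
neutron = [Fraction(-1, 3), Fraction(-1, 3), Fraction(2, 3)]

# Antineutron: each antiquark carries the opposite charge
antineutron = [-q for q in neutron]

print(sum(neutron), sum(antineutron))  # both print 0
```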

Antimatter can seem to pop into existence out of thin air. If a pair of gamma rays have sufficiently high energy, they can interact and spontaneously transform themselves into an electron-positron pair, thus converting a lot of energy into a little bit of matter as described by the famous 1905 equation of Albert Einstein:

E = mc²

which, in plain English reads

Energy = (mass) × (speed of light)²

which, in even plainer English reads

Energy = (mass) × (a very big number)
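To put a rough number on that conversion, here is a minimal Python sketch applying E = mc² to the electron-positron pair just described; the constants are rounded textbook values, not figures from this chapter:

```python
# Minimum gamma-ray energy needed to create an electron-positron pair
m_electron = 9.109e-31   # kg, rest mass of the electron (and of the positron)
c = 2.998e8              # m/s, speed of light in a vacuum

energy_joules = 2 * m_electron * c**2    # E = mc^2, counting both particles
energy_mev = energy_joules / 1.602e-13   # 1 MeV is about 1.602e-13 joules

print(f"{energy_joules:.2e} J, or about {energy_mev:.2f} MeV")
# Roughly 1.6e-13 J (about 1.02 MeV) of gamma-ray energy buys you one pair
```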

 

In the language of Dirac’s original interpretation, the gamma ray kicked an electron out of the domain of negative energies to create an ordinary electron and an electron hole. The reverse is also possible. If a particle and an antiparticle collide, they will annihilate by refilling the hole and emitting gamma rays. Gamma rays are the sort of radiation you should avoid. Want proof? Just remember how the comic strip character “The Hulk” became big, green, and ugly.

If you somehow managed to manufacture a blob of antiparticles at home, you would immediately have a storage problem, because your antiparticles would annihilate with any conventional sack or grocery bag (either paper or plastic) in which you chose to carry them. A cleverer solution traps the charged antiparticles within the confines of a strong magnetic field, where they are repelled by the magnetic walls. With the magnetic field embedded in a vacuum, the antiparticles are also rendered safe from annihilation with ordinary matter. This magnetic equivalent of a bottle is also the bag of choice when handling other container-hostile materials such as the 100-million-degree glowing gases of (controlled) nuclear fusion experiments. The real storage problem arises after you have created whole (and therefore electrically neutral) anti-atoms, because they do not normally rebound from a magnetic wall. It would be best to keep your positrons and antiprotons separate until absolutely necessary.

 

 

IT TAKES AT least as much energy to generate antimatter as you recover when it annihilates to become energy again. Unless you had a full tank of fuel in advance, self-generating antimatter engines would slowly suck energy from your starship. I don’t know whether they knew about this on the original Star Trek television and film series but I seem to remember that Captain Kirk was always asking for “more power” from the matter-antimatter drives and Scotty was always saying that “the engines can’t take it.”

While there is no reason to expect a difference, the properties of antihydrogen have not yet been shown to be identical to the corresponding properties of ordinary hydrogen. Two obvious things to check are the detailed behavior of the positron in the bound company of an antiproton—does it obey all the laws of quantum theory? And the strength of an anti-atom’s force of gravity—does it exhibit antigravity instead of ordinary gravity? On the atomic scales, the force of gravity between particles is unmeasurably small. Actions are instead dominated by atomic and nuclear forces, both of which are much, much stronger than gravity. What you need are enough anti-atoms to make ordinary-sized objects so that their bulk properties can be measured and compared with ordinary matter. If a set of billiard balls (and, of course, the billiard table and the cue sticks) were made of antimatter, would a game of antipool be indistinguishable from a game of pool? Would an anti-eightball fall to Earth at exactly the same rate as an ordinary eightball? Would antiplanets orbit an antistar in exactly the same way that ordinary planets orbit ordinary stars?

I am philosophically convinced that the bulk properties of antimatter will prove to be identical to those of ordinary matter—normal gravity, normal collisions, normal light, normal pool sharking, etc. Unfortunately, this means that an antigalaxy on a collision course with the Milky Way would be indistinguishable from an ordinary galaxy until it was too late to do anything about it. But this fearsome fate cannot be common in the universe because, for example, if a single antistar annihilated with a single ordinary star, then the conversion of matter to gamma-ray energy would be swift and total. Two stars with masses similar to that of the Sun (each with about 10⁵⁷ particles) would become so luminous that the colliding system would temporarily outproduce all the energy of all the stars of a hundred million galaxies. There is no compelling evidence that such an event has ever occurred. So, as best as we can judge, the universe is dominated by ordinary matter. In other words, being annihilated need not be one of your safety concerns on that next intergalactic voyage.
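For a sense of scale, here is a back-of-the-envelope Python sketch of the star-antistar collision mentioned above, assuming two Sun-like stars and standard rounded values for the Sun's mass, luminosity, and roughly ten-billion-year lifetime (none of these numbers appear in the chapter itself):

```python
# Energy released if two solar-mass stars annihilated completely
m_sun = 1.989e30    # kg, mass of the Sun
c = 2.998e8         # m/s, speed of light
l_sun = 3.828e26    # watts, luminosity of the Sun

annihilation_energy = 2 * m_sun * c**2            # E = mc^2 for both stars
sun_lifetime_output = l_sun * 1.0e10 * 3.156e7    # ~10 billion years of sunshine, in joules

print(f"{annihilation_energy:.1e} J released")
print(f"about {annihilation_energy / sun_lifetime_output:.0f} entire solar lifetimes of output, all at once")
```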

Still, the universe remains disturbingly imbalanced: when created, every antiparticle is always accompanied by its particle counterpart, yet ordinary particles seem to be perfectly happy without their antiparticles. Are there hidden pockets of antimatter in the universe that account for the imbalance? Was a law of physics violated (or an unknown law of physics at work) during the early universe that forever tipped the balance in favor of matter over antimatter? We may never know the answers to these questions, but in the meantime, if an alien lands on your front lawn and extends an appendage as a gesture of greeting, before you get friendly, toss it an eightball. If the appendage explodes, then the alien was probably made of antimatter. If not, then you can proceed to take it to your leader.

SECTION 3
 
WAYS AND MEANS OF NATURE
 

HOW NATURE PRESENTS HERSELF TO THE INQUIRING MIND

ELEVEN
 
THE IMPORTANCE OF BEING CONSTANT
 

Mention the word “constant,” and your listeners may think of matrimonial fidelity or financial stability—or maybe they’ll declare that change is the only constant in life. As it happens, the universe has its own constants, in the form of unvarying quantities that endlessly reappear in nature and in mathematics, and whose exact numerical values are of signal importance to the pursuit of science. Some of these constants are physical, grounded in actual measurements. Others, though they illuminate the workings of the universe, are purely numerical, arising from within mathematics itself.

Some constants are local and limited, applicable in just one context, one object, or one subgroup. Others are fundamental and universal, relevant to space, time, matter, and energy everywhere, thereby granting investigators the power to understand and predict the past, present, and future of the universe. Scientists know of only a few fundamental constants. The top three on most people’s lists are the speed of light in a vacuum, Newton’s gravitational constant, and Planck’s constant, the foundation of quantum physics and the key to Heisenberg’s infamous uncertainty principle. Other universal constants include the charge and mass of each of the fundamental subatomic particles.

Whenever a repeating pattern of cause and effect shows up in the universe, there’s probably a constant at work. But to measure cause and effect, you must sift through what is and is not variable, and you must ensure that a simple correlation, however tempting it may be, is not mistaken for a cause. In the 1990s the stork population of Germany increased and the German at-home birth rate rose as well. Shall we credit storks for airlifting the babies? I don’t think so.

But once you’re certain that the constant exists, and you’ve measured its value, you can make predictions about places and things and phenomena yet to be discovered or thought of.

 

 

JOHANNES KEPLER, a German mathematician and occasional mystic, made the first-ever discovery of an unchanging physical quantity in the universe. In 1618, after a decade of engaging in mystical drivel, Kepler figured out that if you square the time it takes a planet to go around the Sun, then that quantity is always proportional to the cube of the planet’s average distance from the Sun. Turns out, this amazing relation holds not only for each planet in our solar system but also for each star in orbit around the center of its galaxy, and for each galaxy in orbit around the center of its galactic cluster. As you might suspect, though, unbeknownst to Kepler, a constant was at work: Newton’s gravitational constant lurked within Kepler’s formulas, not to be revealed as such for another 70 years.
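Kepler's relation is easy to check numerically. A minimal Python sketch, using rounded textbook orbital data rather than numbers from this essay: divide the square of each planet's period by the cube of its average distance, and the same value falls out every time; Newton's constant is buried inside that value.

```python
# Kepler's third law: (orbital period)^2 is proportional to (average distance)^3
# Periods in Earth years, distances in astronomical units (rounded values)
planets = {
    "Mercury": (0.241, 0.387),
    "Earth":   (1.000, 1.000),
    "Jupiter": (11.86, 5.203),
    "Neptune": (164.8, 30.07),
}

for name, (period, distance) in planets.items():
    print(f"{name:8s} P^2 / a^3 = {period**2 / distance**3:.3f}")
# Every line prints approximately 1.000 in these units
```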

Probably the first constant you learned in school was pi—a mathematical entity denoted, since the early eighteenth century, by the Greek letter π. Pi is, quite simply, the ratio of the circumference of a circle to its diameter. In other words, pi is the multiplier if you want to go from a circle’s diameter to its circumference. Pi also pops up in plenty of popular and peculiar places, including the areas of circles and ellipses, the volumes of certain solids, the motions of pendulums, the vibrations of strings, and the analysis of electrical circuits.

Not a whole number, pi instead has an unlimited succession of nonrepeating decimal digits; when truncated to include every Arabic numeral, pi looks like 3.14159265358979323846264338327950. No matter when or where you live, no matter your nationality or age or aesthetic proclivities, no matter your religion or whether you vote Democrat or Republican, if you calculate the value of pi you will get the same answer as everybody else in the universe. Constants such as pi enjoy a level of internationality that human affairs do not, never did, and never will—which is why, if people ever do communicate with aliens, they’re likely to speak in mathematics, the lingua franca of the cosmos.

So we call pi an “irrational” number. You can’t represent the exact value of pi as a fraction made up of two whole numbers, such as 2/3 or 18/11. But the earliest mathematicians, who had no clue about the existence of irrational numbers, didn’t get much beyond representing pi as 25/8 (the Babylonians, about 2000 B.C.) or 256/81 (the Egyptians, about 1650 B.C.). Then, in about 250 B.C., the Greek mathematician Archimedes—by engaging in a laborious geometric exercise—came up with not one fraction but two, 223/71 and 22/7. Archimedes realized that the exact value of pi, a value he himself did not claim to have found, had to lie somewhere in between.
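A quick Python check of those ancient fractions against the modern value of pi:

```python
import math

estimates = {
    "Babylonians, 25/8":       25 / 8,
    "Egyptians, 256/81":       256 / 81,
    "Archimedes low, 223/71":  223 / 71,
    "Archimedes high, 22/7":   22 / 7,
}

for label, value in estimates.items():
    print(f"{label:24s} {value:.6f}  (off by {value - math.pi:+.6f})")
# Archimedes' two fractions bracket the true value: 3.140845 < pi < 3.142857
```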

Given the progress of the day, a rather poor estimate of pi also appears in the Bible, in a passage describing the furnishings of King Solomon’s temple: “a molten sea, ten cubits from the one brim to the other: it was round all about…and a line of thirty cubits did compass it round about” (1 Kings 7:23). That is, the diameter was 10 units, and the circumference 30, which could only be true if pi were equal to 3. Three millennia later, in 1897, the lower house of the Indiana State Legislature passed a bill announcing that, henceforth in the Hoosier state, “the ratio of the diameter and circumference is as five-fourths to four”—in other words, exactly 3.2.

Decimal-challenged lawmakers notwithstanding, the greatest mathematicians—including Muhammad ibn Musa al-Khwarizmi, a ninth-century Iraqi whose name lives on in the word “algorithm,” and even Newton—steadily labored to increase the precision of pi. The advent of electronic computers, of course, blew the roof right off that exercise. As of the early twenty-first century, the number of known digits of pi has passed the 1 trillion mark, surpassing any physical application except the study (by pi-people) of whether the sequence of numerals will ever not look random.

 

 

OF FAR MORE importance than Newton’s contribution to the calculation of pi are his three universal laws of motion and his single universal law of gravitation. All four laws were first presented in his master work, Philosophiæ Naturalis Principia Mathematica, or the Principia, for short, published in 1687.

Before Newton’s Principia, scientists (concerned with what was then called mechanics, and later called physics) would simply describe what they saw, and hope that the next time around it would happen the same way. But armed with Newton’s laws of motion, they could describe the relations among force, mass, and acceleration under all conditions. Predictability had entered science. Predictability had entered life.

Unlike his first and third laws, Newton’s second law of motion is an equation:

F = ma

Translated into English, that means a net force (F) applied to an object of a given mass (m) will result in the acceleration (a) of that object. In even plainer English, a big force yields a big acceleration. And they change in lockstep: double the force on an object, and you double its acceleration. The object’s mass serves as the equation’s constant, enabling you to calculate exactly how much acceleration you can expect from a given force.
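That lockstep proportionality takes only a few lines of Python to see; the mass and forces below are arbitrary illustrative numbers:

```python
mass = 10.0                      # kg, the constant in this particular equation
for force in (5.0, 10.0, 20.0):  # newtons
    print(f"F = {force:4.1f} N  ->  a = {force / mass:.2f} m/s^2")
# Doubling the force doubles the acceleration: 0.50, 1.00, 2.00 m/s^2
```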

But suppose an object’s mass is not constant? Launch a rocket, and its mass drops continuously until the fuel tanks run out. And now, just for grins, suppose the mass changes even though you neither add nor subtract material from the object. That’s what happens in Einstein’s special theory of relativity. In the Newtonian universe, every object has a mass that is always and forever its mass. In the Einsteinian, relativistic universe, by contrast, objects have an unchanging “rest mass” (the same as the “mass” in Newton’s equations), to which you add more mass according to the object’s speed. What’s going on is that as you accelerate an object in Einstein’s universe, its resistance to that acceleration increases, showing up in the equation as an increase in the object’s mass. Newton could not have known about these “relativistic” effects, because they become significant only at speeds comparable to the speed of light. To Einstein, they meant some other constant was at work: the speed of light, a subject worthy of its own essay at another time.
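The chapter does not write out the relativistic correction, but the standard textbook factor is 1/√(1 − v²/c²), which multiplies the rest mass. A short Python sketch shows why Newton never noticed it:

```python
import math

c = 2.998e8  # m/s, speed of light

def lorentz_factor(speed):
    """Factor by which an object's effective inertia exceeds its rest mass."""
    return 1.0 / math.sqrt(1.0 - (speed / c) ** 2)

for v in (300.0, 0.1 * c, 0.9 * c):  # a fast car, 10% of light speed, 90% of light speed
    print(f"v = {v:.3g} m/s  ->  factor = {lorentz_factor(v):.9f}")
# At everyday speeds the factor is 1.000000000; at 90 percent of light speed it is about 2.294
```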

 

 

AS IS TRUE for many physical laws, Newton’s laws of motion are plain and simple. His universal law of gravitation is somewhat more complicated. It declares that the strength of the gravitational attraction between two objects—whether between an airborne cannonball and Earth, or the Moon and Earth, or two atoms, or two galaxies—depends only on the two masses and the distance between them. More precisely, the force of gravity is directly proportional to the mass of one object times the mass of the other, and inversely proportional to the square of the distance between them. Those proportionalities give deep insight into how nature works: if the strength of the gravitational attraction between two bodies happens to be some force F at one distance, it becomes one-fourth F at double the distance and one-ninth F when the distance is tripled.

But that information by itself is not enough to calculate the exact values of the forces at work. For that, the relation requires a constant—in this case, a term known as the gravitational constant G, or, among people on the friendliest terms with the equation, “big G.”
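With a value for G in hand (the modern figure quoted later in this essay), the proportionalities become concrete numbers. A minimal Python sketch, using arbitrary illustrative masses:

```python
G = 6.6742e-11  # m^3 kg^-1 s^-2, the gravitational constant

def gravity(m1, m2, r):
    """Newton's law of gravitation: F = G * m1 * m2 / r^2."""
    return G * m1 * m2 / r**2

m1, m2, r = 1000.0, 1000.0, 1.0   # two one-tonne masses, one meter apart
f = gravity(m1, m2, r)
print(f"{f:.2e} N at distance r")
print(f"{gravity(m1, m2, 2*r)/f:.3f} of F at 2r, {gravity(m1, m2, 3*r)/f:.3f} of F at 3r")
# Prints 6.67e-05 N, then 0.250 and 0.111: one-fourth and one-ninth of the original force
```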

Recognizing the correspondence between distance and mass was one of Newton’s many brilliant insights, but Newton had no way to measure the value of G. To do so, he would have had to know everything else in the equation, leaving G fully determined. In Newton’s day, however, you could not know the whole equation. Although you could easily measure the mass of two cannonballs and their distance from each other, their mutual force of gravity would be so small that no available apparatus could have detected it. You might measure the force of gravity between Earth and a cannonball, but you had no way to measure the mass of Earth itself. Not until 1798, more than a century after the Principia, did the English chemist and physicist Henry Cavendish come up with a reliable measure of G.

To make his now-famous measurement, Cavendish used an apparatus whose central feature was a dumbbell, made with a pair of two-inch-diameter lead balls. A thin, vertical wire suspended the dumbbell from its middle, allowing the apparatus to twist back and forth. Cavendish enclosed the entire gizmo in an airtight case, and placed two 12-inch-diameter lead balls kitty-corner outside the case. The gravitational pull of the outside balls would tug on the dumbbell and twist the wire from which it was suspended. Cavendish’s best value for G was barely accurate to four decimal places at the end of a string of zeroes. In units of cubic meters per kilogram per second squared, the value was 0.00000000006754.

Coming up with a good design for an apparatus wasn’t exactly easy. Gravity is such a weak force that practically anything, even gentle air currents within the laboratory encasement, would swamp gravity’s signature in the experiment. In the late nineteenth century the Hungarian physicist Loránd Eötvös, using a new and improved Cavendish-type apparatus, made mild improvements in G’s precision. This experiment is so hard to do that, even today, G has acquired only a few additional decimal places. Recent experiments conducted at the University of Washington in Seattle by Jens H. Gundlach and Stephen M. Merkowitz, who redesigned the experiment, derive the value 0.000000000066742. Talk about weak: as Gundlach and Merkowitz note, the gravitational force they had to measure is equivalent to the weight of a single bacterium.

Once you know G, you can derive all kinds of things, such as Earth’s mass, which had been Cavendish’s ultimate goal. Gundlach and Merkowitz’s best value for that is just about 5.9722 × 10²⁴ kilograms, very close to the modern value.
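One standard way to "weigh the Earth" once G is known (the usual textbook route, not spelled out in the chapter): the measured acceleration of gravity at the surface, g, and Earth's radius, R, give a mass of gR²/G. A sketch:

```python
G = 6.6742e-11   # m^3 kg^-1 s^-2, the value quoted above
g = 9.81         # m/s^2, acceleration of gravity at Earth's surface
R = 6.371e6      # m, Earth's mean radius

earth_mass = g * R**2 / G       # from setting G*M*m/R^2 equal to m*g
print(f"{earth_mass:.3e} kg")   # about 5.97e24 kg, in line with the value in the text
```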

 

 

MANY PHYSICAL CONSTANTS discovered in the past century link with forces that influence subatomic particles—a realm ruled by probability rather than precision. The most important constant among them was promulgated in 1900 by the German physicist Max Planck. Planck’s constant, represented by the letter h, was the founding discovery of quantum mechanics, but Planck came up with it while investigating what sounds mundane: the relation between the temperature of an object and the range of energy it emits.

An object’s temperature directly measures the average kinetic energy of its jiggling atoms or molecules. Of course, within this average some of the particles jiggle very fast, whereas others jiggle relatively slowly. All this activity emits a sea of light, spread over a range of energies, just like the particles that emitted it. When the temperature gets high enough, the object begins to glow visibly. In Planck’s day, one of the biggest challenges in physics was to explain the full spectrum of this light, particularly the bands with the highest energy.

Planck’s insight was that you could account for the full sweep of the emitted spectrum in one equation only if you assume that energy itself is quantized, or divided up into itty bitty units that cannot be subdivided further: quanta.

Once Planck introduced h into his equation for an energy spectrum, his constant began to appear everywhere. One good place to find h is in the quantum description and understanding of light. The higher the frequency of light, the higher its energy: Gamma rays, the band with the highest frequencies, are maximally hostile to life. Radio waves, the band with the lowest frequencies, pass through you every second of every day, no harm done. High-frequency radiation can harm you precisely because it carries more energy. How much more? In direct proportion to the frequency. What reveals the proportionality? Planck’s constant, h. And if you think G is a minuscule constant of proportionality, take a look at the current best value for h (in its native kilogram-meters squared per second): 0.00000000000000000000000000000000066260693.
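That proportionality is the relation E = h × frequency: the energy of a single photon is Planck's constant times the light's frequency. A short Python sketch comparing a radio photon with a gamma-ray photon (the frequencies are illustrative round numbers):

```python
h = 6.6260693e-34  # kg m^2 / s, Planck's constant (the value quoted above)

def photon_energy(frequency_hz):
    """Energy of a single photon: E = h * frequency."""
    return h * frequency_hz

radio = photon_energy(1.0e8)    # an FM-radio-band photon, ~100 MHz
gamma = photon_energy(1.0e20)   # a gamma-ray photon

print(f"radio photon: {radio:.2e} J")
print(f"gamma photon: {gamma:.2e} J, about {gamma / radio:.0e} times more energetic")
```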
