Speaking of Science

The Scienticity Blog


Atoms are not Watermelons

Posted by jns on 29 August 2005

A few days back I finished reading How to Write: Advice and Reflections, by Richard Rhodes. Although I’m frequently drawn to read them, books about writing are rarely satisfying, interesting, or useful. Rhodes’ book managed all three, and I can recommend it.

Here are three passages I made note of as I read that I wanted to copy into my blog, which also serves as my commonplace book.

People lost in a wilderness have been known to find their way out guided by the wrong map; orienting is apparently a function only loosely tied to locality.1 [p. 30]

Not everyone liked the arrangement.2 Dixy Lee Ray, the eccentric former chair of the U.S. Atomic Energy Commission and governor of the state of Washington, reviewed The Making of the Atomic Bomb for the Washington Times. My war scenes were too graphic, Dr. Ray complained. Everyone knows that war is terrible; why go on about it? Worse, she wrote, the book jumps around. [p. 108, italics in original]

A less global structural problem was deciding at what level to pitch scientific explanation. I’d read enough popular science to be impatient with explanation that depended on fanciful analogies. Besides being condescending, comparing an atom to a watermelon wastes half the analogy. Fortunately, nuclear physics is largely an experimental science. Reading through some of the classic papers in the field, I realized that I could explain a result clearly and simply by describing the physical experiment that produced it: a brass box, the air evacuated, a source of radiation in the box in the form of a vial of radon gas, and so on. Then I and the reader could visualize a process in terms of the manipulation of real laboratory objects, not watermelons, just as the experimenters themselves did, and could absorb the culture of scientific work at the same time–the throb of the vacuum pump, the smell of its oil. [pp. 109--110]

———-
1He had been discussing memory and the occasional difficulty of coming up with just the right word. He discusses the use of dictionaries and thesauruses to help, and how he frequently finds words that were “just right” but weren’t what he was looking for.

2“The arrangement”, that is, of his masterly The Making of the Atomic Bomb, in which he uses historical narrative to follow several threads in science and politics, carrying each one to some stopping point before going back to earlier times to pick up a thread put down for a while.


Not All Things Freeze

Posted by jns on 20 July 2005

Some time ago I started reading1 Robert Wolke’s What Einstein Told His Cook 2. It is a collection of very short pieces about food and cooking from a chemist’s point of view, assembled from his Washington Post columns.

Rather early on though, he made a small error of fact. I point this out not to chastise the author, but as an excuse to talk about helium, one of my favorite science topics.2

He wrote:

[In answer to a reader's question about why frozen cola exhibits separated ice crystals:]

All liquids turn into solids — that is, they freeze — when they get cold enough.

[Robert L. Wolke, What Einstein Told His Cook 2: The Sequel -- Further Adventures in Kitchen Science (W. W. Norton & Company, New York, 2005) p. 5.]

Before I make my point, there are a couple of preliminaries to discuss about freezing.

“Freeze” itself is easy enough: it just means turning from a liquid or a gas — better to say “from a fluid”, because “fluid” encompasses both3 — into a solid. “Solid” generally implies some sort of crystalline structure, but we can be generous and include amorphous solids like glasses.4 In other words, we can take “freeze”, as a synonym for “solidify”, to mean much the same as it means in casual English.

“When they get cold enough” is also without serious hidden landmines, although usually a physicist will want to know under what conditions “they get cold enough”. One common condition is that “they get cold” just sitting around in the air, under normal atmospheric pressure; water freezing under normal atmospheric conditions is entirely familiar to us.

However, thermodynamically speaking, freezing under normal atmospheric conditions is not terribly interesting scientifically, even if it can be well defined. A particular condition of thermodynamic interest is when a substance freezes “under its own vapor pressure”.

Imagine a closed vessel — make it clear glass so we can see what’s going on inside — filled only with the pure substance of interest and then sealed off. We put enough stuff into the vessel, and lower the temperature enough, that there is liquid stuff and gaseous stuff in the vessel, both visible at the same time. On the earth, all the liquid will be at the bottom of the vessel, and all the gas will be at the top, with the two phases separated by a “meniscus”, or interface.5 Let’s also say, to avoid misunderstanding, that the meniscus is exactly in the middle (by volume) of the vessel. That way, as we vary the temperature there will always be gas and liquid in the vessel, and the meniscus will always be right in the middle. If we keep the temperature of the vessel constant for a while, so that all the stuff is at the same temperature, then the two phases are said to be “coexisting in thermodynamic equilibrium”.

Now, lower the temperature of the stuff in the vessel and, sooner or later, solid stuff will appear and the material is said to have frozen “under its own vapor pressure”. In fact, there will be a unique temperature, called the “triple-point temperature”, at which gas, liquid, and solid phases can all coexist in thermodynamic equilibrium.6

To be precise then, our author probably meant by his statement that all liquids freeze under their own vapor pressure if they get cold enough.

However, this isn’t strictly true. The element helium has many interesting and surprising properties. Among them, helium is the only element that will not freeze under its own vapor pressure. It will indeed freeze, but it must be under at least 25 atmospheres of pressure to do so. This also implies that helium has no triple point, unlike any other elemental substance you can think of.

For many years around the turn of the 19th to 20th century, there was thought to be a class of substances called “permanent gases”. These were gases that could not be caused to condense into droplets of liquid, no matter how much pressure was applied to them. Then it was discovered that they would condense, provided they were cooled to low enough temperatures first. The temperature below which each must be cooled before condensation is even possible is its “critical temperature”.7

The critical temperature is usually not the same as the “boiling point” temperature. “Boiling” usually implies that the substance is at atmospheric pressure. For example, nitrogen boils at 77 K8 but its critical temperature is about 126 K; oxygen has a critical temperature of 155 K, but boils at 90 K.

The critical temperature of helium is about 5.2 K, which is really, really, really cold. Until helium is cooled at least to that temperature, condensation is impossible and liquid cannot be produced under any pressure. As it turns out, the critical pressure is also low (about 2.2 atmospheres), so helium will liquefy rather easily at atmospheric pressure, if one can get it cold enough; it boils at about 4.2 K.

Helium, the last of the “permanent gases”, was finally liquefied by the Dutch physicist Heike Kamerlingh Onnes in 1908 at his laboratory in Leiden.9 Some of us, who have been low-temperature physicists in previous lives, think of Kamerlingh Onnes as a sort of scientific grandfather, since we date the beginning of low-temperature physics to his liquefaction of helium. We also esteem the memory of Sir James Dewar (1842–1923), inventor of the Dewar flask (commercially known as a Thermos bottle), without a couple of which my own thesis experiment would not have been possible. Like atomic physics, low-temperature physics is a distinctly 20th-century discipline.

Helium does not freeze under its own vapor pressure, but it does do very odd things when it is cooled further below its critical temperature. At 2.17 K (at vapor pressure, i.e., at liquid-vapor coexistence), pure helium-4 (by far the most abundant isotope of helium10) undergoes what is known as a “superfluid transition”, also called the “lambda line” (the reason for the name to be explained in another essay sometime). The superfluid phase of helium exhibits many wondrous properties, like the lack of viscosity — the ability to flow through microscopic channels unimpeded11 — and various other unusual behaviors.

However, we must save those topics for another time.12

———-
1I’ve long since finished, too, and moved on to the prior volume in the series. I find them a little on the light-weight side, but considering the audience and the venue, that’s not altogether surprising. Regardless, they have been fun and informative reading.

2I have no doubt that this is because in my formative years, i.e., when I was a graduate student, I did low-temperature work (cryogenics) at liquid-helium temperatures (about 2–5 K, or 2–5 degrees above “absolute zero”) measuring properties of helium itself.

3Operationally, “fluid” is anything that “flows”, i.e., any substance which is subject to the equations of motion from fluid dynamics.

4There is controversy over whether “glassy solids” are a different form of matter from gas, liquid, and solid, but for the present purposes it’s not necessary to choose sides.

5Observing the meniscus, for example its curvature, can tell us many interesting things about the properties of the substance.

6Practically speaking, triple-points are very useful since they occur at a unique, well-defined and reproducible temperature. If one can contrive, say, to have a glass vessel filled with pure water in equilibrium with all three phases present, then one knows exactly what the temperature is of the entire system. This procedure is actually used in the definition of the “International Practical Scale of Temperatures”, a set of standard procedures for establishing nearly thermodynamic absolute temperature calibrations in the laboratory. Temperature, though, is a whole other story.

7Since “critical phenomena”, the study of elemental properties very near the critical point (i.e., near the critical temperature and critical pressure) was my area of research for some 15 years or more, there’s much more I could say about it, but this isn’t the place.

8“K” is the abbreviation for Kelvins, the units of the thermodynamic temperature scale. A Kelvin (NB, not “degrees Kelvin”!) is the same size as a Celsius degree; “0” on the Kelvin scale is “absolute zero”, which is about -273 Celsius degrees, or -459 Fahrenheit degrees.
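Since this footnote is really just a pair of conversion rules, here is a minimal sketch of them in Python (my own illustration, using the more precise offsets of -273.15 °C and -459.67 °F for absolute zero):

```python
def kelvin_to_celsius(t_k):
    """A Kelvin is the same size as a Celsius degree; only the zero point differs."""
    return t_k - 273.15

def kelvin_to_fahrenheit(t_k):
    """A Fahrenheit degree is 5/9 the size of a Celsius degree, offset so 0 C = 32 F."""
    return kelvin_to_celsius(t_k) * 9.0 / 5.0 + 32.0

print(kelvin_to_celsius(0.0))                  # absolute zero in Celsius: -273.15
print(round(kelvin_to_fahrenheit(0.0), 2))     # absolute zero in Fahrenheit: -459.67
print(round(kelvin_to_celsius(77.0), 2))       # nitrogen's boiling point: -196.15 C
```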

9Read a fascinating essay about this called “Heike Kamerlingh Onnes and the Liquefaction of Helium”, written by Jedtsada Laucharoen, a student at the Horace Mann School, The Bronx. His essay was the 1st place prize winner in the physics category of the “Laureates of Tomorrow Nobel Essay Contest”.

10The only other naturally occurring isotope is helium-3; I forget offhand what the relative abundances are. But, this does give me an excuse to quote (from memory, so my precision may only be close) my favorite first line from a book, J. Wilks’ The Properties of Liquid and Solid Helium (my bible in graduate school): “Helium exists in three naturally occurring isotopes: helium-4, helium-3, and helium-6; as the latter has a half-life of only 0.67 seconds, it need concern us no further.” And, indeed, in the ensuing 700+ pages, helium-6 is never mentioned again.

11Every low-temperature physicist’s nightmare is to develop a microscopic “superleak” in his apparatus. Below the superfluid transition helium will pour out of the apparatus, but it is exceedingly difficult to do leak detection at such low temperatures, so the usual response is to throw the thing out and start over. Fortunately, I never encountered that problem.

12As a closing treat, I will thrill you with the title of my Ph.D. dissertation: Shear Viscosity and Thermal Conductivity in Liquid Helium-4 and Dilute Mixtures of Helium-3 in Helium-4 near the Lambda Transition


Statistical Fluctuations

Posted by jns on 2 July 2005

Abraham Pais, a physicist who wrote what is generally regarded as the definitive scientific biography of Einstein, said of his subject that there are two things at which he was “better than anyone before or after him; he knew how to invent invariance principles and how to make use of statistical fluctuations.” Invariance principles play a central role in the theory of relativity. Indeed, Einstein had wanted to call relativity the “theory of invariants”.
["Miraculous Visions: 100 Years of Einstein", The Economist, 29 December 2004.]

By way of explanation for the quotation: I came across it a few months ago and wanted to make note of it 1) because it’s quite true, and gives a remarkable insight into Einstein’s mode of thinking; and 2) because fluctuations loom large in my own way of looking at the physical world — because of my working experience in science — and because invariance principles are an interesting and important concept in physics. I’d like to discuss both of them sometime, but it will require far more presence of mind, and time, than I have to give it right now. So, I’ll preserve the quotation here and maybe get to it later.


The Purpose of Science (Part I)

Posted by jns on 27 June 2005

About 10 or 12 years ago, when I was still a scientist producing science, I was working on an experiment that eventually flew on two Space Shuttle missions (in 1994, then 1996 — our project was called “Zeno”1). We were working under the umbrella of “microgravity” research, research that wanted to exploit the very reduced gravity available while orbiting the Earth.2 We were studying some general properties of fluids in very unusual thermodynamic states; when they were in these states, they were very susceptible to the effects of gravity, which suppressed the effect we were trying to look at. “Turning down gravity” was our answer.
At any rate, our experiment was of the type often referred to as “pure science”; we were doing “science for science’s sake”. Of course, to us, the goal of the experiment was importantly related to questions about thermodynamics, critical phenomena, universality, the renormalization-group theory, and other things that we physicists got excited about but no one else had ever heard of.
I spent quite a bit of time working with the NASA Public-Affairs Office (at Marshall Space Flight Center, our home for mission operations, in Huntsville, AL) trying to find interesting things that they could say to the public-at-large about our project. We all took that goal — inviting the public to share our excitement — very seriously and worked hard at it, but it was a challenge to explain in a sound-bite why we were doing it all and why we were spending $20 million to do it.
I still think that elucidating science to interested non-scientists is an important thing to do. Generally, my feeling is that understanding full-blown concepts deserves more than bite-sized explanations, at least when there’s time, but there’s not always time.
The perennial question about any science experiment seems to be “what’s it good for”, that is, “what new product to make our lives better are you working on”. It was very frustrating to be asked this over and over, when we felt that our work was important but that our distance from products on the shelf was rather large. We often felt that it was not the best question to ask.
I wanted to expound about the thrill of intellectual pursuit, the great adventure, exploring the unknown corners of the physical universe … but those weren’t the answers that were wanted. We could say some things about how it might lead to new, environmentally friendlier refrigerants, or help in industrial painting applications (both were true), but that seemed so trivializing.
I still don’t have the sound-bite answer. The best I’d been able to come up with then was a small parable, a metaphor for the place of science in a technology consumer’s life.

Think of technology as being a house that we all live in. The house of technology is built on a foundation of science. The foundation is made of many, many bricks. Each brick is a scientific idea, or scientific discovery, or the result of a scientific experiment. All the bricks fit together and make a solid foundation for the house of technology.
Perhaps, we think, all those bricks aren’t really necessary to hold up the house. Surely we could take some out and the house would still stand.
Undoubtedly this is true. Pull out some of the bricks. Choose some more and yank them out, too. For a while the house is fine, but sooner or later trouble arrives. The house develops cracks in the walls, the floor shifts precariously, windows no longer open properly. Ultimately the house collapses, unable to stand without a solid foundation.
Which bricks are the most important ones? Who can say which bricks are supporting the house and which ones are not essential for holding the house up?
Technology is built on a solid foundation of science, a foundation that gets its strength from many, many interconnected bricks. Although individual bricks look individually unimportant, and any one or two might be removed with no apparent effect, all of them are needed to keep the foundation strong.

[Edited and updated from the original post of 4 April 2005.]
__________
1There is a Zeno home page, which is very rudimentary. I put it together during the second Zeno mission in 1996. It was my first website, and the technology was still in the early stages, which explains why there was no website at the time of the first mission in 1994.
2“Microgravity” was meant literally as a measure: micro, 10^-6, times g, the acceleration due to gravity. One micro-g was about the level of residual accelerations in quiet orbit on the space shuttle, i.e., provided the astronauts weren’t exercising or bouncing (literally) off the walls. These tiny accelerations were mostly caused by tidal forces on the shuttle itself, due to the fact that the spacecraft is large enough (an “extended body”, i.e., not a “point mass” without size) that different parts of the craft, being at slightly different distances from the center of the Earth, would prefer to orbit the Earth at slightly different velocities. Thus, the magnitude also depended slightly on the “attitude” of the Shuttle, whether it was moving with its nose in front of it (in the direction of the velocity vector) or pointed away from the center of the Earth (tail to the Earth); the latter was common, apparently because it led to more stable orbits that required fewer firings of the retro-rockets to maintain. However, I’m no expert at orbital dynamics, which would be an entirely different posting anyway.


Polling: “Margin of Error”

Posted by jns on 24 June 2005

This is not a particularly recent poll, although the assertion is still true. But that’s not the point.

The New York Times > Washington > New Poll Finds Bush Priorities Are Out of Step With Americans

The poll was conducted by telephone with 1,111 adults from Thursday through Monday. It has a margin of sampling error of plus or minus three percentage points.

Have you ever wondered where that “margin of error of plus or minus 3 percentage points” comes from, or why the odd number of people polled (which most people react to by thinking it’s much too small)?
The simple answer is simple. Follow along:

  1. The sample size is 1,111
  2. The square root of 1,111 is 33.33
  3. 33.33 / 1111 = 0.03, or 3%

The answer is that simple. In random sampling from a uniform population, the best estimate of how good the average result is will be

+/- sqrt(N)/N = +/- 1/sqrt(N),

where N is the number of [statistically independent] samples.
The inverse works too. If you are told that the error is E%, then

N = 1 / (E/100)^2

is the original sample size.
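The two relationships above take only a few lines of Python to check (a minimal illustration of the arithmetic, not anything from the pollsters’ methodology):

```python
import math

def margin_of_error(n):
    """Best guess of the fractional error for n independent random samples: 1/sqrt(n)."""
    return 1.0 / math.sqrt(n)

def sample_size(error_percent):
    """Invert the relation: n = 1 / (error fraction)^2."""
    return round(1.0 / (error_percent / 100.0) ** 2)

print(round(100 * margin_of_error(1111), 1))  # 3.0 percent, matching the Times poll
print(sample_size(3.0))                       # 1111, recovering the sample size
```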
There is no mystery about this relationship between error and sample size in polls, and it is not what determines careful or “scientific” polling. It is simply an unvarying, mathematical result giving the best guess you can make about the error in an average calculated from random (i.e., statistically independent) samples taken from the larger population that one is trying to characterize.
The trick, of course, is in that bit about taking “random samples”. That’s the part that polling organizations work very hard at: to convince their customers that they (and they alone among their competitors) know how to take very good, very nearly “random samples” from any given population — all Americans, all likely Republican voters, all women under 18 who watch MTV, all men over 50 who eat chocolate ice cream at least twice a week, whatever group the poll’s sponsor is interested in.
All the work, or artistry (some would like to say “science”) goes into selecting the samples so that they will be randomly drawn from the population of interest; none of it goes into calculating the margin of error.
So now, when you hear a margin of error quoted, you can impress all your friends by revealing the exact number of people who were asked the question, and sound amazingly clever.


Traditional Atomic Theory

Posted by jns on 16 June 2005

Reminding us that atoms were “just a theory” until the twentieth century when experiment finally established atomic reality (in some quantum mechanical sense yet to be understood fully):

But as late as 1894, when Robert Cecil, the third Marquis of Salisbury, chancellor of Oxford and former Prime Minister of England, catalogued the unfinished business of science in his presidential address to the British Association, whether atoms were real or only convenient and what structure they hid were still undecided issues:

“What the atom of each element is, whether it is a movement, or a thing, or a vortex, or a point having inertia, whether there is any limit to its divisibility, and, if so, how that limit is imposed, whether the long list of elements is final, or whether any of them have any common origin, all these questions remain surrounded by a darkness as profound as ever.”

[Richard Rhodes, The Making of the Atomic Bomb (Simon & Schuster, New York, 1986) p. 31.]

———-
*Perhaps the idea of atoms is the oldest surviving scientific concept in that “just a theory” category — far older certainly than the continually changing, ever evolving “traditional marriage”.


The Discovery of Helium

Posted by jns on 8 May 2005

“Observations of the 1868 [solar] eclipse led to the discovery of a bright yellow emission line in the spectrum of the [sun's] chromosphere, which is normally not observable except during a few seconds just before and just following totality [in a solar eclipse]. What happened next is nicely described by C.A. Young in the 1895 edition of his book The Sun:

The famous D3 line was first seen in 1868, when the spectroscope was for the first time directed upon a solar eclipse. Most observers supposed it to be the D line of sodium, but P.J.C. Janssen noted its non-coincidence; and very soon, when Lockyer and Frankland took up the study of the chromosphere spectrum, they found that the line could not be ascribed to hydrogen or to any then known terrestrial element. As a matter of convenient reference Frankland proposed for the unknown substance the provisional name of “helium” [after the Greek name for the sun, "helios"] …
Naturally there has been much earnest searching after the hypothetical element, but until very recently wholly without success ….
The matter remained a mystery until April, 1895, when Dr. Ramsay, who was Lord Rayleigh’s chemical collaborator in the discovery of argon, in examining the gas liberated by heating a specimen of Norwegian cleveite, found in its spectrum the D3 line, conspicuous and indubitable … Cleveite is a species of uraninite or pitch blende, and it soon appeared that helium could be obtained from nearly all the uranium minerals.

“As we now know, the connection between uranium and helium is that radioactive decay of uranium involves what were at that time called alpha particles, which are helium nuclei. These nuclei pick up electrons to become atoms of helium, which can become trapped in uranium-rich rocks, to be released when the rocks are heated.”
[Nearest Star: The Surprising Science of Our Sun, Leon Golub and Jay M. Pasachoff (Harvard University Press, Cambridge, MA, 2001) pp. 141--142.]


Rickets & Windows

Posted by jns on 2 May 2005

File under “unintended consequences”:

In 1696 a window tax was introduced in Britain when the financially hard-pressed government started taxing properties based on the number of windows. The citizenry responded by bricking up windows, and the darker houses are thought to have contributed to an increased incidence of rickets and tuberculosis.

[David Whitehouse, The Sun: A Biography (Wiley, Chichester, 2005), p. 93.]


A Star Explodes in Slow Motion

Posted by jns on 27 April 2005

I’ve thoroughly enjoyed reading this book by Peter Atkins (reference below), and I found his slow-motion description of the process that leads to the creation of a supernova uncommonly gripping and dramatic, as well as enlightening.

Stars bigger than about eight Suns have a violent future. The temperature in these giants can rise so much, to around 3 billion degrees, that “silicon burning” takes place, in which helium nuclei can merge with nuclei close to silicon and gradually build heavier elements, stepping through the periodic table and finally forming iron and nickel. These two elements have the most stable nuclei of all, and no further nuclear fusion releases energy. At this stage, the star has an onion-like structure with the heaviest elements forming an iron core and the lighter elements in successive shells around it. The duration of each of these episodes depends critically on the mass of the star. For a star twenty times as massive as the Sun, the hydrogen-burning epoch lasts 10 million years; helium burning in the deep core then takes over and lasts a million years. Then fuels get burned seriously fast in the core. There, carbon burning is complete in 300 years, oxygen is gone in 200 days, and the silicon-burning phase that leads to iron is over in a weekend.

The temperature is now so high in the core, about 8 billion degrees, that the photons of radiation are sufficiently energetic and numerous that they can blast iron nuclei apart into protons and neutrons, so undoing the work of nucleosynthesis that has taken billions of years to achieve. This step removes energy from the core, which suddenly cools. The outer parts of the core are in free fall and their speed of collapse can reach nearly 70 thousand kilometres a second. Within a second, a volume the size of the Earth collapses to the size of London. That fantastically rapid collapse is too fast for the outer regions of the star to follow, so briefly the star is a hollow shell with the outer regions suspended high over the tiny collapsed core.

The collapsing inner core shrinks, then bounces out and sends a shockwave of neutrinos through the outer part of the core that is following it down. That shock heats the outer part of the core and loses energy by producing more shattering of the heavy nuclei that it passes through. Provided the outer core is not too thick, within 20 milliseconds of its beginning, the shock escapes to the outer parts of the star hanging in a great arch above the core, and drives the stellar material before it like a great spherical tsunami. As it reaches the surface the star shines with the brilliance of a billion Suns, outshining its galaxy as a Type II supernova, and stellar material is blasted off into space.

[Galileo's Finger: The Ten Great Ideas of Science, Peter Atkins (Oxford University Press, Oxford, 2003) pp. 256--257.]


NPC ID “Debate”

Posted by jns on 23 April 2005

Bob Park, a physicist who writes the brief “What’s New” reports for the American Physical Society with a great deal of wit and withering observation (archives here, subscribe here), apparently attended a recent press “event” at the National Press Club put on by the irrepressible [so-called] Design [so-called] Institute:

EVOLUTION: DISCOVERY INSTITUTE FINDS A SCIENTIST TO DEBATE.
The National Press Club in Washington, DC is a good place to hold a press conference. If a group can make its message look like an important story, it can get national coverage. The message of the Seattle-based Discovery Institute is simple: “Intelligent Design is science.” That’s bull feathers of course, but that’s why they have PR people. Science is what scientists do, so they gotta look like scientists. Nothing can make you look more like a scientist than to debate one. Scam artists all use the “debate ploy”: perpetual-motion-machine inventors, magnet therapists, UFO conspiracy theorists, all of them. They win just by being on the same platform. So, the Discovery Institute paid for prominent biologist Will Provine, the Charles A. Alexander Professor of Biological Sciences at Cornell, to travel to Washington to debate one of the Discovery Institute’s “kept” PhDs, Stephen Meyer, at the National Press Club on Wednesday. It was sparsely attended. Most were earnest, well-scrubbed, clean-cut young believers, who smiled, nodded in agreement and applauded at all the right times. The debate was not widely advertized. I’m not sure they really wanted a lot of hot-shot reporters asking hard questions. The only reporter was from UPI, which is owned by the Rev. Sun Myung Moon and the Unification Church, a spiritual partner of the Discovery Institute. The next day I searched on Google for any coverage of the debate. The only story I could find was in the Washington Times, a newspaper owned by the Rev. Sun Myung Moon.

["What's New", by Robert Park, 22 April 2005.]