Posted by jns on 16 January 2008
As so often happens, this began innocently enough.
It all started on Monday, when a friend of mine sent me a YouTube link, claiming that he had found the perfect theme song for Ars Hermeneutica’s Sun Truck project. Indeed he may have done. The song was called “Why Does the Sun Shine?”. Fans of They Might Be Giants will find the song familiar, because TMBG appear to have performed it frequently, and many versions and performance recordings exist. This one is my favorite so far.
Then, in an amazing bit of thought convergence, on Tuesday night, another friend announced that he had found the perfect theme song for the Sun Truck project!
“Oh?” I asked, innocently enough. “Does it begin with the line ‘The sun is a mass of incandescent gas…’?”
He was a bit deflated, but only a bit. Being a big fan of very alternative music, his version was a mash-up called “Shining Sun Flash”, put together from Moog Machine’s “Jumpin’ Jack Flash,” Tom Glazer’s “Why Does The Sun Shine?,” and Earth Wind & Fire’s “Shining Star”. It comes from an online album of extraterrestrially themed music called “Sounds for the Space Set“.
This did get us some more information about the song, though: the suggestion that it was originally performed by one Tom Glazer. I decided to do a little follow-up to verify that and maybe look into getting permission to use the song with the Sun Truck project.
Well, a little follow up turned into the beginning of a whole project about science songs, a worthwhile topic in itself. I’ve only scratched the surface.
The song “Why Does the Sun Shine?” was indeed first performed by folk-singer Tom Glazer. The song’s first appearance was as part of a six-LP set of recordings known collectively as the “Singing Science Records” — or, “Ballads for the Age of Science” (the Wikipedia entries for Glazer and Zaret differ on this point).
That 6-LP series contained dozens of songs on science written by Hy Zaret* (lyrics) and Lou Singer (music), produced by Zaret in the late 1950s and early 1960s. The albums,
- Space Songs
- Energy & Motion Songs
- Experiment Songs
- Weather Songs
- Nature Songs
- More Nature Songs
were packed with songs that had titles like:
- Planet Minuet
- Ultra Violet And Infra Red
- It’s A Magnet
- Warm Fronts, Cold Fronts
- Why Do Leaves Change Their Color
- How Does A Cow Make Milk
and, of course, “Why Does the Sun Shine?”. Two of the albums were performed by Tom Glazer.
I was delighted to find that all of the songs on all of the (long out of print) albums are preserved and available online, at the “Singing Science Records” page of Jef Poskanzer.
The song “Why Does the Sun Shine?” appears to have a unique cultural status. Before this week I was blissfully ignorant of its existence, but plenty of other people have enjoyed it for years. Not only that, but it has enough status that its lyrics, for reasons that are not entirely clear to me, appear on a web page served by the National Institutes of Health. At least it saves me the trouble of reproducing them here, although they fail to mention that the couplets following the ellipses are done in interrupted voice-over. But you’ll notice that if you listen to one of the recordings available.
Now, that didn’t quite exhaust the subject for me. While I was searching for information about Tom Glazer and the origins of this particular song, I turned up several fascinating articles, web pages, and databases devoted to the topic of science songs. Hey, I have to put the links somewhere!
- The New York Times article referenced in the note below is about a young musician named Timothy Sellers who, along with his band Artichoke, had (at that time) been working on a project to record 26 songs he wrote celebrating the lives of historic scientists, one for each letter of the alphabet. At the time of the article they had just released “26 Scientists: Volume 1, Anning to Malthus”. I admit that I haven’t yet heard any of the songs.
Science songwriting is a little-known avocation indulged in by many working scientists; in many cases their results deserve to remain little known. One sees occasional efforts shared, for example, in the pages of Physics Today. Having found several source-pages for these delightful treasures, I didn’t want to lose track of them again.
- New Scientist, on 28 June 2007, published a delightful article by Gaia Vince called “Top 10: Science Pop Songs“; along with all the appended comments, it’s a good survey of the state of the art, such as it is.
- Walter Smith, of Haverford College, has compiled and annotated a magnificent bibliography of Physics Songs as part of the project he calls PhysicsSongs.org.
- Greg Crowther, who is on the faculty at the University of Washington, apparently likes to write and perform his own science songs.
- Not only does he write his own, Greg Crowther also maintains MASSIVE (“Math And Science Song Information, Viewable Everywhere”), a database of science and math songs.
There, perhaps that will keep us busy for a while. I fear that I’m not through with this topic.
Oh dear, it seems that I even forgot to mention Tom Lehrer!
———-
* There is an interesting side-controversy here about the true author of the songs, or rather, about the real Hy Zaret. The name Hy Zaret is associated with the song “Unchained Melody“, which he wrote; we’re told it may be the most recorded song of the 20th century. No doubt because of its popularity, there is a person named William Stirrat who claims that he wrote “Unchained Melody” using the pseudonym Hy Zaret.
Wikipedia assures us that Stirrat is an impostor, but the page for Zaret notes that the false claim has gotten around. In particular, I myself quickly found that the false information had gotten as far as the New York Times, where one finds this mention:
Around the same time [late 1950s], William Stirrat, an electronics engineer, co-produced six albums of science songs for children (“Why Does the Sun Shine?” and “Vibration”). Mr. Stirrat, whose songwriting nom de plume was Hy Zaret, was better known as the person who wrote the lyrics to “Unchained Melody.”
[Michael Erard, "When You Wish Upon an Atom: The Songs of Science", New York Times, 17 May 2005.]
Posted by jns on 30 December 2007
Isaac & I returned home yesterday, flying from Kansas City (central standard time) to Washington, DC (eastern standard time). As we arrived at the gate in DC, I overheard this conversational exchange from the seats in front of mine:
Mother: Oh, look! My cell phone has changed back to eastern time.
Teen-age son: That’s because they work with satellites, and they know where you are.
I was a bit surprised that, contrary to common belief, young people still don’t know everything. There is at least one surprising misconception in the son’s mind that I should have cleared up on the spot, but I didn’t. Y’all are so lucky.
Modern mobile phones, known also as cell phones, do not communicate through satellites, and they never have. There have existed satellite telephones that do, but they’re a much different beast, not to mention much larger and much heavier.
Cell phones communicate with cell towers–or, more generally, cell sites, since not all cellular antennae are on towers; many are hidden on building tops, for instance. If the cell sites are visible they are easily recognizable, most often triangular structures with vertical “bars” on each face of the triangle. Each of the “bars” is actually an antenna for transmitting to mobile phones or else receiving signals from them. The antennae are used in pairs so that they send and receive signals directionally.
The whole idea of “cells” was originally a way to provide coverage over a wide area without requiring a large amount of power in the handset, and also a way to use restricted amounts of radio-frequency bandwidth efficiently while providing for a large number of users.
The area over which a cellular provider wants coverage is divided into hexagonal “cells”. (Look sometime at a bathroom floor that has 6-sided tiles and you will see that an area can be completely covered without gaps.) At the center of each cell is a cell site. The cell site has its three-sided shape so that it can hear in all directions, and it is responsible for all the cellular phones in its cell.
The imagined boundaries of the cells overlap a bit, so each cell actually operates on a slightly different frequency# from all of its neighbors. Because of that, cell sites over a wider area can reuse frequencies, but there is the added technical challenge of tracking a particular cell phone between the ranges of neighboring cell sites and switching an active conversation from being routed through one cell site to another without dropping the call. That process is called “handover”. When a particular phone switches cells it also shifts the frequency that it uses for the radio link by a small amount.
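The no-shared-frequencies-between-neighbors idea can be sketched in a few lines of code. This is only an illustrative toy (real carriers plan channels with far more care): it uses the textbook 7-group reuse pattern on a hexagonal grid, with cells labeled by axial coordinates (q, r).

```python
# Toy sketch: the classic 7-cell frequency-reuse pattern on a hexagonal grid.
# Each cell gets one of 7 frequency groups; the formula guarantees that no
# two adjacent cells ever share a group, so neighboring cell sites don't
# interfere, yet groups repeat across the wider area.
def frequency_group(q, r):
    return (2 * q + 3 * r) % 7

# The six neighbors of a hex cell, in axial coordinates.
NEIGHBORS = [(1, 0), (1, -1), (0, -1), (-1, 0), (-1, 1), (0, 1)]

# Check a patch of the grid: every cell differs from all six neighbors.
for q in range(-5, 6):
    for r in range(-5, 6):
        for dq, dr in NEIGHBORS:
            assert frequency_group(q, r) != frequency_group(q + dq, r + dr)
print("no neighboring cells share a frequency group")
```

The formula (2q + 3r) mod 7 is just one standard way to color a hex grid so that touching cells never match; any scheme with that property would serve.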
The size of each cell varies depending on terrain and obstructions and such things, but in denser areas cell sites will be about 5 to 8 miles apart, so that’s the furthest that your cell phone usually has to transmit its signal, which is something it can manage to do with the relatively tiny batteries that it carries.
Cell sites do not continuously track a cell phone unless the phone is engaged in a conversation. If your phone has been off, it will talk with the nearest cell site when you turn it back on, and the phone network takes note of your position. Occasionally the cell sites will query phones with broadcast messages; the phones respond, and that way your cellular network can quickly find which cell site to use to contact your phone when you are receiving a call. The cell sites also broadcast timing signals, which is how cell phones always seem to know the right time. Note this: the cell site doesn’t have to know where your phone is to get the time correct; instead, your phone simply takes the local time of the cell site that it can hear.
Now, one last note about cell phones talking through satellites. Most communications satellites are in geostationary orbits, meaning they appear stationary in the sky. This is where you will find the satellites that broadcast satellite radio and television, too.
Anyway, to be in a geosynchronous orbit requires that the satellite be at an altitude of about 22,000 miles. That’s a long way compared to the 8-mile distance to the nearest cell site. In fact, it’s about 2,750 times as far away. Other things being equal,* that means that the cell phone would require 7.6 million times the power for its signal to reach the satellite. As you might guess, that would require a much bigger battery than your cell phone has in it.
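The arithmetic in that paragraph is easy to check. A quick sketch of the inverse-square reasoning, using the post's round numbers (8 miles and 22,000 miles):

```python
# The post's arithmetic: a geosynchronous satellite is ~22,000 miles up,
# versus ~8 miles to the nearest cell site. Radiated power needed grows
# as the square of the distance (the inverse-square law).
cell_site_miles = 8
satellite_miles = 22_000

ratio = satellite_miles / cell_site_miles   # how many times farther away
power_factor = ratio ** 2                   # how many times the power

print(f"{ratio:.0f}x farther, {power_factor / 1e6:.1f} million x the power")
# -> 2750x farther, 7.6 million x the power
```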
By the way, before I finish, I have to chastise the FCC for poor technical writing. I was looking around for a few details about cell-phone networks and I found this page of “Cell Phones FAQs” from “The FCC Kids [sic] Zone”. You may wish to look at the answer for “How Does a Cell Phone Work” and count how many errors and imprecise statements you can find in one paragraph. (For extra credit, read the other answers if you can stand it.) There are many irritants, but I could start with the absurdity of using the copper-wire, home phone system as the conceptual basis, since kids hardly use that archaic communication system anymore.
Then there’s this statement:
A cell phone turns your voice into a special type of electricity and sends it over the air to a nearby cell tower; the tower sends your voice to the person you are calling.
Calling propagating electromagnetic waves a “special type of electricity” is incorrect and unnecessary, an egregious error. Saying the tower “sends your voice” is no better. Despite what this FCC author seemed to think, it’s entirely possible to skip pages of detail about the time-slice multiplexing and analog-signal digitization (most cell networks these days are digital) used to carry a conversation over the network and still get it right, without the stupid and inaccurate “send your voice” gambit.
These are just the sort of gratuitous and imprecise over-simplifications about science and technology that drive me into a frenzy and that I have vowed Ars Hermeneutica will combat. If any of my four regular readers happens to know someone at the FCC, have them get in touch and we can straighten out these things before any more bad ideas get into kids’ (note the apostrophe) heads.
———-
# Cell phones also use a different frequency for transmitting than for receiving, but that’s a needless conceptual complication at this stage.
* There are details, naturally. The 7.6 million number is the square of 2,750, because radiated electromagnetic power diminishes as the square of the distance. However, satellite communications is possible with these geosynchronous satellites because their receivers have much higher gain (i.e., can hear much weaker signals) than terrestrial cell sites. They also are much too far away to be able to break an urban region up into cells and distinguish calls from different cells, let alone transmit in different cells, but that’s a whole other story.
Posted by jns on 30 November 2007
I learned about it from Science News Online (here), but evidently it has been on its way to becoming a mini-phenomenon since it was posted on YouTube in June, 2007.
It’s a short animation of some mathematical concepts, called “Moebius Transformations Revealed“. To quote from the creators’ website (here):
Möbius Transformations Revealed is a short video by Douglas Arnold and Jonathan Rogness which depicts the beauty of Möbius transformations and shows how moving to a higher dimension reveals their essential unity.
It does do what it claims it does, although it doesn’t go so far as to suggest what we learn from understanding their essential unity, nor what we might do with our new understanding. Perhaps I’ll have that “aha!” later on.
Nevertheless, it’s a very pretty little film (2:32 long), and the music (Robert Schumann, a movement from Kinderscenen, for piano) seems unusually well suited.
But it’s better that you just have a look rather than listen to me talk about it.
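For readers who want to poke at the math themselves, here is a tiny sketch of my own (not from the video's makers) of what a Möbius transformation is: a map of the complex plane f(z) = (az + b)/(cz + d) with ad - bc ≠ 0.

```python
# A Moebius transformation of the complex plane: f(z) = (a*z + b) / (c*z + d),
# where the coefficients must satisfy a*d - b*c != 0 or the map degenerates.
def moebius(z, a, b, c, d):
    assert a * d - b * c != 0, "degenerate coefficients"
    return (a * z + b) / (c * z + d)

# Example: the inversion z -> 1/z is the map with (a, b, c, d) = (0, 1, 1, 0).
print(moebius(2 + 0j, 0, 1, 1, 0))  # (0.5+0j)

# Composing two Moebius maps gives another Moebius map, whose coefficients
# come from multiplying the 2x2 matrices [[a, b], [c, d]] -- one small hint
# of the "essential unity" the video alludes to.
z = 3 + 4j
m1 = (1, 2, 0, 1)   # translation: z -> z + 2
m2 = (0, 1, 1, 0)   # inversion:   z -> 1/z
a1, b1, c1, d1 = m1
a2, b2, c2, d2 = m2
composed = (a1 * a2 + b1 * c2, a1 * b2 + b1 * d2,
            c1 * a2 + d1 * c2, c1 * b2 + d1 * d2)
assert abs(moebius(moebius(z, *m2), *m1) - moebius(z, *composed)) < 1e-12
```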
Posted by jns on 21 November 2007
From this week’s Physics News Update* a note about physicist Luis Alvarez, to whom all things were interesting. In case you’ve ever wondered about the source of the hypothesis that the dinosaurs were wiped out by a meteorite, read on.
EGYPTIAN PYRAMIDS, DINOSAUR EXTINCTION, THE JFK ASSASSINATION: all were studied by Berkeley physicist Luis Alvarez. Alvarez won a Nobel Prize for his discovery of new particles using a bubble chamber, but some of his fame comes from his work applying physics principles and methods outside the normal physics-research world. In the November issue of the American Journal of Physics, Charles Wohl of the Lawrence Berkeley National Lab looks at three notable examples of Alvarez’s extracurricular effort.
(1) To search for possible hidden chambers in the Chephren pyramid in Cairo, one of the three great pyramids built in the third millennium BCE, Alvarez designed an experiment in which cosmic rays would strike a detector set up inside a known chamber beneath the pyramid. Observing the penetrating muons from cosmic-ray showers, this detector would discern any intervening empty spaces in the overlying pyramid structure. The upshot: no hidden chambers.
(2) In scrutinizing the so called “Zapruder film”, a short filmed sequence that caught the assassination in progress, experts had been puzzled by the backwards jerk of President Kennedy’s head after one of the bullet impacts. Some took this to be evidence for another assassin shooting from in front of the president’s car. Alvarez and some of his colleagues performed impromptu experiments at a shooting range, and also considered the conservation of momentum and the forward-moving matter from the wound. From this they concluded that the movie sequence was consistent with a shot coming from the rear.
(3) Most famous of all was Alvarez’s hypothesis, made in collaboration with his son Walter Alvarez, that a thin but conspicuous layer of the otherwise rare element iridium in numerous places around the world, all at a geological stratum corresponding to the boundary between the Cretaceous and Tertiary periods (the KT boundary), signified a large asteroid impact at that time. This impact, it was further thought, cast enough dust into the air for a long enough time as to kill off many living things, including a large portion of the dinosaurs.
———-
* Phillip F. Schewe, Physics News Update, The American Institute of Physics Bulletin of Physics News, Number 847, 20 November 2007.
Posted by jns on 1 November 2007
While some vaguely scientific notions are passing through my head, here’s a clipping from Physics News. It came as a bit of a surprise to me. I spent most of my laboratory research life doing stuff that came, in one way or another, under the general heading of “thermodynamics”, and yet it never occurred to me to wonder whether we needed to develop a relativistic theory of thermodynamics. Relativistic electrodynamics, sure. Relativistic quantum mechanics, obviously. But relativistic thermodynamics? I guess the worm goes to the bird who thinks of worms first.
What would relativistic thermodynamics be about? The main question would be whether the temperature of some mutually observed object would be measured the same by two different observers, each in a different inertial (i.e., unaccelerated) reference frame. Care to make a guess?
RELATIVISTIC THERMODYNAMICS. Einstein’s special theory of relativity has formulas, called Lorentz transformations, that convert time or distance intervals from a resting frame of reference to a frame zooming by at nearly the speed of light. But how about temperature? That is, if a speeding observer, carrying her thermometer with her, tries to measure the temperature of a gas in a stationary bottle, what temperature will she measure? A new look at this contentious subject suggests that the temperature will be the same as that measured in the rest frame. In other words, moving bodies will not appear hotter or colder. You’d think that such an issue would have been settled decades ago, but this is not the case.
Einstein and Planck thought, at one time, that the speeding thermometer would measure a lower temperature, while others thought the temperature would be higher. One problem is how to define or measure a gas temperature in the first place. James Clerk Maxwell in 1866 enunciated his famous formula predicting that the distribution of gas particle velocities would look like a Gaussian-shaped curve. But how would this curve appear to be for someone flying past? What would the equivalent average gas temperature be to this other observer?
Jorn Dunkel and his colleagues at the Universitat Augsburg (Germany) and the Universidad de Sevilla (Spain) could not exactly make direct measurements (no one has figured out how to maintain a contained gas at relativistic speeds in a terrestrial lab), but they performed extensive simulations of the matter. Dunkel (joern.dunkel@physik.uni-augsburg.de ) says that some astrophysical systems might eventually offer a chance to experimentally judge the issue. In general the effort to marry thermodynamics with special relativity is still at an early stage. It is not exactly known how several thermodynamic parameters change at high speeds. Absolute zero, Dunkel says, will always be absolute zero, even for quickly-moving observers. But producing proper Lorentz transformations for other quantities such as entropy will be trickier to do. (Cubero et al., Physical Review Letters, 26 October 2007; text available to journalists at www.aip.org/physnews/select)
[The American Institute of Physics Bulletin of Physics News, Number 843 October 18, 2007 by Phillip F. Schewe]
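As a footnote of my own: the Lorentz transformations the excerpt mentions all involve the factor gamma = 1/sqrt(1 - v²/c²). A small sketch shows how sharply that factor grows near the speed of light; per the reported result, temperature, unlike time and distance intervals, would carry no such factor.

```python
import math

# The Lorentz factor gamma = 1 / sqrt(1 - v^2/c^2) governs how time and
# distance intervals transform between frames in special relativity.
# (Temperature, according to the result reported above, would not pick
# up this factor at all.)
def lorentz_gamma(v_over_c):
    return 1.0 / math.sqrt(1.0 - v_over_c ** 2)

for beta in (0.1, 0.5, 0.9, 0.99):
    print(f"v = {beta}c  ->  gamma = {lorentz_gamma(beta):.3f}")
```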
Posted by jns on 24 October 2007
Bubbles seem to be on the net’s mind today. I haven’t kept all the references (here’s one: “Scientists map near-Earth space bubbles“) but it seemed that I kept reading things involving bubbles.
Now, I’ve long been fascinated by bubbles although, despite my being a scientist with a history of doing some hydrodynamics and a relatively keen interest in things dealing with buoyancy, I’ve never worked on bubbles. This is odd, because I’ve long had a question about bubbles that I’ve never answered — probably because I never spent any time thinking about it. Now, apparently, I have to give it some thought.
My question is not particularly well formed, which makes it not a terribly good scientific question.* Nevertheless, I’ve always wondered what it is that determines the bubble size in effervescent drinks. In all classes of drinks with bubbles — soda, beer, champagne, effervescent water — there are those that have smaller bubbles and those that have larger bubbles. As a rule, I tend to prefer smaller bubbles, by the way.
The last time this question crossed my mind and there were people around who might agreeably talk about the issue, we talked about it but came to no conclusion. I think our problem was that we were imagining that it was the effect of bottle size or the shape of a bottle’s neck or some similar incidental phenomenon that determined bubble size, otherwise assuming that all carbonated beverages were equal. It now seems to me that making that assumption was rather naive and silly.
My thought today is that the bubble size is determined locally in the fluid, that is to say by the physical-chemical environment in the immediate vicinity of the bubble.# It seems to me that there are two questions to consider on our way to the answer:
- Why do bubbles grow?
- Why would a bubble stop growing?
The answer to the second question, of course, might be implicit in the answer to the first.
To begin: what is a bubble? Bubbles happen in mixtures of two or more substances when little pockets of stuff-B collect inside predominantly stuff-A. That is to say, we usually see bubbles of stuff-B when there’s only a little bit of stuff-B mixed in with a whole bunch of stuff-A.
To be concrete, let’s talk about bubbles of carbon dioxide (CO2) in effervescent water, say “San Benedetto” brand.% Now, bubbles have very few features of any physical importance: their size is an important one, as is the interface between the inside and outside of the bubble, which we might call the surface of the bubble. Some chemical properties of the stuff inside and outside the bubble might matter, too, but let’s save that for a moment.
Now, imagine that we start observing through our clear-glass bottle of San Benedetto water before we take the cap off. The water is clear and there are no bubbles. However, we are aware that there is CO2 dissolved in the water. In fact, we know that (in some sense) the CO2 is under pressure because, when we unscrew the cap, we hear a “pffft” sound. Not only that, bubbles instantly appear and float up to the surface of the water, where they explode and make little “plip plip plip” sounds.
What makes the bubbles grow? They start out as microscopically tiny things,** gradually grow bigger, until they are big enough to float to the top of the bottle and explode.
To be more precise, bubbles of CO2 grow if it takes less energy to let a CO2 molecule into the bubble across its surface than it costs to keep it outside the bubble. (This is a physicist’s way of looking at the problem, to describe it in terms of the energy costs. A chemist might talk about it differently, but we end up describing the same things.)
What causes the bubble to float up? That’s a property called buoyancy, the name of the pseudo-force that makes things float. It’s determined by the relative density of the two substances: less dense stuff floats up, more dense stuff floats down. Notice that the use of “up” and “down” requires that we have gravity around — there is no buoyancy in orbit around the Earth, for instance.
Bubbles, then, will start to float up when they get big enough that they have enough buoyancy to overcome the forces that are holding them in place. That basically means overcoming drag produced by the water’s viscosity, which works kind of like friction on the bubble — but that’s a whole other story, too.
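To make the buoyancy-versus-drag picture concrete, here is a rough sketch using the textbook Stokes-drag balance. A caveat of my own: Stokes drag is strictly valid only for very small, slowly rising bubbles, so for soda-sized bubbles treat this as an order-of-magnitude toy, not a real calculation.

```python
# Order-of-magnitude sketch: balance buoyancy against Stokes (viscous) drag
# to estimate a bubble's terminal rise speed,
#     v = (2/9) * delta_rho * g * r**2 / mu.
# Valid only at low Reynolds number, i.e. for very small, slow bubbles;
# real soda bubbles exceed that regime, so this is a toy estimate.
G = 9.81            # m/s^2
RHO_WATER = 1000.0  # kg/m^3 (the gas density inside the bubble is negligible)
MU_WATER = 1.0e-3   # Pa*s, viscosity of water

def stokes_rise_speed(radius_m):
    return (2.0 / 9.0) * RHO_WATER * G * radius_m ** 2 / MU_WATER

for r_mm in (0.1, 0.5, 1.0):
    v = stokes_rise_speed(r_mm * 1e-3)
    print(f"radius {r_mm} mm -> roughly {v:.3f} m/s")
```

Note how strongly the speed depends on size (as the radius squared): tiny bubbles barely move, which is why they can linger and grow before floating away.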
Are we near the answer about bubble size yet? Well, here’s one possibility: bubbles grow until they are big enough to float up to the surface and pop, so maybe they don’t get bigger because they shoot up and explode before they have the chance.
I don’t like that answer for a few reasons. We began by observing that different brands had different size bubbles, meaning different size bubbles exploding at the surface, so something is going on besides the simple matter of CO2 dissolved in water, or else they would all have the same size bubbles. You can look at the bubbles floating up from the bottom and see that they are different sizes in different products to begin with, and they don’t change size much on the way up. Thus: bubbles grow very quickly to their final size, and their size does not seem limited by how long it takes them to float up. Another reason: sometimes bubbles get stuck to the sides of the bottle, but they don’t sit there and grow to arbitrary sizes; instead, they tend to look much like the freely floating bubbles in size.
There could be a chemical difference, with different substances in the drink limiting the size of the bubbles for some chemical reason. Certainly that’s a possibility, but it’s not what I’m interested in here because I want to explain how there can be bubbles of such obviously different sizes in different brands of what is pretty basically water with insignificant chemical differences in their impurities.
That seems to leave us with one option: the energy balance between the inside and outside of the bubble. Whatever it was that caused it to be favorable for dissolved CO2 to rush into the bubble at first has some limit. There is some reason that once the bubble reaches a particular size, the energy balance no longer favors CO2 crossing preferentially from water to bubble interior, and the bubble stops growing.
Now we can look at the “energy balance” matter a little more closely. Osmosis describes how stuff-A moves relative to stuff-B across a barrier (in this case the surface of the bubble); osmotic pressure is the apparent force that causes one substance to move relative to the other across the interface.
The forces at work keeping the bubble in shape are two. One is the osmotic pressure inside the bubble, where there is a significantly higher concentration of CO2 than in the fluid outside, which creates an outward pressure at the bubble’s surface. The other is the pressure of the water on the surface of the bubble, trying to keep it from expanding.
So, it looks like the size of the bubbles is determined by a balance between water pressure — determined by the water’s density and gravity — and osmotic pressure inside the bubble, which is caused by the relative concentrations of CO2 inside and outside the bubble.
It would seem, then, that bubbles grow in size until the water pressure on the bubble, which is trying to squeeze it smaller, matches the osmotic pressure of the CO2 inside the bubble, which is trying to expand the bubble.
This works for me, because I know that the osmotic pressure of the CO2 is going to depend on how much CO2 was dissolved in the water to begin with. My conclusion, roughly speaking: drinks with bigger bubbles had more carbon dioxide dissolved in them to start with than drinks with smaller bubbles. At least, that’s my working hypothesis for now, disregarding lots of other possible effects in bubbles. I’ll have to do some experiments to see whether it holds up.
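For scale, here are my own back-of-envelope numbers on the water-pressure side of that balance: hydrostatic pressure at depth h is atmospheric pressure plus rho*g*h, and over bottle-sized depths that extra term barely registers, which already hints that depth in the bottle isn't what sets bubble size.

```python
# Back-of-envelope sketch (mine, not a careful analysis): the water pressure
# on a bubble at depth h is P = P_atm + rho * g * h. Over the depths inside
# a bottle, the rho*g*h term is a tiny correction to atmospheric pressure.
P_ATM = 101_325.0    # Pa, standard atmospheric pressure
RHO_WATER = 1000.0   # kg/m^3
G = 9.81             # m/s^2

def water_pressure(depth_m):
    return P_ATM + RHO_WATER * G * depth_m

for h in (0.05, 0.10, 0.25):  # plausible depths in a bottle, in meters
    extra = water_pressure(h) - P_ATM
    print(f"depth {h:4.2f} m: only {extra:6.0f} Pa above atmospheric")
```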
This is just my answer for the moment, my provisional and incomplete understanding. It’s not a subject you can just look up on Wikipedia and be done with. (I didn’t find the articles at about.com that discussed bubbles in sodas credible.) I found several references (one, two, three) that, in journalistic fashion, touted the research of the University of Reims’ Gerard Liger-Belair as “Unlocking the Secrets of Champagne Bubbles”, but in fact it was a contribution to nucleation and had nothing to offer about bubble size. The third reference, by the way, has a glaring error in its second sentence, and the rest of the text suggests that the author of the story had little understanding of the subject.
Here, in fact, is a piece about bubbles by Liger-Belair (mentioned in the last paragraph), called “Effervescence in a glass of champagne: A bubble story“. It’s a nice read but it skirts ever so gracefully past the question of bubble size.
As we say in the biz: more research is indicated.
———-
* My usual contention being that most of the work of finding the answer lies in asking a good question.
# What might “immediate vicinity” mean, you ask? In physicist fashion, I’m going to suggest that length-scales in the problem will be roughly determined by the size of the bubbles, so let’s take “immediate vicinity” to mean anything within 1, or 2, or maybe 3 bubble radii. (On closer examination we’d have to consider thermal diffusivity and mass-species diffusivity and such things, but that’s for a more sophisticated analysis.)
%“San Benedetto” is the brand of effervescent water that we prefer here at Björnslottet, in case you were wondering.
**But why? Bubbles first start (“nucleate”) either around small impurities or bits of dust in the fluid, or just from fluctuations in the local concentration of bubble stuff. Let’s leave that as an interesting question for another time and just assume they get started somehow.
Posted by jns on 23 October 2007
For those who enjoyed the pictures a few days ago of undular bores — atmospheric waves visible in clouds — here are a few more treats via NASA’s Earth Observatory project.
This time the waves are in the atmosphere off the west coast of Africa, in a couple of satellite photos captured by the Moderate Resolution Imaging Spectroradiometer (MODIS) on NASA’s Terra and Aqua satellites on October 9, 2007. Take a few moments to read the informative text on the page, too.
(While you’re there, you might want to look at the dramatic satellite photos of the wildfires in Southern California.)
Posted by jns on 17 October 2007
Here is “Science @ NASA” again, sending me another interesting story with pretty pictures. This one — they say for shock value — is about “undular bores”. What they’re talking about is waves in the atmosphere that show up dramatically in cloud patterns.
I have a personal interest in all things wave-like because waves were one of the things I studied in my former life as a physicist. Waves in many forms interested me, too, since I studied fluid motions, in which one can find physical waves, and field phenomena, in which the waves are mathematically abstract but still represent real phenomena.
In looking at these wave pictures coming up, it might be fun to know that in the ocean — or on any water surface — waves come in two types, call them “waves” (the big ones) and “ripples” (the little ones). Ripples are about finger sized, while waves are more body sized. Both are caused by disturbances, generally wind, but they undulate for different reasons. Ripples ripple because of the surface tension of the water; waves wave because of gravity, or buoyancy, in the water. Most water waves in the ocean, and those that break on shorelines, are generated by strong winds associated with storms that can be thousands of miles away.
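The finger-sized versus body-sized split isn't arbitrary: for water there is a crossover wavelength below which surface tension dominates the restoring force and above which gravity does. A quick sketch with standard textbook values for water (my numbers, not from the NASA piece):

```python
import math

# The ripples-vs-waves crossover: surface tension restores short waves
# (ripples), gravity restores long ones (waves). The crossover wavelength
# is lambda_c = 2*pi*sqrt(sigma / (rho * g)) -- roughly finger-sized
# for clean water, matching the rule of thumb in the text.
SIGMA = 0.072   # N/m, surface tension of clean water (assumed value)
RHO = 1000.0    # kg/m^3, density of water
G = 9.81        # m/s^2

lambda_c = 2 * math.pi * math.sqrt(SIGMA / (RHO * G))
print(f"crossover wavelength: {lambda_c * 100:.1f} cm")  # about 1.7 cm
```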
Anyway, air also has buoyancy (remember: hot air rises, cool air sinks — the hot air is buoyant relative to the cool) and can sometimes show some very large scale waves, with wavelengths of about a mile. When there are clouds around they can make the waves dramatically visible. They can be caused by storms moving about as high-pressure centers collide with low-pressure centers and hot-air masses encounter cool-air masses.
Here’s Tim Coleman of the National Space Science and Technology Center (NSSTC) in Huntsville, Alabama:
“These waves were created by a cluster of thunderstorms approaching Des Moines from the west,” he explains. “At the time, a layer of cold, stable air was sitting on top of Des Moines. The approaching storms disturbed this air, creating a ripple akin to what we see when we toss a stone into a pond.”
Undular bores are a type of “gravity wave”—so called because gravity acts as the restoring force essential to wave motion. Analogy: “We’re all familiar with gravity waves caused by boats in water,” points out Coleman. “When a boat goes tearing across a lake, water in front of the boat is pushed upward. Gravity pulls the water back down again and this sets up a wave.”
[excerpt from "Giant Atmospheric Waves Over Iowa", Science @ NASA for 11 October 2007.]
There are two gorgeous visuals to see by clicking the link. The first is a video of the undular bores over Iowa. I’d suggest watching the animated gifs rather than the video, because the gifs extract just the interesting parts between 9:25 and 9:45, and they loop continuously.
Then, just below that, don’t miss the radar photograph, which shows the train of waves with stunning clarity.
Now, as if that weren’t enough, I found this cool video on YouTube, intended to demonstrate “Gravity Waves” in the atmosphere, of which it is a beautiful example. But wait! Incredibly, this video also shows undular bores, over Iowa (but Tama, rather than Des Moines), as recorded by the same television station, KCCI!
Oct
17
Posted by jns on 17 October 2007
This just in from “Science @ NASA”:
Newly assembled radar images from the Cassini spacecraft are giving researchers their best-ever view of hydrocarbon lakes and seas on the north pole of Saturn’s moon Titan, while a new radar image reveals that Titan’s south pole also has lakes.
Approximately 60 percent of Titan’s north polar region (north of 60° latitude) has been mapped by Cassini’s radar. About 14 percent of the mapped region is covered by what scientists believe are lakes filled with liquid methane and ethane:
The mosaic image was created by stitching together radar images from seven Titan flybys over the last year and a half. At least one of the pictured lakes is larger than Lake Superior.
[excerpt from "New Lakes Discovered on Titan", Science @ NASA, 12 October 2007.]
Isn’t that fascinating: “hydrocarbon lakes” filled with “liquid methane and ethane”!
The photograph accompanying the press release is really quite lovely — it’s what attracted my attention in the first place. Follow the link above to see the photomosaic.
Oct
04
Posted by jns on 4 October 2007
I nearly let pass this notable milestone: 50 years ago today the Soviet Union* launched the first artificial Earth-satellite, called Sputnik. It was a tiny thing — suited, I suppose, to being the first baby of the birth of the space age — just 23 inches across and weighing only 184 pounds. It was made of shiny polished aluminum, so that it reflected sunlight and was easy to see from Earth. It carried two radio transmitters that emitted continuous signals that didn’t say anything, not that they had to. The message was obvious.
Launching a satellite, in principle, is a simple thing. Point it in the right direction, accelerate it to a speed of something like 11 km/s (or about 7 miles/s)# and it goes into orbit around the Earth. In practice this is not so easy. It takes a lot of rocket fuel to accelerate even 184 pounds to a speed near 7 miles/second, and that fuel takes more fuel to accelerate it, and that fuel takes more fuel to accelerate it, and so on.& After you figure all that out, you end up with a very tall, multi-stage rocket that is very impressive when it takes off, even for the smallest payloads.**
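If you want to play with the numbers yourself, here is a small Python sketch of the circular-orbit and escape speeds, plus that fuel-to-accelerate-the-fuel compounding, which is exactly the exponential of the rocket equation. The 2.5 km/s exhaust speed is just a plausible round number I picked for illustration.

```python
import math

G = 6.674e-11        # gravitational constant, m^3/(kg s^2)
M_earth = 5.972e24   # mass of the Earth, kg
R_earth = 6.371e6    # mean radius of the Earth, m

# Speed for a circular orbit just above the surface, and escape speed
v_orbit = math.sqrt(G * M_earth / R_earth)   # about 7.9 km/s
v_escape = math.sqrt(2.0) * v_orbit          # about 11.2 km/s

def propellant_mass(delta_v, v_exhaust, payload):
    """Tsiolkovsky rocket equation, solved for propellant: the fuel mass
    needed to give `payload` a velocity change `delta_v` when the engine's
    exhaust speed is `v_exhaust`. Note the exponential."""
    return payload * (math.exp(delta_v / v_exhaust) - 1.0)

# A Sputnik-sized payload (184 lb is about 84 kg), exhaust speed 2.5 km/s
fuel = propellant_mass(8000.0, 2500.0, 84.0)   # roughly two tonnes
```

The exponential is why every extra pound of payload costs so many pounds of rocket: here some 84 kg of satellite wants on the order of two tonnes of propellant, before you even account for the tanks and engines that carry it.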
Then there’s all that goes into getting all the stuff to the launch-pad so it can take off. There’s a remarkable amount of engineering, mission planning, fabrication, transportation, and organization that goes into one of these events, and they only got bigger as the missions got more sophisticated. A modern space-shuttle launch comes at the end of years of planning and months of preparing the payloads; the launch itself involves hundreds of people at locations scattered around the world.
And it all started with that tiny little Sputnik. I was not quite two years old at the time, so I don’t remember its happening. I didn’t have any memorable artificial-satellite experiences until I went outside one night to see a transit of an Echo communications satellite some years later.
It surely affected my life, though. Sputnik was so alarming to the powers in Washington — perhaps to the average American, too — that we, the entire country, suddenly developed a keen, new interest in science and engineering, and in science, engineering, and mathematics education, and I was undoubtedly a product of that. When people today wring their hands about a shortage of scientists and engineers — which hasn’t been true for decades — I imagine it’s an echo from that time.
People looking to justify our commitment to sending a man to the moon thought of all sorts of alleged “spin-offs” from the space program, and proclaimed the marvels of Tang, Teflon, and Velcro, none of which were invented by NASA, nor invented for NASA. Computer systems and microelectronics got some boost, but the average computer user today would be shocked to see the primitive computer hardware that got Neil Armstrong to the moon.
One of the things that was touted as an accomplishment of NASA, a spin-off of the moon program, was project management. I think that may be a real contribution. My experience from doing a couple of space-shuttle missions is that the planning process is neither fast nor particularly efficient, but it accomplishes its goals with deliberation and thoroughness. That care and deliberation has suffered some in recent years, perhaps a result of political and management hubris that believed we must know how to cut corners by now.
As a product of the Sputnik age, I take the growth of modern technology and America’s leading role in developing it rather for granted, but it’s far from established that we shall always be the leader. I believe that our remarkable achievements from the 80s and 90s in developing the personal computer, for instance, resulted from the investment our country made in science and technology education in the 60s, coupled with national interest, motivation, and pride.
Those emotions and commitments take nurturing; they mustn’t be taken for granted or they wither. I fear that that’s been happening in recent years, and that our complacency will catch up with us if we do nothing about it. The renewal won’t be fast, because it takes new generations to grow into it, although current generations can do the plowing and fertilizing.
That’s part of the reason that I started Ars Hermeneutica, Limited in 2004, and that’s the big motivation behind our vision of a scientifically literate America.
I didn’t set out to write this as a justification or a motivational piece or an advertisement — or even as a fund-raising appeal## — but I guess these all have one thing in common: that I care deeply about them.
—–
*Which, one notes in passing, no longer exists. Things change, and even countries don’t last forever.
#The speeds are near the escape velocity from Earth, which is a bit more speed than is needed to establish an orbit, but it gives an idea of the speeds involved.
&It’s not an infinite sum — the sequence does converge, and it has an exponential form, for roughly the same reason that the equation for compound interest has an exponential form. If you want details, Google “rocket equation”.
**Note, however, that there are big differences in actual acceleration depending on the payload and the rocket chosen to launch it. Those of us accustomed to the Saturn V rockets launching an Apollo mission, or the rockets for shuttle launches, imagine a stately launch in which the heavy payloads seem like they’re never going to move, then finally stroll off into the wild blue yonder. With that in mind, the launch I once saw of a sounding rocket, which doesn’t even attain orbit, came as a surprise: it jumped off its launch-pad like a startled rabbit.
## Although, it bears repeating that Ars Hermeneutica is a 501(c)(3) tax-exempt corporation, and contributions are tax deductible. Click to see how to Support Ars Hermeneutica.