Speaking of Science

The Scienticity Blog

Archive for the ‘All’ Category


Two More Bites of Pi

Posted by jns on September 23, 2007

I can’t help myself now. I’ve just read through another paper by some of the \pi-algorithm people*, and they provide two fascinating equations from the history of computing \pi. Although they have been used in practice, my purpose here is just to look at them in amazement.

This first one is an odd and ungainly expression discovered by the Indian mathematical genius Ramanujan (1887–1920):#

 \Large\frac{1}{\pi} \quad = \quad \frac{\sqrt{8}}{9801}\ \sum_{n=0}^{\infty} \ \frac{(4n)!}{(n!)^4}\ \frac{[1103\ +\ 26390n]}{396^{4n}}

One extraordinary fact about this series is that it converges extremely rapidly: each additional term adds roughly 8 digits to the decimal expansion.
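To watch that convergence for yourself, here is a minimal Python sketch of the series (my own translation, using the mpmath arbitrary-precision library; nothing in the cited paper prescribes this code):

from mpmath import mp, mpf, factorial, sqrt

mp.dps = 60  # work with 60 decimal digits of precision

def ramanujan_pi(terms):
    # 1/pi = (sqrt(8)/9801) * sum over n of (4n)!(1103 + 26390n) / ((n!)^4 396^(4n))
    s = mpf(0)
    for n in range(terms):
        s += factorial(4*n) * (1103 + 26390*n) / (factorial(n)**4 * mpf(396)**(4*n))
    return 1 / (sqrt(8) / 9801 * s)

for terms in (1, 2, 3):
    print(terms, ramanujan_pi(terms))  # each added term agrees with pi to roughly 8 more digits

Even the one-term “partial sum” already gives 3.1415927…, correct to six decimal places.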

The second has been developed much more recently by David and Gregory Chudnovsky (universally called “The Chudnovsky Brothers”) and used in their various calculations of \pi. In 1994 they passed the four-billionth digit.& This is their “favorite identity”:

 \Large\frac{1}{\pi}\quad = \quad 12\ \sum_{n=0}^{\infty} (-1)^n\ \frac{(6n)!}{(n!)^3 (3n)!}\ \frac{13591409\ +\ 545140134\,n}{(640320^3)^{n + 1/2}}
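The same kind of sketch works here, under the same assumptions as the snippet above; each term of this series contributes roughly 14 new digits:

from mpmath import mp, mpf, factorial

mp.dps = 60

def chudnovsky_pi(terms):
    # 1/pi = 12 * sum over n of (-1)^n (6n)!(13591409 + 545140134n)
    #        / ((n!)^3 (3n)! (640320^3)^(n + 1/2))
    s = mpf(0)
    for n in range(terms):
        s += ((-1)**n * factorial(6*n) * (13591409 + 545140134*n)
              / (factorial(n)**3 * factorial(3*n) * mpf(640320)**(3*n + mpf(3)/2)))
    return 1 / (12 * s)

for terms in (1, 2, 3):
    print(terms, chudnovsky_pi(terms))  # roughly 14 new correct digits per term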

———-
* D.H. Bailey, J.M. Borwein, and P.B. Borwein, “Ramanujan, Modular Equations, and Approximations to Pi or How to Compute One Billion Digits of Pi”, American Mathematical Monthly, vol. 96, no. 3 (March 1989), pp. 201–219; reprint available online.

# One purpose of the paper was to show how this formula is related — deviously, it turns out, although a mathematician would say “straightforward” after seeing the answer — to things called elliptic functions.

&Okay, it was 18 May 1994 and the number of digits they calculated was 4,044,000,000. They used a supercomputer that was “largely home-built”. This record number of digits did not remain a record for long, however.


Why Pi?

Posted by jns on September 23, 2007

As a little gloss to the previous entry on calculating \pi, I’m finally reading the entertaining and enlightening article “The Quest for Pi” and find this memorable observation in answer to the question of why people persist in calculating \pi to billions of digits:

Certainly there is no need for computing π to millions or billions of digits in practical scientific or engineering work. A value of \pi to 40 digits would be more than enough to compute the circumference of the Milky Way galaxy to an error less than the size of a proton.

[David H. Bailey, Jonathan M. Borwein, Peter B. Borwein, and Simon Plouffe, "The Quest for Pi", Mathematical Intelligencer, vol. 19, no. 1 (Jan. 1997), pp. 50–57; reprint available online.]


A Big Piece of Pi

Posted by jns on September 22, 2007

How innocently it all begins, sometimes. For some reason a day or two ago I decided I would take a few google-moments and look into modern computational formulæ that are used to calculate the value of \pi. What a loaded question that turned out to be! Before I’d reached a point to pause and write — and I’m not confident that I’m there yet — I’ve read several mathematics papers, installed software so that I could type equations in the blog [1], thought more about number theory and complex analysis than I have for years, and even given Isaac a short lecture on infinite series with illustrative equations written on the Taco Bell tray liner. Poor Isaac.

Suppose you want to calculate the value of \pi. The number \pi, you may recall, is transcendental and therefore irrational — serious categories of numbers whose details don’t matter much except to remember that \pi, written as a decimal number, has digits after the decimal point that never repeat and never end. Most famously \pi relates the circumference of a circle, C, to its diameter, D, in this manner:

C\quad= \quad \pi\ \times \ D \quad .

There are also a surprising number of other mathematical equations involving \pi that have no obvious relationship with circles. One very important and useful type of relationship involves infinite series. Infinite series are written in a very compact mathematical notation that will look very inscrutable if it’s unfamiliar, but don’t be alarmed, please, because the idea is relatively simple. Here’s an example:

\sum_{n=1}^{\infty} \,\frac{1}{n^2}\quad = \quad\frac{1}{1}\ +\ \frac{1}{4}\ +\ \frac{1}{9}\ +\ \frac{1}{16}\ +\ \cdots

The big Greek letter, capital sigma (\Sigma), is the symbol that signals the series operation. The letter n underneath the sigma is the index. The index is set in sequence to a series of integer values, in this case starting at one (because n = 1) and ending with infinity, as specified by the symbol on top. Then, for each value of n = 1,\ 2,\ 3,\ \ldots, the letter n is replaced with that number in the mathematical expression following the big sigma, and the successive terms are added together, the operation suggested by the sequence of fractions following the equals sign. (Feel free to look at the equation and think about it and look and think, if it’s new to you. Mathematics is rarely meant to be read quickly; it takes time to absorb.)
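If the notation still resists, it may help to see that the big sigma is nothing more than a loop. Here is the same sum written out in a few lines of Python (an illustrative sketch of mine, not anything from the sources):

def partial_sum(N):
    # sum of 1/n^2 for n = 1, 2, 3, ..., N: the sigma made explicit
    total = 0.0
    for n in range(1, N + 1):   # the index n runs from 1 up to N
        total += 1 / n**2       # the expression after the big sigma
    return total

print(partial_sum(4))  # 1/1 + 1/4 + 1/9 + 1/16 = 1.4236...

The infinite series itself is what you would get by letting that loop run forever.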

The other important idea about the series that we have to look at is the idea of convergence, a property of the series meaning that as one adds on each new term the result (called the partial sum, for obvious reasons) gets closer and closer to some numerical value (without going over!). It is said to converge on that value, and that value is known as the limit of the successive partial sums. [2]

Provided that the series converges, the limit of the partial sums is usually treated simply as the value of the series, as though one could indeed add up the infinite number of terms and get that value.

It turns out that the series we wrote above (the solution to the so-called “Basel problem”) does indeed have a value (i.e., the partial sums converge to a limit) — a rather remarkable value, actually, given our interest in \pi :

\sum_{n=1}^{\infty} \,\frac{1}{n^2}\quad = \quad\frac{\pi^2}{6} \quad .

Really, it does. [3]
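You can watch the partial sums creep toward that limit with the partial_sum sketch from above (again, just my illustration):

import math

for N in (10, 100, 1000, 10000):
    print(N, partial_sum(N))   # 1.5497..., 1.6349..., 1.6439..., 1.6448...
print(math.pi**2 / 6)          # 1.6449340668...

Notice how slowly it creeps: the error shrinks only like 1/N, so you would need about a million terms to get six good digits.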

This equation also gives us a good hint of how the value of \pi is calculated, in practical terms, and this has been true since at least the time of Newton, whether one is calculating by hand or by digital computer. One finds a convenient expression that involves an infinite series on one side of the equals sign and \pi on the other side and starts calculating partial sums. The trick is to find a particularly clever series that converges quickly, so that each new term added to the partial sum gets you closer to the limit value as fast as possible.

It should come as no surprise that there are an infinite number of equations involving \pi with reasonable convergence properties that could be used to calculate its value, and an astounding number that have actually been used to do that. [4]

It may also be no great surprise to hear that new equations are discovered all the time, although a new type of equation is rather more rare. Now, this part is a bit like one of those obscure jokes where we have to have some background and think about it before we get it. [5]

This is where my innocent investigation into calculating \pi took an unexpected turn. Let me quote the fun introduction to a paper by Adamchik and Wagon [6] :

One of the charms of mathematics is that it is possible to make elementary discoveries about objects that have been studied for millennia. A most striking example occurred recently when David Bailey of NASA/Ames and Peter Borwein and Simon Plouffe of the Centre for Experimental and Computational Mathematics at Simon Fraser University (henceforth, BBP) discovered a remarkable, and remarkably simple, new formula for \pi. Here is their formula:

\pi \quad=\quad \sum_{k=0}^{\infty} \frac{1}{16^k}\ \left(\frac{4}{8k+1}\ -\ \frac{2}{8k+4}\ -\ \frac{1}{8k + 5}\ -\ \frac{1}{8k + 6}\right) \quad .

The original paper in which Bailey, Borwein, and Plouffe published this result appeared in 1997. [7]

Don’t freak out, just breathe deeply and look at it for a bit. [8] How they came by this equation is an interesting story in itself, but not one to go into here. It’s enough to know that it can be proven to be correct. (In fact, Adamchik and Wagon actually say “The formula is not too hard to prove…”; if you didn’t read note #5, now would be a good time!)
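Proofs aside, you can at least watch it work numerically. A direct term-by-term summation in Python (my sketch; ordinary double precision is enough here):

import math

def bbp_pi(terms):
    # sum the BBP series directly, term by term
    s = 0.0
    for k in range(terms):
        s += (1 / 16**k) * (4/(8*k + 1) - 2/(8*k + 4)
                            - 1/(8*k + 5) - 1/(8*k + 6))
    return s

print(bbp_pi(12))   # 3.1415926535897...
print(math.pi)      # agrees to roughly 15 digits after only a dozen terms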

Isn’t it remarkable looking! I saw it and my reaction was along the lines of “how could \pi ever be equal to that thing?” This thing, by the way, is referred to as the “BBP formula” or the “BBP algorithm”. And then the talk about this equation started to get a little weird, and my original train of thought about how to calculate \pi derailed.

People wrote about the BBP algorithm in very admiring terms, like “the formula of the century”, which started to sound like hyperbole, but really wasn’t. Then I tripped over some statements like this one [9] :

Amazingly, this formula is a digit-extraction algorithm for \pi in base 16.

Now, there’s a statement I had to pause and think about for a while to make some sense of.

The “base 16″ part was easy enough. Usually we see the decimal expansion of \pi written in our everyday base 10 (this is truncated, not rounded):

3.1415926535

In base 16 the expansion looks like this (also truncated) [10] :

3.243F6A8885

In the same way that the digits after the decimal point in the base-10 expansion mean this:

\pi  \approx 3\ +\ \frac{1}{10^1}\ +\ \frac{4}{10^2}\ +\ \frac{1}{10^3}\ +\ \frac{5}{10^4}\ + \ \cdots

the hexadecimal expansion means simply this:

\pi  \approx 3\ +\ \frac{2}{16^1}\ +\ \frac{4}{16^2}\ +\ \frac{3}{16^3}\ +\ \frac{F}{16^4}\ + \ \cdots

where the hexadecimal digits “A, B, C, D, E, and F” have the decimal values “10, 11, 12, 13, 14, and 15”.
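To check that expansion, you can peel off hexadecimal digits one at a time: multiply the fractional part by 16, and the whole-number part that pops out is the next digit. A sketch using mpmath for precision (my code, not from the cited page):

from mpmath import mp

mp.dps = 30
x = mp.pi - 3                # the fractional part of pi
digits = ""
for _ in range(10):
    x *= 16
    d = int(x)               # the next base-16 digit, a whole number 0..15
    digits += "0123456789ABCDEF"[d]
    x -= d
print("3." + digits)         # 3.243F6A8885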

For quite a while I didn’t get the notion of a “digit-extraction algorithm”, which I read to mean that this algorithm could be used to compute the k^{th} digit in the hexadecimal representation without calculating all the preceding digits.

Now, that’s an amazing assertion that required some understanding. How could that be possible? You’ve seen enough of the series formulæ for \pi to see that the way one calculates is to keep grinding out digits until you get to, say, the millionth one, or the billionth one, which can take a long time.

If only I had first written down the equation above showing how the various hexadecimal digits add together as the numerators of powers of (1/16), it would have been obvious. Look back now at the BBP equation. See that factor

\frac{1}{16^k}

that comes right after the big sigma? That’s the giant clue about the “digit-extraction” property. If the expression in the parentheses happened to be a whole number between decimal values 0 and 15, then the k^{th} term would be exactly the k^{th} digit in the hexadecimal expansion of \pi.

That’s an amazing idea. Now, it’s not exactly the k^{th} digit, because that expression doesn’t evaluate to a whole number between 0 and F (hexadecimal). Instead, it evaluates to some number smaller than 1, but not incredibly smaller than one. (If, e.g., k = 100 it’s about 0.00002.)

Philosophically, and mathematically, it’s important that it’s not exactly a digit-extraction algorithm, but an approximate one. You can’t calculate the millionth digit simply by setting k = 1,000,000 and computing one term.

Remarkably enough, though, the BBP algorithm can be, in effect, rearranged to give a rapidly converging series for the millionth digit, if you choose to calculate that particular digit. [11]
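For the curious, here is my compact reading of that rearrangement, the standard trick described in Bailey’s paper: the powers of 16 in the head of the series are computed modulo (8k + j), which throws away everything that would only affect digits before position d. Treat it as a sketch; the helper names are mine, and ordinary floating point limits it to a handful of hex digits at a time:

def pi_hex_digits(d, n=6):
    # n hexadecimal digits of pi, starting just after hex position d
    def frac_series(j):
        # fractional part of sum over k of 16^(d-k) / (8k + j)
        s = 0.0
        for k in range(d + 1):  # here 16^(d-k) is an integer; keep it mod (8k+j)
            s = (s + pow(16, d - k, 8*k + j) / (8*k + j)) % 1.0
        for k in range(d + 1, d + 15):  # small tail, where 16^(d-k) < 1
            s += 16.0**(d - k) / (8*k + j)
        return s % 1.0

    x = (4*frac_series(1) - 2*frac_series(4)
         - frac_series(5) - frac_series(6)) % 1.0
    out = ""
    for _ in range(n):
        x *= 16
        out += "0123456789ABCDEF"[int(x)]
        x %= 1.0
    return out

print(pi_hex_digits(0))        # 243F6A, the hex digits right after the point
print(pi_hex_digits(1000000))  # six hex digits starting a million places in

No digit before position d is ever produced, yet the digits around the millionth place come out in seconds on an ordinary machine.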

Now, in the realm of true hyperbole, I read some headlines about the BBP algorithm that claimed the algorithm suggested that there was some pattern to the digits of \pi and that the digits are not truly random — words evidently meant to be spoken in a hushed voice implying that the mysteries of the universe might be even more mysterious. Bugga bugga!

Now, that business about how maybe now the digits of \pi aren’t random after all — it’s hogwash because the digits of \pi never were random to begin with. It’s all a confusion (or intentional obfuscation) over what “random” means. The number \pi is and always has been irrational, transcendental, unchangeably constant, and the individual digits in the decimal (or hexadecimal) expansion are and always have been unpredictable, but not random.

They cannot be random since \pi has a constant value that is fixed, in effect, by all the mathematical expressions that it appears in. The millionth digit has to be whatever it is in order to make \pi satisfy the constraints of the rest of mathematics. It’s the case that all of the digits are essentially fixed, and have been forever, but we don’t know what their values are until we compute them.

Previously, to know the millionth digit we had to calculate digits 1 through 999,999; with the BBP algorithm that’s not necessary and a shorter calculation will suffice, but that calculation still involves a (rapidly converging) infinite series. Individual digits are not specified, not really “extracted”, but they are now individually calculable to arbitrary precision. And, since individual digits are whole numbers with decimal values between 0 and 15, reasonable precision on calculating the digit tells you what the actual digit is.

Still, it is an amazing formula, both practically and mathematically. And now I just tripped over a paper by the BBP authors about the history of calculating \pi. [12] Maybe I can finally get my train of thought back on track.
———-
[1] At the core is mimeTeX, a remarkable piece of software written by John Forkosh that makes it possible for those of us familiar with TeX, the mathematical typesetting language created in the late 1970s by computer scientist Donald Knuth, to whip up some complicated equations with relative ease.

[2] This limit is the same concept that is central in calculus and has been examined with great care by anyone who has taken a calculus course, and even more intensively by anyone who’s taken courses in analysis. But, for the present, there’s no need to obsess over the refinements of the idea of limits; the casual idea that comes to mind in this context is good enough.

[3] It was proven by Leonhard Euler in 1735. If you really, really want to see a proof of this assertion, the Wikipedia article on the Basel problem will provide enough details to satisfy, I expect.

[4] The page called “Pi Formulas” at Wolfram MathWorld has a dazzling collection of such equations.

[5] There is an old joke of this type about mathematicians. We used to say “The job of a mathematician is to make things obvious.” This refers to the habit among those professionals of saying “obviously” about the most obscure statements and theorems. Another version of the joke has a mathematician writing lines of equations on a chalkboard during class and reaching a point when he says “it’s obvious that…”, at which point he pauses and leaves the room. The class is mystified, but he returns in 20 minutes, takes up where he left off in his proof, saying “Yes, it is obvious that….”

The version of this that one finds in math and physics textbooks is the phrase, printed after some particularly obscure statement, “as the reader can easily show.” That phrase appeared once in my graduate text on electrodynamics (by J.D. Jackson, of course, for those in the know), and showing it “easily” took two full periods of my course in complex analysis.

[6] Victor Adamchik and Stan Wagon, “\pi: A 2000-Year Search Changes Direction”, Mathematica in Education and Research, vol. 5, no. 1 (1996), pp. 11–19.

[7] David Bailey, Peter Borwein, and Simon Plouffe, “On the Rapid Computation of Various Polylogarithmic Constants”, Mathematics of Computation, vol. 66, no. 218 (April 1997), pp. 903–913; reprint available online.

[8] And remember that writing the 8 next to the k just means to multiply them together. Although I used n as the index in the example above, here the index is called k — the name doesn’t matter, which is why it’s sometimes called a dummy index.

[9] For an example: Eric W. Weisstein, “BBP Formula“, MathWorld–A Wolfram Web Resource, accessed 23 September 2007 [the autumnal equinox].

[10] “Sample digits for hexa-decimal digits of pi“, 18 January 2003.

[11] If you want to know the details, there’s a nice paper written by one of the original authors a decade later in which he shows just how to do it: David H. Bailey, “The BBP Algorithm for Pi“, 17 September 2006, (apparently unpublished).

[12] David H. Bailey, Jonathan M. Borwein, Peter B. Borwein, and Simon Plouffe, “The Quest for Pi”, Mathematical Intelligencer, vol. 19, no. 1 (Jan. 1997), pp. 50–57; reprint available online.


Approaching Mars

Posted by jns on August 22, 2007

Before I even get that ridiculous e-mail about how Mars will soon look as big as the moon because of a close approach to Earth, here’s a note from NASA:

August 21, 2007: By the time you finish reading this sentence, you’ll be 25 miles closer to the planet Mars.

Earth and Mars are converging, and right now the distance between the two planets is shrinking at a rate of 22,000 mph–or about 25 miles per sentence. Ultimately, this will lead to a close approach in late December 2007 when Mars will outshine every star in the night sky. Of a similar encounter in the 19th century, astronomer Percival Lowell wrote the following: “[Mars] blazes forth against the dark background of space with a splendor that outshines Sirius and rivals the giant Jupiter himself.”

Contrary to rumor, though, Mars is never going to outshine the Moon.

There is an email circulating the internet—called the “Mars Hoax” or the “Two Moons email”—claiming that Mars will soon swell as large as the full Moon, and the two will hang together side by side on the night of Aug. 27th. “Mars will be spectacular,” it states. “No one alive today will ever see this again.”

No one will see it, because it won’t happen.

It is true that Earth and Mars are converging–you’re now 300 miles closer–but even at closest approach the two planets are separated by a gulf of tens of millions of miles. From such a distance, Mars looks like a star, an intense yet tiny pinprick of light, never a full Moon.

[excerpt from Dr. Tony Phillips, "Hurtling Towards Mars", Science @ NASA, 21 August 2007.]

I rather like the poetry of their special-purpose units for velocity: miles / (sentence read).


Heat to Sound to Electricity

Posted by jns on June 14, 2007

From a recent Physics News Update comes this half-science, half-technology report about a device that uses heat to make electricity, with sound as an intermediary.

The story is interesting enough by itself, but it is also a useful illustration that sometimes there are new ideas in science and technology that are not as inscrutable as general relativity or string theory, but are nevertheless pretty startling and understandable.

There’s really nothing in this report that requires much in the way of deep technical or scientific understanding, although it might help if I describe the idea of the piezoelectric effect a little. There are some substances, largely ceramics but also some naturally occurring crystals, that exhibit this property: applying stress to them (e.g., squeezing them) creates an electrostatic charge, i.e., a voltage across the crystal. Sometimes this property is used in reverse: put a voltage across a piezoelectric substance and it expands by a tiny amount. Piezoelectric devices are often used, therefore, to make precision actuators, devices that move things closer together or further apart depending on an applied voltage.

TURNING HEAT INTO ELECTRICITY THROUGH SOUND has been demonstrated by the University of Utah group of physicist Orest Symko. The group has built devices that can create electricity from the heat that would otherwise be wasted in objects such as computer chips. The devices might potentially make extra electricity from the heat of nuclear power plant towers, or remove extra heat from military electronics.

At last week’s meeting of the Acoustical Society of America in Salt Lake City, five of Symko’s students demonstrated the latest versions of the devices, which they have been developing for a few years. The devices first convert heat into sound, and then sound waves into electricity. Typically, each device is a palm-sized cylinder containing a stack of materials such as plastic or metal or fiberglass. Applying a heat source, such as a blowtorch, to one end of the stack creates a movement of air which then travels down the cylindrical tube. This warm, moving air sets up a sound wave in the tube, similar to the way in which blowing air into a flute creates sound. The pitch, or frequency, of the sound wave depends on the dimensions of the tube; current designs blast audible sound, but smaller devices would create ultrasound. The sound wave then strikes a piezoelectric crystal, a commercially available material that converts sound into electricity when the sound waves put pressure on the crystal.

Symko says a ballpark range of 10-25% of the heat gets converted into sound in typical situations. The piezoelectric crystals then convert about 80-90% of the sound energy into electrical energy. Symko expects the devices to be used in real-world applications within two years, and may provide a better alternative to photovoltaic solar cells in some situations. (Session 5aPA at meeting; also see University of Utah press release at http://www.unews.utah.edu/p/?r=053007-1)

[Phillip F. Schewe and Ben Stein, Physics News Update: The American Institute of Physics Bulletin of Physics News, Number 828, 13 June 2007.]


More to Worry About

Posted by jns on June 5, 2007

I know there are people who can’t sleep at night worrying about the impending explosion of the sun or the heat-death of the universe. Global warming is no doubt adding to their insomnia. Now it turns out that the consequences of global warming are even worse than we thought:

WARM THE WORLD, SHRINK THE DAY.
Global warming is expected to raise ocean levels and thereby effectively shift some ocean water from currently deep areas into shallower continental shelves, including a net transfer of water mass from the southern to the northern hemisphere. This in turn will bring just so much water closer to the Earth’s rotational axis, and this — like a figure skater speeding up as she folds her limbs inward — will shorten the diurnal period [i.e., the length of the day]. Not by much, though. According to Felix Landerer, Johann Jungclaus, and Jochem Marotzke, scientists at the Max Planck Institute for Meteorology in Hamburg, the day should shorten by 0.12 milliseconds [0.00012 seconds] over the next two centuries. (Recent issue of Geophysical Research Letters.)

[Phillip F. Schewe and Ben Stein, "Physics News Update: The American Institute of Physics Bulletin of Physics News", Number 826, 30 May 2007.]


Reason vs. Faith, Again

Posted by jns on June 3, 2007

This week Bob Park (What’s New for 1 June 2007) revisits presidential candidate Sam Brownback’s positive response when asked during a debate whether he was one who did not “believe” in evolution:

BELIEFS: BROWNBACK DEFENDS SCIENTIFIC ILLITERACY BY EXAMPLE.
A month ago at the Republican Presidential debate, there was a show of hands of those who don’t believe in evolution. One who raised his hand, Sam Brownback, was moved to explain why in yesterday’s New York Times: “I believe wholeheartedly that there cannot be any contradiction between faith and reason.” Which faith does he have in mind? Different faiths are often at war with each other, but no wars are fought over science. Science relies on Nature as the sole arbiter. There was much more, all in the language of the intelligent design movement, including the substitution of “materialism” for “naturalism.”

The op-ed in question is “What I Think About Evolution” (Sam Brownback, New York Times, 31 May 2007). In it he, apparently, tries to soften his position and find a way to say that he doesn’t not believe in evolution, mostly by trying to deny most of what evolution is about, and then claiming that he doesn’t not believe in that. It’s not a successful tactic.

In matters of conflict between science and theology, there is a famous aphorism of the late John Paul II: “Truth cannot contradict truth”, which is to say that if there is an apparent conflict between theological truth and scientific truth, it must be apparent only and due to incomplete understanding, because “truth cannot contradict truth”. Not so long ago I wrote an essay on the matter (“Evolution and the Vatican”), in which I ended up tracing the “truth cannot contradict truth” idea back to Pope Leo XIII, and then following forward papal writings and attitudes about evolution. In the context of mature Catholic theology it makes clear sense. Once again, it reminds me of my feeling that a mature theology like that of the Catholic church makes what passes for fundamentalist theology seem juvenile and exceedingly simple-minded by comparison.

Unfortunately, Mr. Brownback misunderstands and perverts the deep significance of “truth cannot contradict truth” — quite knowingly, I suspect — by offering in his op-ed “clarification” this updated fundamentalist version:

The heart of the issue is that we cannot drive a wedge between faith and reason. I believe wholeheartedly that there cannot be any contradiction between the two.

In other words: “faith cannot contradict reason”. Or, I suspect, he’d prefer “reason cannot contradict faith”, because he goes on to say that “Faith seeks to purify reason…”, which does not indicate a comparison of equals. He seems to assert that reason and faith are equally reliable except when there’s a contradiction, then faith wins — of course.

“Faith” is not interchangeable with a concept like “theological truth”. Faith, claimed as a revelation by the faithful, has virtually no connection to theological debate — debate is not necessary — and no connection to the use of reason which, in the context of a mature theology, is a God-given faculty provided to assist in the discovery of “truth”. “Faith” is a personal matter, but hardly the foundation of doctrine or theology.

Is this a naive misinterpretation of the John Paul II aphorism, or a willful bending to suit Brownback’s own purposes? Either one is deplorable, and neither does much to bolster Brownback’s claim that he doesn’t reject evolution, or at least not entirely. In my opinion Brownback has only dug his hole deeper, but I’m sure his supporters will have faith that it brings him closer to heaven.


Global Warming Fact-Sheet

Posted by jns on May 27, 2007

Via NASA’s Earth Observatory mailing list my attention was drawn to their newly freshened Global Warming fact sheet, written by Holli Riebeek (dated 11 May 2007), and I wanted to take this space to draw more attention to it.

As most of my readers will know, there’s a great deal of disinformation and obfuscation in our current global-warming “debate” here in the US, a concerted effort by some business and political forces to confuse the public into thinking that there is no scientific consensus on anthropogenic climate change, i.e., global warming caused by carbon-dioxide (and other greenhouse-gas) emissions being pumped into the atmosphere from human sources.

There is consensus among scientists working in the field; how and why and what it all means is nicely summarized in this succinct and accurate fact sheet. Without being patronizing and without distorting the information, it’s a clear and understandable presentation of what we (the science “we”) know about global warming, the trends, the causes, and the likely or possible consequences.

In particular, the author addresses this question:

But why should we worry about a seemingly small increase in temperature? It turns out that the global average temperature is quite stable over long periods of time, and small changes in that temperature correspond to enormous changes in the environment.

It keeps popping up as a joke, especially during wintertime or a cool day in the summer, when people casually say “I wouldn’t mind a bit if it were a degree or two warmer”.

What is missing in this superficial understanding is a realization that, overall, the Earth’s temperatures are quite stable on average, and that very small changes in average temperatures can have very, very large effects on weather patterns, which in turn lead to surprisingly large shifts in the weather we get at any particular location. In other contexts this is sometimes called “the butterfly effect”: consequences can be out of all proportion (i.e., nonlinear) to the causes. Ice ages have been accompanied by changes in the average global temperature of only about 5°C — which doesn’t sound all that big.

This is discussed quite well in the fact sheet, and summarized (in part) this way:

Potential Effects

The most obvious impact of global warming will be changes in both average and extreme temperature and precipitation, but warming will also enhance coastal erosion, lengthen the growing season, melt ice caps and glaciers, and alter the range of some infectious diseases, among other things.

For most places, global warming will result in more hot days and fewer cool days, with the greatest warming happening over land. Longer, more intense heat waves will become more frequent. High latitudes and generally wet places will tend to receive more rainfall, while tropical regions and generally dry places will probably receive less rain. Increases in rainfall will come in the form of bigger, wetter storms, rather than in the form of more rainy days. In between those larger storms will be longer periods of light or no rain, so the frequency of drought will increase. Hurricanes will likely increase in intensity due to warmer ocean surface temperatures.

It’s a good piece and a few minutes invested in reading through it will arm the reader with better understanding that will help cut a confident path through the thicket of opinions and misinformation that have clogged the information superhighway on the issue lately.


Exponential Growth

Posted by jns on May 11, 2007

Here’s a quick question with a pedagogical purpose. Would you buy a battery from this man?

“The energy capacity of batteries is increasing 5 percent to 8 percent annually, but demand is increasing exponentially,” Mr. Cooper[, vice president for business development of PolyFuel Inc., a company working on battery technology,] said.

[Damon Darlin and Barnaby J. Feder, "Need for Battery Power Runs Into Basic Hurdles of Science", New York Times, 16 August 2006.]

Forget basic hurdles of science; the basic hurdle here would seem to be an executive in a technical industry who doesn’t understand what exponential growth is.

In short: growth of something that is proportional to the current size of that thing is exponential growth. Thus, battery capacity that grows 5% to 8% annually — i.e., by 0.05 to 0.08 times its current value each year — is growing exponentially, too.

The constant that governs how fast something grows exponentially is the “growth rate”. Small growth rate = slow growth; large growth rate = fast growth. In symbols, an exponential function of time, t, is

f(t)\quad = \quad A\ e^{st}

where A is a constant amplitude and s is the growth rate. If s is relatively large, f(t) changes value rapidly; if s is very small, f(t) changes value slowly. If s happens to be a negative number, f(t) disappears over time, quickly or slowly depending on the size of s. The letter ‘e’ represents the base of natural logarithms. Why it shows up in the exponential function takes some explanation; for now, just think of it as a constant number nearly equal to 2.718 and don’t lose any sleep over it.*

Many people think “exponential growth” means “grows really, really quickly”, but this is a misconception. It is true that exponential growth — multiplying a quantity over and over again by some fixed number, say, 47 — eventually outruns any algebraic, power-law growth, all other things being equal, but any particular exponential function will grow slowly or quickly depending on its growth rate. Think of a $0.15 deposit in a bank account that pays compound interest; the account grows exponentially but it’s going to be a while before you’re a millionaire.
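To put numbers on that (my arithmetic, with Python doing the logarithms): at 5 percent annual growth the account doubles only about every 14 years, and the trip from $0.15 to $1,000,000 takes more than three centuries.

import math

rate = 0.05  # 5% annual growth: exponential, but with a small growth rate

doubling_time = math.log(2) / math.log(1 + rate)
print(doubling_time)   # about 14.2 years per doubling

years = math.log(1000000 / 0.15) / math.log(1 + rate)
print(years)           # roughly 322 years from $0.15 to $1,000,000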

So please, please can we stop saying things like “Wow! That growth is so exponential! It’s huge!”

And if I were you, I don’t think I’d buy a battery from Mr. Cooper, either.
———-
* In fact, ‘e’ is irrational (not expressible as a ratio of two integers, or whole numbers) and transcendental (not the solution of any algebraic equation, which is to say a polynomial equation with rational coefficients and integer powers). But that’s a lot of other story that we needn’t go into right now.


Don’t Need no Science

Posted by jns on May 11, 2007

In Bob Park’s What’s New for 11 May 2007 is this quick summary of the Republican presidential-candidate field, demonstrating that science is not a conservative, traditional-family value and that Ars Hermeneutica has its work cut out for it:

BELIEFS: SCIENTIFIC ILLITERACY REACHES CLEAR TO THE TOP.
Last week at the Republican presidential debate, moderator Chris Matthews asked whether any of the wannabes did not believe in evolution. Sam Brownback, Mike Huckabee and Tom Tancredo raised their hands. John McCain waffled: “I believe in evolution,” he said, “but I also believe when I hike the Grand Canyon that the hand of God is there also.” The Sunday Washington Post pointed out that they weren’t that far from mainstream. In an ABC poll a year ago, 61% thought Genesis is literally true.