A vintage 1940s J-3 Piper Cub sits on the tarmac at Sterling.

One hour west of Boston, past the Wachusett reservoir and the farms draped on rolling hills, down a tree-lined road with a stone arch bridge over a quiet, shaded creek, lies a little strip of tarmac in a big grass field, surrounded by the central Massachusetts forest. A dozen small airplanes sit on the cracked asphalt next to rusted-out hangars with oil-stained floors. This is Sterling Airport, and as you walk to the door, a big green sign greets you: LEARN TO FLY HERE.

The weather on this Saturday afternoon: clear skies, 50 ºF, over 10 miles visibility—more than enough to see Mt. Wachusett on the northwest horizon. It’s a spring day in February, and a beautiful one for flying. But the only thing up there now is a hawk, making lazy circles over the sun-beat tarmac, soaring on the rising air.

Inside the airport operator’s building, it smells like old couches, and three greying white men are sitting on them. Hangar talk. I pour myself a cup of coffee from the communal pot and leave a mental note to put a buck in the adjacent jar before I go. I sit down next to a tall, balding man with big glasses named Dan. He points to Richard, the man across the room who’s currently holding court. Dan lives 50 miles west of Sterling. Is he here to fly? “No,” he says. “Just came to talk.” He sits silently in his chair as Richard carries on about autopilots and air shows, and whether he’ll make it this year to Sun N’ Fun in Florida, and how you wouldn’t believe how much Oshkosh has changed.

Hanging on the wall in a row wrapping around the room and into the hallway is a flight school tradition: the T-shirts worn by student pilots on their first solo, signed with the date and the plane’s tail number. They begin in the early 90s and stop at 2007. One of them bears the slogan: “Sterling Airport: Grass roots aviation at its finest!”

If you take that slogan at its cheery word, then the grass has stopped growing. General aviation—that is, personal, non-airline flying—in the United States is in a nosedive. Rising fuel costs have rendered it an expensive hobby; as a mode of personal transportation, it hasn’t gotten any more practical, either. In 1980, according to FAA numbers, there were over 200,000 student pilots in the process of earning their private pilot’s license. In 2009, there were 73,000. The number of active pilots has fallen from 827,000 to less than 600,000.


Even on a warm spring day, there can be more sitting around than flying.

I got my pilot’s license in high school, training at a small airfield much like this one, off the state highway in the south end of my northern Wisconsin hometown. When I was a kid, Popular Mechanics still published cover stories about flying cars. But today, the personal airplane seems like a thing of the past, not the future.

After me, the next youngest person in the room at Sterling is Renee, a woman in her mid-40s with brown hair and a lilting Midwestern accent. I ask how many kids are training here these days. “It’s…a reasonable number,” she says. “A lot less than a few years ago.” She glances at the rates written on the wall: $88/hr to rent the Cessna 150 two-seater, plus $45 for the instructor. It’s double what I paid growing up in Wisconsin. “Not many can afford it anymore.” She sighs.

When I was a kid, I heard the small airplanes pass over my yard every day. I always looked up. It wasn’t the freedom that I craved, the “tumbling mirth of sun-split clouds in footless halls of air” that John Gillespie Magee, Jr. immortalized in his sonnet “High Flight.” No, more than anything else, I wanted to fly so that I could look down. I wanted perspective. I wanted to know what my house looked like from above—to find out where I was, to place myself amongst my surroundings. The year after I got my license, an internet company in the Bay Area launched a website called Google Maps. And eventually, I stopped looking up.


Final approach to the grass strip at Sterling in a glider. Mt. Wachusett is just left of center on the horizon.

I remember to look up now as I walk back to my car. The skies are still clear; even the hawk is gone. From the south end of the strip, Mt. Wachusett rises from the treeline, seven miles distant as the hawk flies. On a good summer day, when the ground is warm, you can hop in a glider, get a tow up to 3,000 feet, and soar all the way there to dance across its slopes and ridges on the uplift and glide back to Sterling—or so I’m told. Since I arrived, I haven’t seen a single plane take off or land, nor heard a single engine fire up.

But I’ll be back. I forgot to pay for my coffee.


On League of Denial

When I was in elementary school, the only part of the school day I looked forward to was playing football at recess. I grew up in Wisconsin, and the mid–90s was a good time to be a Packers fan—the Mike Holmgren glory years. My friends and I would draw up our plays and pretend we were Brett Favre, tossing TDs to Sterling Sharpe and Robert Brooks, handing off to Edgar Bennett and Dorsey Levens. My best friend was a perpetual team captain, and even though I was clearly the worst football player out there, I could always count on him to not pick me last. That’s what friends do.

There was just one problem for us: The school administrators would never let us play tackle. On the gridiron of Cooper Elementary, four-hand touch ruled, until our principal banned even that, and we had to resort to two-hand touch. If they weren’t watching, we tackled anyway. Usually, nobody got hurt. But every now and then, two kids would run into each other, you’d hear the clang of skulls clashing, one of them wouldn’t get up, and we’d stand around and hope they would before a grownup noticed something was wrong.

I’ve never put on a football uniform, but those days on the playground came back to me yesterday when I finally got around to watching League of Denial, PBS Frontline’s excruciating two-hour special on how the NFL spent over a decade covering up and attempting to discredit a growing body of scientific evidence linking football to long-term brain damage and CTE. There’s not much new to report, but to see the full body of evidence against the NFL so compellingly told is nothing short of damning. It highlights the NFL’s hypocrisy, selling its violence while hiding its consequences. And although it doesn’t say so explicitly, it inevitably questions our own complicity in consuming a sport so cruel to those we pay to play it.

League of Denial is in part a straightforward whistleblower narrative, and it finds its heroes and villains quickly. One hero is Bennet Omalu, the Pittsburgh medical examiner who discovered and published the first known case of CTE in an NFL player, Steelers center Mike Webster. Another is Ann McKee, the Boston University brain researcher and lifelong football fan who would go on to find it in 45 of her 46 cases. That juxtaposition was perfectly captured in one shot that brought a smile to my face: a New York Giants helmet on her bookshelf next to a boxed copy of EndNote, a software tool academics use to manage references and bibliographies. (Personally, I prefer BibTeX.)
With similar economy, the film needs only two lines of taut narration to establish one bad guy of the mid–90s: “NFL Commissioner Paul Tagliabue orchestrated the league’s response. Tagliabue had begun his career as a lawyer.”

It was he who dismissed the concussion crisis as the manufactured product of “pack journalism” and personally appointed a rheumatologist to a sham medical committee to “investigate” the issue. The committee cranked out shoddy papers with sweeping denials based on small sample sizes that the editor-in-chief of the journal Neurosurgery, himself a consultant for the New York Giants, accepted over the objections of his own sports section editor.

As the film chronicles the researchers’ struggle to raise awareness, clashing with the NFL’s panel of experts, there are plenty of chilling moments:

  • Bennet Omalu, reflecting on the entire episode: “I wish I had never met Mike Webster. CTE has drawn me into the politics of science, the politics of the NFL. You can’t go against the NFL. They’ll squash you. I really sincerely wished it didn’t cross my path of life. Seriously.”
  • Chris Nowinski, co-director of BU’s CTE research center, who went from Harvard football player to pro wrestler, taking countless blows to the head along the way: “What motivated me every day was that my head was killing me.”
  • And the haunting voiceover of Junior Seau in an NFL Films production, years before he would succumb to CTE at only 43 years old and put a bullet through his heart, preserving his brain for research: “You have to sacrifice your body. You have to sacrifice years down the line. When we’re 40, 50 years old, we probably won’t be able to walk. That’s the sacrifice that you take to play this game.”

The other central figures of the documentary are the journalists themselves. The film is based mostly on reporting by Mark Fainaru-Wada and Steve Fainaru, brothers who both work for ESPN. And as it goes on, it acquires a muckraking metanarrative, as the Fainarus and New York Times reporter Alan Schwarz (who was nominated for a Pulitzer for his coverage) work to cover the story. This is only enhanced by the fact that ESPN, which had collaborated with Frontline for most of the production, suddenly pulled out just weeks before it was to air. Although the network claimed it quit because it didn’t exercise “full editorial control,” the New York Times reported it was due to NFL pressure applied personally by Roger Goodell. (Both the NFL and ESPN denied this, and ESPN has since aired excerpts of the documentary and promoted the Fainarus’ book of the same name.)

Although the film is built upon a David vs. Goliath structure, at its core is a health and science issue: can playing football cause CTE? It plays out not on the field or in dramatic Congressional hearings but in stuffy medical journals. League of Denial is at its most innovative when its reporters make this academic prose leap off the page. A key moment is a letter to the editor in which the NFL’s in-house concussion committee calls on Omalu to retract his paper. In the retelling of ESPN’s Peter Keating, this is the league going after him “with a vengeance…like a nuclear missile strike on his reputation!”


But the propulsive narrative struggles to capture fine detail. As in many a science story, the “more work must be done” caveats from independent sources are shoehorned in at the end. The way the film treats McKee’s accusation of sexism on the part of the NFL’s researchers is awkward and unsure, and a less than illuminating glance at the insidious ways in which sexism can infect academia. And there’s a big story left uninvestigated: How did the NFL committee’s papers, with their bad science and obvious conflicts of interest, get published in a reputable peer-reviewed medical journal in the first place?

But those aren’t things that the filmmakers chose to focus on, which is fine. Although there’s nothing new here, League of Denial stands as a definitive document of the subject—for now. The issue of concussions will not go away, and it will only resonate more as research follows up on the troubling preliminary findings of CTE in teenagers who played football.

But maybe the biggest repercussions will be with the viewers. Like most football fans, I’ve been following this sad, depressing story for a while now, and it’s changed the way I see the sport. For the sake of argument, forget the outrageous deceit on the part of the NFL. Simply knowing the full consequences of the sport’s violent nature makes me question whether my own enjoyment of it is ethical. Does the fact that the players (now) knowingly accept the risk let me off the hook? I’ll say this: the visceral thrill of watching a big hit is gone. Or rather, it’s immediately replaced by nagging doubt: How many future memories does that one hit erase?

I’ve had versions of this conversation with several of my friends—usually over beers at a bar while watching football. We just can’t pull ourselves away. Even as my childhood hero Brett Favre says he can’t recall what sports his own daughter plays, and that “God only knows the toll” that football takes.

I’m glad my mom never saw us play tackle.

Half of the internet uploads photos these days, it seems


The creative class keeps on rising, and expanding.

For the first time, over half of the US adult internet population has posted a photo or video, according to the latest data from the Pew Internet & American Life Project.

Pew surveyed 1,000 adults and found that 54% of adult internet users have posted some of their own photos or video to the web. Last year, that figure was 46%, a jump of eight percentage points in a single year.


Almost as many are content curators—47% have shared a photo or video that someone else uploaded. I checked the crosstabs for how many of those were cat macros, but sadly, Pew didn’t ask that question.

“Pictures document life from a special angle,” stated Maeve Duggan, one of the authors of the study, with no apparent trace of irony.

For the first time, Pew did ask adult internet users whether they used certain apps. Turns out 18% use Instagram and 9% use Snapchat. There’s no data on teens, though, among whom Snapchat usage is likely substantially higher.

Another caveat to those numbers: Pew didn’t ask whether the people “using” Instagram and Snapchat were also posting pictures to the services, or just looking at the feeds of the people they follow.

With the rise of smartphones (only 4% in the survey didn’t know what the term meant), it’s no surprise to find that everyone is a photographer these days. Even on pro-leaning Flickr, the most popular camera is an iPhone—and so is the second most popular, and the third.


The margins of error of Pew’s results ranged from ±3.7 to ±4.0 percentage points.

Miracle of Science (Cambridge)

This slideshow requires JavaScript.

So a science writer walks into a bar. He’s by himself.

Actually, that’s not a joke, that was just my Monday night.

The bar is called Miracle of Science, close to the MIT campus in Cambridge. I’ve been drawn in by the bar’s most prominent feature: its enormous menu, in the form of a periodic table. It’s hand-drawn on a chalkboard covering an entire wall. In the top corner of the table, where hydrogen would be, sits the most fundamental element of a bar menu, Hb for hamburger, below it Cb for cheeseburger, Vb for, well, you get the idea. Yes, just like the periodic table, the menu is grouped into columns and color-coded based on the item’s properties. Br for brownie, not bromine.

Miracle of Science opened in 1991; the menu was designed in 2002 by a bartender who has long since moved on. His initials, RR, are still visible in the corner.

The Ronie Burger is the best thing on the menu, according to a bartender and the guy sitting next to me at the bar. “It comes with pepperjack cheese and jalapeños actually stuffed inside the patty,” he tells me. It lives up to expectations, spicy enough to make me break out in a sweat. The skillet home fries and salsa that come with it are a nice touch.

Two years after the menu went up, Popular Science named it one of the “top nerd bars” in the nation. Back then, PopSci said the tables were “surrounded by microscopes and other lab paraphernalia.” But today, the decor is modern, minimalist, and trendy. Game 5 of the World Series is on TV, but it’s on mute. The ESPN logo is burned into the corner of the screen. A 90s mix is playing. It’s a young professional crowd drawn from nearby tech firms—girls with big hipster glasses, guys huddled over laptops and their drinks.

Despite the name, there’s not much more of a science theme. But if you want real science with your drinks, you can go next door to Middlesex Lounge, run by the same owners. Middlesex actually hosts nerdy events, including Boston’s monthly Nerd Nite and science cafes hosted by WGBH’s long-running science program Nova. Together, these two bars form a decent scientific core in the Central Square scene.

Matthew Curtis and Chris Lutes, the co-owners, also own Audubon Circle, Cambridge 1, and Tory Row, all similarly decorated trendy pubs. But they’re not too interested in advertising, says a bartender who declines to tell me his name and pleads with me: “Keep it unofficial, ok?”

A Strange Lonely Planet


An artist’s impression of the planet-sized L dwarf PSO J318.5-22. (MPIA/V. Ch. Quetz)

Earlier this week, a press release hit my inbox that made me say, “Ooooooh!” out loud. Its headline was: “A Strange Lonely Planet Found without a Star” and it came with an image.

“Oooooh!” I said again. An image of a planet without a star? Free-floating through the lifeless void? My imagination rumbled to life and started to jump to conclusions.

You see, I’ve wanted for a while now to write a very sad science fiction novel about such a starless scenario. This dream of mine has been motivated by a real science problem: planetary migration.

In the early days of exoplanet studies (way back in the mid-1990s!), the very first planets to be discovered were known as hot Jupiters—giant gaseous planets closer to their host stars than Mercury is to our Sun with temperatures in the thousands of degrees (Centigrade, Fahrenheit, or Kelvin—take your pick). Their existence continues to be a puzzle, because they could not have formed where they are. Both a star and its surrounding planets form when gravity pulls a cloud of gas together into clumps. But in these cases, the energy radiating from the young star should have blown the gas away before it could coalesce into a planet. Only small, rocky planets should be able to form there. The logical theory is that these hot Jupiters had to form much further out where it was colder—like where our own Jupiter is in our solar system—but then somehow migrate in.


The “planet” imaged by the Pan-STARRS1 telescope. It was discovered by a US-German team, with follow-up observations from Mauna Kea. (N. Metcalfe & Pan-STARRS 1 Science Consortium)

Multiple theories have been proposed in which the laws of physics conspire to do just that. In one, the newly formed planet slowly spirals in, losing momentum as it plows through the disk of gas surrounding the young star; in another, the gravitational presence of a nearby companion star perturbs the planet, triggering a wild, eccentric orbit that ends next to the star.

But here’s the rub: Either way, if a Jupiter comes barreling in through its solar system, its gravity will likely throw the planets out! Its enormous mass would scatter the planets like a bowling ball, slingshotting them into the dark, vast coldness of space faster than a tumbling Sandra Bullock. In fact, based on the number of these planets astronomers have already detected through gravitational microlensing, we expect upwards of billions of planets to be lost in space, away from their stars.

What a great end-of-the-world sci-fi story that would be! A helpless population, doomed by the inexorable dance of physics! The Earth becomes both our interstellar starship and our coffin! And people look for strength and hope in a world where every day is just a little bit darker and colder than the one before, without end. An art-house apocalypse—not with a bang, but the saddest, coldest whimper.

At first, I thought this discovery was a direct image of a planet so ejected! But then I read the paper, and the word “planet” isn’t how the authors described it in their title: “A Free-Floating Planetary-Mass Analog to Directly Imaged Young Gas-Giant Planets.”

In other words, it very well might not be a planet—at least, not based on how we use the word planet in everyday life—but it’s like a planet. The team seems to think it probably formed like a star, based on the fact that it’s moving in the same direction as other nearby stars, whereas an ejected planet could be going any which way. It just happened to be so small it could be mistaken for a planet! What a bummer.

Actually, outside of my imagination, it’s not really a bummer—it’s a neat opportunity. Actual exoplanets are very difficult to study directly; they’re so close to their stars they get lost in the glare. To the extent that these planet-sized objects actually resemble planets, they give us a chance to nail down their physics unimpeded by their pesky host star. Judging by attributes like its mass, color, and brightness, this “planet” does a fair impersonation, but we still don’t know if objects like these form in exactly the same way as planets.

Regardless, I’ll keep dreaming about writing my novel…

This is my homework assignment.


The story of my life, more or less.

Yes, this plot of Skittles is my homework—for an online class I’m taking through Coursera called Social Network Analysis. This was our first assignment: using a couple of free online tools to download your Facebook friends data and visualize your network of friends.

The ease with which I was able to complete the assignment made me thankful for how (relatively) accessible network science is as a field. Thanks to social network APIs and open-source software, the tools you need to analyze your own social data are easily available. Consider it the social networking version of 23andme—personal memomics, if you will. (And thanks to the NSA, awareness has gone up, too!)

In the graph of my network above, each circle is a friend of mine (known as a “node” in the parlance of graph theory) and each link (or “edge”) between nodes indicates they’re Facebook friends. Not all my friends are connected to all my other friends—there were some free-floating clusters. But for the sake of clarity, I’ve shown only the largest connected component (LCC) of my graph.

The spacing of the nodes is determined by an algorithm based on simple physics. In it, every node repels every other, like magnets. Each link is like a spring, tugging groups of people together based on their common friendships. Another algorithm detects these clusters, draws boundaries, and then assigns them colors. (I went through and annotated some of them.)
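As a rough illustration, here is what such a layout loop can look like in plain Python. This is a minimal Fruchterman–Reingold-style sketch, not Gephi’s actual implementation (Gephi ships more sophisticated algorithms such as ForceAtlas), and the constants are arbitrary:

```python
import math
import random

def force_layout(nodes, edges, iterations=500, k=1.0, seed=42):
    """Minimal force-directed layout: every node repels every other
    (like magnets), and every edge acts as a spring pulling its
    endpoints together."""
    rng = random.Random(seed)
    pos = {n: [rng.uniform(-1, 1), rng.uniform(-1, 1)] for n in nodes}
    for _ in range(iterations):
        disp = {n: [0.0, 0.0] for n in pos}
        # Pairwise repulsion, falling off with distance
        for a in pos:
            for b in pos:
                if a == b:
                    continue
                dx, dy = pos[a][0] - pos[b][0], pos[a][1] - pos[b][1]
                d = math.hypot(dx, dy) or 1e-9
                f = k * k / d
                disp[a][0] += f * dx / d
                disp[a][1] += f * dy / d
        # Spring attraction along edges, growing with distance
        for a, b in edges:
            dx, dy = pos[a][0] - pos[b][0], pos[a][1] - pos[b][1]
            d = math.hypot(dx, dy) or 1e-9
            f = d * d / k
            disp[a][0] -= f * dx / d
            disp[a][1] -= f * dy / d
            disp[b][0] += f * dx / d
            disp[b][1] += f * dy / d
        # Move each node a small, clamped step along its net force
        for n in pos:
            fx, fy = disp[n]
            flen = math.hypot(fx, fy) or 1e-9
            step = min(flen, 0.05)
            pos[n][0] += fx / flen * step
            pos[n][1] += fy / flen * step
    return pos

# Two triangles joined by one bridge edge (2-3)
edges = [(0, 1), (1, 2), (0, 2), (2, 3), (3, 4), (4, 5), (3, 5)]
pos = force_layout(range(6), edges)
```

Run on two triangles joined by a single bridge edge, the repulsion-plus-springs balance pulls each triangle into its own tight cluster—the same effect that makes friend groups visible in the full graph.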

I want to emphasize that this is not a so-called ego network with me at the center. I’m not in the graph at all! Each link between nodes is a direct friendship between those individuals. It shows how my friends are connected to each other, not how connected I am to them. In other words, this is a disclaimer to all my friends out there: You’re all awesome, and how central you are in the chart has nothing to do with how important you are to me!

So what can network science tell me? The analysis tool, called Gephi, calculates several standard network metrics. For example, the “average path length” across the network is 5.2. In other words, there are, on average, just over 4 degrees of separation between any two people in the network—indicating a “smaller” network than the famous “six degrees of separation” maxim. This pattern is replicated over Facebook as a whole—the company’s data team reported in 2011 that the entire network had an average of just 3.74 degrees of separation, and that it was decreasing each year. The world is shrinking.
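The metric itself is easy to reproduce without Gephi. Here is a minimal sketch in plain Python: a breadth-first search from every node, averaging the hop counts over all connected pairs:

```python
from collections import deque

def average_path_length(adj):
    """Average shortest-path distance over all connected ordered pairs;
    `adj` maps each node to the set of its neighbors."""
    total = pairs = 0
    for source in adj:
        # Breadth-first search gives hop counts from `source`
        dist = {source: 0}
        queue = deque([source])
        while queue:
            node = queue.popleft()
            for nbr in adj[node]:
                if nbr not in dist:
                    dist[nbr] = dist[node] + 1
                    queue.append(nbr)
        total += sum(d for t, d in dist.items() if t != source)
        pairs += len(dist) - 1
    return total / pairs

# A four-node chain A-B-C-D: the distances average to 20/12
adj = {"A": {"B"}, "B": {"A", "C"}, "C": {"B", "D"}, "D": {"C"}}
print(round(average_path_length(adj), 2))  # 1.67
```

Degrees of separation is conventionally this hop count minus one, which is how a 5.2 average path length becomes “just over 4 degrees.”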

But for me, the most fascinating aspect wasn’t the numbers, but simply zooming in and browsing through my graph. As a record of my social life, it’s a strange thing. It’s all there—my friendships, my relationships, the bridges I’ve burned. By tracing edges to their nodes, I can remember the moments that linked them together, the chance friendships that I intend to keep for a lifetime, forming structures like filaments of galaxies fanning out across the night sky.

Some of the most interesting connections are the ones that unexpectedly link clusters. A kid from high school who’s now a b-boy in Seoul. Or the person who sublet my room one summer in college and then ran into a classmate from grad school while they were visiting physics grad schools. The people who are connected that you had no idea knew each other.

And isn’t it funny how some of your best friends can be the smallest nodes?

On Gravity: When 3D is necessary


I’ve seen Gravity twice now. Like so many others, I found Alfonso Cuarón’s film to be one of the great moviegoing experiences of my life. It was visceral in a way I’d never felt sitting in a theater before, and it engaged my senses and my spatial awareness in a way that seems possible only in 3D.

I’d even go so far as to say that Gravity is the first real 3D movie, in the sense that it is a post-photography movie. Cuarón’s trademark long shots prove to be a perfect means of embracing a method of moviemaking not bound by the conventions rooted in the physical artifacts that previous generations of artists have used, like frames, cuts, and zooms.

A photograph is an image; it’s built around its own two-dimensionality, and the entire language and grammar of film is built around the fact that it is filmed as a series of photographs. So much of the aesthetic framework of a photograph is that it renders reality in an artificial way, by removing a dimension. To a photographer, that is not a restriction, but a possibility. It means that a person in the foreground can exist next to a person in the background within the frame, creating a dramatic or emotional subtext. The foreshortening of a receding line can be exploited to guide the eyes back to the subject. And so on.

Take Citizen Kane, the film that codified the language of cinema. In one scene, we see a humiliated Kane on the bad end of a business deal being forced to sign away most of his media empire.

Kane sighs, begins to reminisce, and walks into what appears to be a small room with windows behind him. But as he recedes into the frame he becomes smaller and smaller, until we realize that the windows are enormous! The set is evidently much deeper than our perspective suggested. And as Kane recedes into the depths, his image shrinks until he is dwarfed by the windows, reflecting his diminished status. It’s an optical illusion that conveys emotion—one that is made possible because of film’s two-dimensionality.

In another sequence, the camera pulls into a photograph on the wall until it subsumes the frame and then begins to move—the photograph becomes the film itself. (Trite, these days, perhaps, but what a shock it must have been to an audience in 1941.)

As another example of the power of two-dimensional imagery, you can take Cuarón’s own celebrated long shots in Children of Men. They were all about creating images, filling a frame with imagery and movement, and often incorporated real-life photographs like the scarecrows of Abu Ghraib.

All these aesthetic techniques are made possible because of the restriction of two-dimensionality. When this latest wave of 3D films started to build, many critics rejected them for this reason—by eliminating the restriction of flatness, they argued, you eliminated the possibilities that made film unique.

But to think that there aren’t also possibilities in the added dimension of depth that can be used creatively is pretty unimaginative (not to mention forgetful of theater, in which depth is always used to create tension; a soliloquy delivered from the back of a stage conveys a much different emotional state than one delivered from the front of the stage).

Gravity moves film into a realm where the classical rules of composition—those that date back to painting, the Renaissance, and an understanding of things like perspective and foreshortening—now require an extension, or a complete reformulation. (What is the 3-dimensional equivalent of the Rule of Thirds?)

Gravity is not filmed—it is filled. It takes place not within a frame but within a volume. It’s about space. Not outer space—but design space, mathematical space, the way an architect talks about space. An image can convey depth. But it cannot exist with depth. Gravity does. It happens in 3D.

And it should, because physics happens in 3 dimensions. Outer space doesn’t have a frame—there’s no up, down, left, right. Instead, you describe it with X, Y, and Z. Objects move, collide, and tumble about all three axes. Things go flying at the camera; in many other movies, it seems cheap, but here, it’s motivated by the physical ballet unfolding on screen before you. Your sense of physical intuition is engaged at all times, which is what makes this film so uniquely visceral.

Minor spoiler (highlight to view): For example, at one point, Sandra Bullock is tumbling past a spacecraft that she really needs to get a hold of. She flails her body, which makes no difference to her trajectory, of course. Then she throws an important object she’s been carrying up into space. When that happened, I thought, “No!” But then I saw what happened: her body was propelled down the frame in the opposite direction, towards the ship. Of course—Newton’s third law.
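The arithmetic behind that moment is just conservation of momentum. Here is a back-of-the-envelope sketch; the masses and throw speed are my own made-up numbers, not anything from the film:

```python
# Conservation of momentum in weightlessness: throwing mass one way
# propels you the other way. All numbers below are assumptions.
m_astronaut = 90.0  # kg, suited astronaut
m_object = 10.0     # kg, the thrown object
v_throw = 3.0       # m/s, throw velocity (positive = away from the ship)

# Total momentum stays zero: m_a * v_recoil + m_o * v_throw = 0
v_recoil = -m_object * v_throw / m_astronaut
print(f"{v_recoil:.2f} m/s")  # -0.33 m/s, i.e. a drift toward the ship
```

A tenth of her throwing speed isn’t much, but in orbit there’s no friction to bleed it away, so even a small recoil eventually carries her where she needs to go.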

Even though we don’t deal with this weightless regime of physics in daily life, we do have an intuition that goes, “Oh yeah, that’s what would happen in space!” And that’s the entire movie, this pleasure of seeing the unexpected yet physically inevitable. The soundless explosions, spacecraft spinning, this ballet of broken objects—it’s all validated by our subconscious sense of physics. I would love to hear a neuroscientist’s take, but I would suspect that Gravity engages the part of the brain that you use to be aware of your surroundings, to be aware of your own location and momentum, and where it is taking you.

And for that, your mind needs 3D.

The Error Cone and Visualizing Uncertainty

Tropical Storm Karen Advisory 3

The National Hurricane Center’s 3rd advisory issued for Tropical Storm Karen.

When we’re kids, one of the first subjects in which we learn the concepts of probability and uncertainty is the weather. It’s perhaps the only area of our life in which we all use probabilistic models on a daily basis to guide our decisions—decisions that can come back to bite us. It’s one thing when Nature decides to deliver on that 10% chance of rain; it can be catastrophic when a hurricane makes good on a 10% chance of landfall.

In a post last week, I wrote about conveying uncertainty in exoplanet detection—a matter of curiosity. But conveying uncertainty in a hurricane’s predicted track is a matter of public safety. So it would make sense for the National Hurricane Center to take great pains in communicating uncertainty to the public. Its method of visualizing it is known as the “error cone.”

Originating at the current location of the hurricane’s center, the cone expands along the predicted path to show how the forecast becomes more uncertain further out in time. To be specific, the cone is drawn so that, based on the accuracy of the past five years of forecasts, there is a 67% chance that the hurricane’s center remains inside it.

But there are some well-known issues with the error cone. For starters, it can give the false impression that it represents the extent of the storm itself, not the extent of its predicted track. Interpreted that way, it seems that the storm expands over time. Another is that by drawing a hard line in the sand at the 67% contour, it gives people just outside the cone a false sense of security, despite the fact that there’s a 1-in-6 chance the hurricane will deviate outside of the cone towards them. (If you’re wondering why it’s not 1-in-3, it’s that there’s also a 1-in-6 chance it goes outside the cone on the other side.)

The issue is that a hurricane’s predicted path isn’t a probability—it’s a probability distribution. Some places are more probable than others to lie along the path, but there’s no clear-cut boundary. Choosing an arbitrary 67% contour is convenient, but it’s an awful way to convey the full distribution of possible tracks.

A team of scientists led by Jonathan Cox of Clemson University recently published an alternative method of visualizing a hurricane’s predicted path that looks like this:

What they’ve done is simulate the hurricane’s path hundreds of times, rigging the simulation’s settings so that the resulting tracks have the same statistical distribution as the error cone. It’s a bit like loading dice. There’s an element of randomness in each track, but after hundreds of runs, the tracks cluster around the original, predicted track. The team also checks after each track to make sure the overall set remains consistent with the error cone: if too many tracks are falling outside it, the simulations are adjusted to produce more inside. It’s another application of Monte Carlo models.
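Here’s a toy one-dimensional sketch of that idea, tracking only cross-track error. Instead of the authors’ check-and-adjust loop, this simplified version calibrates the random walk directly so that about 67% of tracks stay inside a hypothetical cone; all the numbers are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

steps = 10
cone_radius = np.linspace(5.0, 120.0, steps)  # hypothetical half-width, n mi
n_tracks = 300

# Target spread at each step: the cone should contain ~67% of tracks, so
# for Gaussian offsets we want std = radius / z, where P(|Z| <= z) = 0.67.
z67 = 0.974
target_std = cone_radius / z67

# A random walk accumulates variance, so each step's increment variance
# is the difference of successive target variances.
step_var = np.diff(np.concatenate([[0.0], target_std**2]))

tracks = np.cumsum(
    rng.normal(0.0, np.sqrt(step_var), size=(n_tracks, steps)), axis=1
)

# Sanity check: roughly 67% of simulated tracks end inside the cone.
frac_inside = (np.abs(tracks[:, -1]) <= cone_radius[-1]).mean()
print(f"fraction ending inside the cone: {frac_inside:.2f}")
```

Each row of `tracks` is one plausible path: individually wiggly, collectively hugging the official forecast, with the same statistical footprint as the cone.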

The authors don’t claim to have evidence yet that this method leads to a more accurate public perception. (I can think of one possible objection: since the tracks must necessarily diverge, their decreasing density makes them appear fainter, which could give a false impression that the storm will weaken.) But they do report results from a small focus group in their study, and almost all participants preferred the new method: in addition to giving a better sense of the dynamic nature of hurricane tracks, it was simply more visually interesting.

Why auto racing is a geek’s dream sport

Hello, geek.

Hello, you science nerd, you technology aficionado, you analytical thinker, you.

Do you like watching sports?

I ask because there is a sport that will appeal to every aforementioned aspect of your personality, although judging from American TV viewing figures, you are probably not paying attention to it—even though its competitors are geeks, just like you. It is the pinnacle of automobile racing, the league known as Formula 1.

A Ferrari and a Red Bull scream around the streets of Singapore in 2011. Photo: Chuljae Lee / CC

When it comes to adrenaline, these cars have no match. They’re screaming, winged rockets of carbon fiber cradling a driver with no roof over his head at top speeds exceeding 200 mph. There are no fenders to protect the wheels and suspension as they strain under cornering loads of up to 5 Gs.

But despite that, forget the notion that modern racing is an exercise in pure sensation and blind bravery. Nor is it the gentlemanly pastime of European princes, hobbyist mechanics, and thrill-seeking rascals that it once was many decades ago. Today, more than any other sport, F1 is driven by design and data. It’s engineering. It’s technology. It’s physics soup for the scientific soul.

It’s no wonder that when Ron Howard began production on his 1970s-era F1 pic Rush, he described the world he found as a “combo of engineering brilliance and fearless courage [that] reminded me of people I met at NASA while directing Apollo 13.”

The workings of an F1 team are relentless and iterative, like a computer algorithm minimizing an objective function: for a race distance of 305 km, solve for the shortest time possible.

Watching a race on TV, it’s almost startling to hear the quantitative way in which the most competent commentators analyze the race as it unfolds—the cars are going over 200 mph and the guys on TV are calculating fuel loads and tire wear. It’s a bit like that epic moment in Apollo 13 when astronaut Jim Lovell is struggling to convert the gimbal angles from the stricken command module to the lifeboat lunar module and everyone in Mission Control whips out their slide rule.

To see a bit of this strategy and how F1’s geeks solve it, consider the quandary teams face when planning pit stops to change tires. A typical race might last between 50–80 laps, but the tires on an F1 car wear quickly, and each successive lap takes a tenth of a second longer on average, or more. Changing to fresh rubber means the drivers regain their speed, but a total of about 20 seconds is lost as the team swaps tires and the driver obeys a 100 km/hr speed limit on pit lane. (This is called the “bogey time” and is measured by the teams at each track.) So how often should a driver sacrifice those 20 seconds to gain back the most time on fresh rubber?

The math works out to be 1 to 3 times during a race, depending on the rate of wear, trading 20 to 60 seconds in the pits for the consistently quicker lap times on fresh tires.
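The arithmetic behind that range is easy to sketch. With made-up but plausible numbers (illustrative, not real F1 figures), a toy model in which lap time grows linearly with tire age looks like this:

```python
# Toy model: lap time grows linearly with tire age. Illustrative numbers:
# a 60-lap race, 0.1 s lost per lap of tire age, 20 s lost per pit stop.
RACE_LAPS = 60
DEG_PER_LAP = 0.1   # seconds slower per lap of tire age
PIT_LOSS = 20.0     # seconds lost per stop

def time_lost(stops):
    """Total time lost to tire wear plus pit stops, with equal stints."""
    stint = RACE_LAPS / (stops + 1)
    # Wear loss per stint: 0 + d + 2d + ... = d * stint*(stint-1)/2
    wear = DEG_PER_LAP * stint * (stint - 1) / 2
    return stops * PIT_LOSS + (stops + 1) * wear

for stops in range(4):
    print(f"{stops} stop(s): {time_lost(stops):6.1f} s lost")
# With these numbers, the minimum falls at 2 stops.
```

Never stopping costs a crushing 177 seconds to tire wear alone; two stops trades 40 seconds in the pits for much shorter, faster stints. Crank the degradation rate up or down and the optimal count slides between one and three.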

But when? Imagine you’re the leader of the race. If you time it too early, you may emerge from the pits in the middle of the swarming peloton of cars, fighting with them for position. That would cost you precious time. Perhaps you should wait a handful of laps and let the cars behind you pit first.

But wait. If they pit first, they will have fresh tires while you are running around on worn rubber, bleeding time each lap. By the time you pit, the other cars may have leapfrogged you as you sit in pit lane. (This tactic is called the “undercut”.)

Now perhaps, my geeky race strategist, you have determined the perfect laps on which to pit to minimize your time (and made sure that your team is free of moles who might leak your strategy—a very real danger). But here’s the thing: the other teams can calculate their numbers just as well as you can. What are they likely to do? Well, it depends. Does that change what you should do? Maybe.

No computer could find a single guaranteed-best solution to this kind of problem: there are simply too many variables, and the outcome hinges on choices the other teams haven’t yet made. Instead, the best method is to simulate tens of thousands of races, randomly trying as many different strategies as you can to see which ones result in you winning most often.

This kind of technique is called a Monte Carlo method, so named because every simulation is like a gambler’s roll of the dice. It was enabled by the rise of computers and pioneered on the primitive ENIAC. Today, it’s ubiquitous. It’s the same probabilistic math that Nate Silver uses to predict elections and that scientists use to forecast the paths of hurricanes—the rolling of multitudes of virtual dice to see which outcomes are most likely to come true, down which branches of reality the river of time will meet the least resistance. And it’s why the top F1 teams have squads of statisticians and data analysts working in Mission Control-style computer rooms back at their factories during a race, running their simulations, feeding their teams the latest model runs and dictating race strategy.
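To make that concrete, here is a heavily simplified Monte Carlo sketch, assuming a one-stop race, linear tire wear, and a single rival who picks a pit lap at random. Every number here is invented for illustration; a real strategy model has far more moving parts:

```python
import random

random.seed(1)

RACE_LAPS = 50
DEG = 0.12       # seconds lost per lap of tire age (illustrative)
PIT_LOSS = 21.0  # seconds lost per pit stop
BASE_LAP = 95.0  # baseline lap time, seconds

def race_time(pit_lap):
    """Total race time for a one-stop race pitting at `pit_lap`, with a
    little random noise standing in for traffic, mistakes, and weather."""
    total, tire_age = 0.0, 0
    for lap in range(1, RACE_LAPS + 1):
        total += BASE_LAP + DEG * tire_age + random.gauss(0, 0.3)
        tire_age += 1
        if lap == pit_lap:
            total += PIT_LOSS
            tire_age = 0
    return total

# Monte Carlo: for each candidate pit lap, simulate many races against a
# rival who pits on a random lap, and count how often we finish ahead.
candidates = range(10, 41, 5)
N = 2000
wins = {}
for ours in candidates:
    w = sum(race_time(ours) < race_time(random.randint(10, 40))
            for _ in range(N))
    wins[ours] = w / N

best = max(wins, key=wins.get)
print({lap: round(p, 2) for lap, p in wins.items()})
print("best candidate pit lap:", best)
```

Even in this cartoon version, the shape of the answer emerges from brute repetition rather than a closed-form solution, which is exactly the appeal of the method when the real problem has dozens of interacting variables.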

So what does this mean for you, dear geek? For one, the raw timing data is available to view live during races. Observing the lap times and the gaps between cars will allow you to see strategies unfold faster than the TV announcers can comment on them. If you want to go even further, there is an open-source API project to intercept the data, allowing you to write your own code and make your own predictions.

F1 isn’t just about watching a competition—it also gives fans the chance to experience the joy of watching an outcome emerge from a sea of data. That’s something every geek can appreciate.

The FAP trap

In this artist’s rendering, the exoplanet Alpha Centauri Bb looms in the foreground, with the Alpha Centauri binary system in the background.

Almost one year ago, a team of astronomers announced a detection of a rocky exoplanet right next door in the star system Alpha Centauri, the closest to our own solar system. Yes, Alpha Centauri—that near-mythical system that has such a hold on our imagination, its fictional appearances have their own Wikipedia article.

Ok, ok, so this planet, named Alpha Centauri Bb, wasn’t actually habitable. It was too close to its star, more like a scorched, oversized Mercury than Earth. But the fact that a small rocky planet was right next door boded well for the likelihood that rocky planets were everywhere. Debra Fischer, a Yale exoplanet researcher, told the New York Times it was the “story of the century.” If Joe Biden were an astronomer, he’d have called it a big fucking deal.

Except…the detection wasn’t quite a slam dunk. The team, based in Geneva and led by astronomer Xavier Dumusque, found the planet by detecting the wobble that its gravity exerts on its star. But that wobble was so small that its signal was buried deep, deep within the noise of the data. They had to attempt to control for 23 different effects that could have thrown off their measurements—things like the star’s pulsations and magnetic spots. It was only after stripping them away, one by one, that a signal started to emerge. Here’s what it looked like:


Dumusque et al. (2012), Figure 5

All those scattered little dots that seem almost random—that’s the post-analysis data. But the red dots are what you get when you group data points that are close together and average them. That’s how the team was able to recover their signal. They reported that the odds that the data in the plot could have been a fluke of nature (a statistic called the False Alarm Probability, or FAP) were pretty slim: one in a thousand.
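It’s worth pausing on what a False Alarm Probability actually measures. Here is a toy illustration in Python, with a simple permutation test standing in for the team’s far more sophisticated statistics. The data are synthetic, and note that the test takes the processed data as a given, which is exactly the subtlety at issue here:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy "radial velocity" data: a weak sinusoid buried in noise.
n = 200
phase = rng.uniform(0, 1, n)
signal_amp = 0.5
noise_sigma = 2.0
rv = signal_amp * np.sin(2 * np.pi * phase) + rng.normal(0, noise_sigma, n)

def binned_amplitude(phase, rv, nbins=10):
    """Peak-to-peak spread of the phase-binned means (our 'signal strength')."""
    bins = np.minimum((phase * nbins).astype(int), nbins - 1)
    means = np.array([rv[bins == b].mean() for b in range(nbins)])
    return means.max() - means.min()

observed = binned_amplitude(phase, rv)

# False Alarm Probability: how often does pure noise -- the same values
# with their phases scrambled -- look at least this strong?
n_trials = 2000
count = sum(binned_amplitude(rng.permutation(phase), rv) >= observed
            for _ in range(n_trials))
fap = count / n_trials
print(f"observed amplitude: {observed:.2f}, FAP = {fap:.3f}")
```

Crucially, this number only says how unlikely the final, cleaned-up data would be if there were no signal. It says nothing about whether the cleanup itself was done correctly.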

This was a key point that many journalists picked up on, quoting the authors’ repetition of it at a press conference to bolster the case for the planet. To wit:

Mike Wall: “Udry, however, said that the team’s statistical analyses show a ‘false alarm probability’ of just one in 1,000 — meaning there’s a 99.9 percent chance that the planet exists.”

Ian Sample in The Guardian: “The astronomers told a press briefing that the chance of their discovery being false was about one in 1,000…”

And Camille Carlisle in Sky & Telescope: “Study coauthor Stéphane Udry (Geneva Observatory) noted in a press conference earlier this week that there is one chance in 1,000 that the signal his team sees is a fluke.”

Well, that sounded like pretty good odds to me. That is, until early this summer, when exoplanet astronomer Artie Hatzes published a paper in which he did his own analysis of the same data, and found nothing. In fact, he concluded that if the planet were there, he should have detected it with 99% confidence.

So hang on a second. According to the Geneva team, there is only a 1/1000 chance they’re wrong. But Hatzes finds the opposite, and says there’s only a 1/100 chance that the Geneva team is right. So who’s “correct”? What do those numbers even mean?

So I asked Debra Fischer. Her answer confirmed my thinking. That False Alarm Probability of 1/1000? That’s the probability that the data in that plot is a fluke—but remember, that’s the data after all of their analysis. In other words, the 1/1000 figure holds only if you assume that their analysis of those 23 parameters is absolutely perfect. It’s a comparison of the signal against the flukey nature of reality, but says nothing about the confidence in the analysis that led to that signal in the first place!

Yikes. That’s a distinction with a big difference, and one that got very little play in the media. (And it’s a point I didn’t call out when I wrote about Hatzes’s paper for Sky & Telescope.)

Now, that doesn’t mean the analysis is junk. Dumusque and his team weren’t trying to hide anything about their analysis—quite the opposite, in fact. They released their data publicly, inviting scrutiny; that’s what enabled Hatzes to do his independent analysis. And Dumusque’s team did check, as part of their original study, whether their analysis might introduce a false signal, and concluded it did not. So Alpha Centauri Bb is not dead—not by a long shot. Both Dumusque and Fischer are currently analyzing fresh observations to try to get that slam-dunk confirmation. (Peter Edmonds has written an excellent blog post taking a look at the whole saga.)

But it does mean that it’s difficult to quantify how convincing the data are as they stand, and that the FAP is not the entire story. For a journalist, that is difficult to explain to the public. It’s yet another example of how tricky it can be to communicate probability and uncertainty—both from scientists to journalists, and from journalists to the public. That False Alarm Probability might be alluringly small, but we better make sure we know what it means.

Now, this may seem like an esoteric case. Alpha Centauri Bb winking out of existence would be a big disappointment, but not, say, hazardous to anyone’s health. But it’s not hard to imagine cases, like a hurricane forecast, where a misunderstood probability would be. Perhaps the biggest shift wrought by our era of Big Data isn’t the sheer amount of data but that the nature of reality and our predictions of the future are increasingly described in probabilistic terms—in everything from election results to climate change. When we communicate this, we all have to work hard to get it right.