Half of the internet uploads photos these days, it seems


The creative class keeps on rising, and expanding.

For the first time, over half of the US adult internet population has posted a photo or video, according to the latest data from the Pew Internet & American Life Project.

Pew surveyed 1,000 adults and found that 54% of adult internet users have posted some of their own photos or video to the web. Last year, that figure was 46%, an eight-percentage-point jump, which suggests that roughly 8% of adults on the internet posted their own photos or video for the first time within the past year.


Almost as many are content curators—47% have shared a photo or video that someone else uploaded. I checked the crosstabs for how many of those were cat macros, but sadly, Pew didn’t ask that question.

“Pictures document life from a special angle,” stated Maeve Duggan, one of the authors of the study, with no apparent trace of irony.

For the first time, Pew did ask adult internet users whether they used certain apps. It turns out 18% use Instagram and 9% use Snapchat. There’s no data on teens, though, among whom Snapchat usage ought to be substantially higher.

Another caveat to those numbers: Pew didn’t ask whether the people “using” Instagram and Snapchat were actually posting pictures to the services or just looking at the feeds of the people they follow.

With the rise of smartphones (only 4% in the survey didn’t know what the term meant), it’s no surprise to find that everyone is a photographer these days. Even on pro-leaning Flickr, the most popular camera is an iPhone—and so is the second most popular, and the third.

[Chart: the most popular cameras on Flickr]

The margin of error for Pew’s results ranged from ±3.7 to ±4.0 percentage points.

Miracle of Science (Cambridge)


So a science writer walks into a bar. He’s by himself.

Actually, that’s not a joke, that was just my Monday night.

The bar is called Miracle of Science, close to the MIT campus in Cambridge. I’ve been drawn in by the bar’s most prominent feature: its enormous menu, in the form of a periodic table. It’s hand-drawn on a chalkboard covering an entire wall. In the top corner of the table, where hydrogen would be, sits the most fundamental element of a bar menu: Hb for hamburger; below it, Cb for cheeseburger; Vb for, well, you get the idea. Yes, just like the periodic table, the menu is grouped into columns and color-coded based on each item’s properties. Br for brownie, not bromine.

Miracle of Science opened in 1991; the menu was designed in 2002 by a bartender who has long since moved on. His initials, RR, are still visible in the corner.

The Ronie Burger is the best thing on the menu, according to a bartender and the guy sitting next to me at the bar. “It comes with pepperjack cheese and jalapeños actually stuffed inside the patty,” he tells me. It lives up to expectations, spicy enough to make me break out sweating. The skillet home fries and salsa that come with it are a nice touch.

Two years after the menu went up, Popular Science named it one of the “top nerd bars” in the nation. Back then, PopSci said the tables were “surrounded by microscopes and other lab paraphernalia.” But today, the decor is modern, minimalist, and trendy. Game 5 of the World Series is on TV, but it’s on mute. The ESPN logo is burned into the corner of the screen. A 90s mix is playing. It’s a young professional crowd drawn from nearby tech firms—girls with big hipster glasses, guys huddled over laptops and their drinks.

Despite the name, there’s not much more of a science theme beyond the menu. But if you want real science with your drinks, you can go next door to Middlesex Lounge, run by the same owners. Middlesex actually hosts nerdy events, including Boston’s monthly Nerd Nite and science cafes hosted by WGBH’s long-running science program Nova. Together, these two bars form a decent scientific core in the Central Square scene.

Matthew Curtis and Chris Lutes, the co-owners, also own Audubon Circle, Cambridge 1, and Tory Row, all similarly decorated trendy pubs. But they’re not too interested in advertising, says a bartender who declines to tell me his name and pleads with me: “Keep it unofficial, ok?”

A Strange Lonely Planet


An artist’s impression of the planet-sized L dwarf PSO J318.5-22. (MPIA/V. Ch. Quetz)

Earlier this week, a press release hit my inbox that made me say, “Ooooooh!” out loud. Its headline was: “A Strange Lonely Planet Found without a Star” and it came with an image.

“Oooooh!” I said again. An image of a planet without a star? Free-floating through the lifeless void? My imagination rumbled to life and started to jump to conclusions.

You see, I’ve wanted for a while now to write a very sad science fiction novel about such a starless scenario. This dream of mine has been motivated by a real science problem: planetary migration.

In the early days of exoplanet studies (way back in the mid-1990s!), the very first planets to be discovered were known as hot Jupiters—giant gaseous planets closer to their host stars than Mercury is to our Sun, with temperatures in the thousands of degrees (Centigrade, Fahrenheit, or Kelvin—take your pick). Their existence continues to be a puzzle, because they could not have formed where they are. Both a star and its surrounding planets form when gravity pulls a cloud of gas together into clumps. But in these cases, the energy radiating from the young star should have blown the gas away before it could coalesce into a planet; only small, rocky planets should be able to form that close in. The logical theory is that these hot Jupiters had to form much farther out, where it was colder—like where our own Jupiter is in our solar system—and then somehow migrate in.


The “planet” imaged by the Pan-STARRS1 telescope. It was discovered by a US-German team, with follow-up observations from Mauna Kea. (N. Metcalfe & Pan-STARRS 1 Science Consortium)

Multiple theories have been proposed in which the laws of physics conspire to do just that. In one, the newly formed planet slowly spirals in, losing momentum as it plows through the disk of gas surrounding the young star; in another, the gravitational presence of a nearby companion star perturbs the planet, triggering a wild, eccentric orbit that ends next to the star.

But here’s the rub: either way, if a Jupiter comes barreling in through its solar system, its gravity will likely throw the other planets out! Its enormous mass would scatter them like a bowling ball, slingshotting them into the dark, vast coldness of space faster than a tumbling Sandra Bullock. In fact, based on the number of free-floating planets astronomers have already detected through gravitational microlensing, we expect billions of planets to be adrift in space, away from their stars.

What a great end-of-the-world sci-fi story that would be! A helpless population, doomed by the inexorable dance of physics! The Earth becomes both our interstellar starship and our coffin! And people look for strength and hope in a world where every day is just a little bit darker and colder than the one before, without end. An art-house apocalypse—not with a bang, but the saddest, coldest whimper.

At first, I thought this discovery was a direct image of just such an ejected planet! But then I read the paper, and “planet” isn’t quite how the authors describe it in their title: “A Free-Floating Planetary-Mass Analog to Directly Imaged Young Gas-Giant Planets.”

In other words, it very well might not be a planet—at least, not based on how we use the word planet in everyday life—but it’s like a planet. The team seems to think it probably formed like a star, based on the fact that it’s moving in the same direction as other nearby stars, whereas an ejected planet could be going any which way. It just happened to end up so small that it could be mistaken for a planet! What a bummer.

Actually, outside of my imagination, it’s not really a bummer—it’s a neat opportunity. Actual exoplanets are very difficult to study directly; they’re so close to their stars they get lost in the glare. To the extent that these planet-sized objects actually resemble planets, they give us a chance to nail down their physics unimpeded by their pesky host star. Judging by attributes like its mass, color, and brightness, this “planet” does a fair impersonation, but we still don’t know if objects like these form in exactly the same way as planets.

Regardless, I’ll keep dreaming about writing my novel…

This is my homework assignment.


The story of my life, more or less.

Yes, this plot of Skittles is my homework—for an online class I’m taking through Coursera called Social Network Analysis. This was our first assignment: using a couple of free online tools to download your Facebook friends data and visualize your network of friends.

The ease with which I was able to complete the assignment made me thankful for how (relatively) accessible network science is as a field. Thanks to social network APIs and open-source software, the tools you need to analyze your own social data are easily available. Consider it the social networking version of 23andme—personal memomics, if you will. (And thanks to the NSA, awareness has gone up, too!)

In the graph of my network above, each circle is a friend of mine (known as a “node” in the parlance of graph theory) and each link (or “edge”) between nodes indicates that they’re Facebook friends. Not all my friends are connected to all my other friends—there were some free-floating clusters. But for the sake of clarity, I’ve shown only the largest connected component (LCC) of my graph.

The spacing of the nodes is determined by an algorithm based on simple physics: each node repels every other node, like magnets, while each link acts like a spring, tugging groups of people together based on their common friendships. Another algorithm detects these clusters, draws boundaries, and assigns them colors. (I went through and annotated some of them.)
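If you’d rather script this than click around in Gephi, here’s a minimal sketch of the same two steps in Python with networkx. The spring layout and greedy modularity routines aren’t Gephi’s exact algorithms, but they follow the same idea, and the edge-list file name is a hypothetical stand-in for whatever your export tool produces.

```python
# Minimal sketch: force-directed layout plus cluster detection with networkx.
# "friends.csv" is a hypothetical edge list, one "name1,name2" friendship per line.
import matplotlib.pyplot as plt
import networkx as nx
from networkx.algorithms import community

G = nx.read_edgelist("friends.csv", delimiter=",")

# Keep only the largest connected component, as in the graph above.
G = G.subgraph(max(nx.connected_components(G), key=len)).copy()

# Spring layout: every node repels every other node; edges pull like springs.
pos = nx.spring_layout(G, seed=42)

# Detect clusters of mutual friends and assign each one a color.
clusters = community.greedy_modularity_communities(G)
color_of = {node: i for i, cluster in enumerate(clusters) for node in cluster}

nx.draw(G, pos,
        node_color=[color_of[n] for n in G.nodes()],
        node_size=40, width=0.3, cmap=plt.cm.tab20)
plt.show()
```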

I want to emphasize that this is not a so-called ego network with me at the center—I’m not in the graph at all! Each link between nodes is a direct friendship between those individuals. It shows how my friends are connected to each other, not how connected I am to them. In other words, this is a disclaimer to all my friends out there: you’re all awesome, and how central you are in the chart has nothing to do with how important you are to me!

So what can network science tell me? The analysis tool, called Gephi, calculates several standard network metrics. For example, the “average path length” across the network is 5.2 hops. In other words, there are, on average, just over four intermediaries, or degrees of separation, between any two people in the network, indicating a “smaller” network than the famous “six degrees of separation” maxim. This pattern holds for Facebook as a whole: the company’s data team reported in 2011 that the entire network had an average of just 3.74 degrees of separation, and that the figure was decreasing each year. The world is shrinking.
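And if you want to check the path-length number for your own graph, it’s one call in networkx. This sketch reuses the graph G from the snippet above and counts degrees of separation as intermediaries rather than hops, matching the convention used here.

```python
import networkx as nx

# Average shortest path length, in hops (edges), over the connected graph G.
avg_hops = nx.average_shortest_path_length(G)

# Degrees of separation, counted as intermediaries: one less than the hops.
print(f"average path length: {avg_hops:.2f} hops")
print(f"degrees of separation: {avg_hops - 1:.2f}")
```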

But for me, the most fascinating aspect wasn’t the numbers, but simply zooming in and browsing through my graph. As a record of my social life, it’s a strange thing. It’s all there—my friendships, my relationships, the bridges I’ve burned. By tracing edges to their nodes, I can remember the moments that linked them together, the chance friendships that I intend to keep for a lifetime, forming structures like filaments of galaxies fanning out across the night sky.

Some of the most interesting connections are the ones that unexpectedly link clusters. A kid from high school who’s now a b-boy in Seoul. Or the person who sublet my room one summer in college and then ran into one of my grad school classmates while they were both visiting physics grad schools. The people who turn out to be connected when you had no idea they knew each other.

And isn’t it funny how some of your best friends can be the smallest nodes?

On Gravity: When 3D is necessary


I’ve seen Gravity twice now. Like so many others, I found Alfonso Cuarón’s film to be one of the great moviegoing experiences of my life. It was visceral in a way I’d never felt sitting in a theater before, and it engaged my senses and my spatial awareness in a way that seems possible only in 3D.

I’d even go so far as to say that Gravity is the first real 3D movie, in the sense that it is a post-photography movie. Cuarón’s trademark long shots prove to be a perfect means of embracing a method of moviemaking not bound by the conventions rooted in the physical artifacts that previous generations of artists have used, like frames, cuts, and zooms.

A photograph is an image; it’s built around its own two-dimensionality, and the entire language and grammar of film is built around the fact that it is filmed as a series of photographs. So much of the aesthetic framework of a photograph comes from the fact that it renders reality in an artificial way, by removing a dimension. To a photographer, that is not a restriction but a possibility. It means that a person in the foreground can exist next to a person in the background within the frame, creating a dramatic or emotional subtext. The foreshortening of a receding line can be exploited to guide the eyes back to the subject. And so on.

Take Citizen Kane, the film that codified the language of cinema. In one scene, we see a humiliated Kane on the bad end of a business deal being forced to sign away most of his media empire.

Kane sighs, begins to reminisce, and walks into what appears to be a small room with windows behind him. But as he recedes into the frame he becomes smaller and smaller, until we realize that the windows are enormous; the set is far deeper than our perspective suggested. By the time Kane reaches the depths of the room, he is dwarfed by the windows, reflecting his diminished status. It’s an optical illusion that conveys emotion—one made possible by film’s two-dimensionality.

In another sequence, the camera pulls into a photograph on the wall until it subsumes the frame and then begins to move—the photograph becomes the film itself. (Trite, these days, perhaps, but what a shock it must have been to an audience in 1941.)

As another example of the power of two-dimensional imagery, you can take Cuarón’s own celebrated long shots in Children of Men. They were all about creating images, filling a frame with imagery and movement, and often incorporated real-life photographs like the scarecrows of Abu Ghraib.

All these aesthetic techniques are made possible because of the restriction of two-dimensionality. When this latest wave of 3D films started to build, many critics rejected them for this reason—by eliminating the restriction of flatness, they argued, you eliminated the possibilities that made film unique.

But to think that there aren’t also possibilities in the added dimension of depth that can be used creatively is pretty unimaginative (not to mention forgetful of theater, in which depth is always used to create tension; a soliloquy delivered from the back of a stage conveys a much different emotional state than one delivered from the front of the stage).

Gravity moves film into a realm where the classical rules of composition—those that date back to painting, the Renaissance, and an understanding of things like perspective and foreshortening—now require an extension, or a complete reformulation. (What is the 3-dimensional equivalent of the Rule of Thirds?)

Gravity is not filmed—it is filled. It takes place not within a frame but within a volume. It’s about space. Not outer space—but design space, mathematical space, the way an architect talks about space. An image can convey depth. But it cannot exist with depth. Gravity does. It happens in 3D.

And it should, because physics happens in three dimensions. Outer space doesn’t have a frame—there’s no up, down, left, or right. Instead, you describe it with X, Y, and Z. Objects move, collide, and tumble about all three axes. Things go flying at the camera; in many other movies, that seems cheap, but here, it’s motivated by the physical ballet unfolding on screen before you. Your sense of physical intuition is engaged at all times, which is what makes this film so uniquely visceral.

Minor spoiler: For example, at one point, Sandra Bullock is tumbling past a spacecraft that she really needs to get a hold of. She flails her body, which makes no difference to her trajectory, of course. Then she throws an important object she’s been carrying up into space. When that happened, I thought, “No!” But then I saw what happened: her body was propelled down the frame in the opposite direction, towards the ship. Of course—Newton’s third law.
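For a back-of-the-envelope sense of that push, conservation of momentum (the other face of Newton’s third law) gives the recoil speed. The masses and throw speed below are invented, purely for illustration.

```python
# Toy momentum calculation: throwing mass one way propels you the other way.
# All numbers are hypothetical, chosen only to illustrate the effect.
astronaut_mass = 100.0  # kg, astronaut plus suit
object_mass = 5.0       # kg, the object she throws
throw_speed = 4.0       # m/s, relative to her

# Total momentum stays zero: m_obj * v_obj = m_astronaut * v_recoil
recoil_speed = object_mass * throw_speed / astronaut_mass
print(f"recoil speed toward the ship: {recoil_speed:.2f} m/s")  # ~0.20 m/s
```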

Even though we don’t deal with this weightless regime of physics in daily life, we do have an intuition that goes, “Oh yeah, that’s what would happen in space!” And that’s the entire movie, this pleasure of seeing the unexpected yet physically inevitable. The soundless explosions, spacecraft spinning, this ballet of broken objects—it’s all validated by our subconscious sense of physics. I would love to hear a neuroscientist’s take, but I would suspect that Gravity engages the part of the brain that you use to be aware of your surroundings, to be aware of your own location and momentum, and where it is taking you.

And for that, your mind needs 3D.

The Error Cone and Visualizing Uncertainty


The National Hurricane Center’s 3rd advisory issued for Tropical Storm Karen.

When we’re kids, one of the first subjects in which we learn the concepts of probability and uncertainty is the weather. It’s perhaps the only area of our lives in which we all use probabilistic models on a daily basis to guide our decisions—decisions that can come back to bite us. It’s one thing when Nature decides to deliver on that 10% chance of rain; it can be catastrophic when a hurricane makes good on a 10% chance of landfall.

In a post last week, I wrote about conveying uncertainty in exoplanet detection—a matter of curiosity. But conveying uncertainty in a hurricane’s predicted track is a matter of public safety. So it would make sense for the National Hurricane Center to take great pains in communicating uncertainty to the public. Its method of visualizing it is known as the “error cone.”

Originating at the current location of the hurricane’s center, the cone expands along the predicted path to show how the forecast becomes more uncertain further out in time. To be specific, the cone is sized so that, based on the accuracy of the past five years of forecasts, there is a 67% chance that the hurricane’s center stays inside it.

But there are some well-known issues with the error cone. For starters, it can give the false impression that it represents the extent of the storm itself, rather than the extent of its predicted track; interpreted that way, it seems that the storm expands over time. Another is that by drawing a hard line in the sand at the 67% contour, it gives people just outside the cone a false sense of security, despite the fact that there’s a 1-in-6 chance the hurricane will deviate outside the cone towards them. (If you’re wondering why it’s not 1-in-3, it’s because there’s also a 1-in-6 chance it goes outside the cone on the other side.)

The issue is that a hurricane’s predicted path isn’t a probability—it’s a probability distribution. Some places are more probable than others to lie along the path, but there’s no clear-cut boundary. Choosing an arbitrary 67% contour is convenient, but it’s an awful way to convey the full distribution of possible tracks.

A team of scientists led by Jonathan Cox of Clemson University recently published an alternative method of visualizing a hurricane’s predicted path: a swarm of individual simulated tracks rather than a single cone.

What they’ve done is simulate the hurricane’s path hundreds of times, but with the simulation’s settings rigged so that the tracks follow the same statistical distribution as the error cone. It’s a bit like loading dice. There’s an element of randomness in each track, but after generating hundreds of tracks, they cluster around the original, predicted track. The authors also check after each track to make sure the overall set stays consistent with the error cone; if too many tracks are falling outside it, they adjust the simulation to produce more inside. It’s another application of Monte Carlo methods.
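Here’s a toy sketch of that idea, not the authors’ actual model: generate a few hundred random-walk tracks around a made-up predicted path, scaling the cross-track wobble so that roughly two-thirds of the tracks stay inside the cone at any given forecast time.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical forecast: the predicted track heads due north, one point per step.
n_steps = 20
along_track = np.arange(1, n_steps + 1) * 50.0      # km travelled north
cone_halfwidth = 20.0 + 15.0 * np.arange(n_steps)   # km; the cone widens over time

# Treat the cone edge as roughly the 1-sigma cross-track error, so about
# two-thirds of simulated tracks fall inside it at any given forecast time.
tracks = []
for _ in range(300):
    walk = np.cumsum(rng.normal(size=n_steps))       # correlated random wobble
    walk /= np.sqrt(np.arange(1, n_steps + 1))       # unit spread at every step
    cross_track = walk * cone_halfwidth              # scale the spread to the cone
    tracks.append(np.column_stack([cross_track, along_track]))

# Sanity check, in the spirit of the authors' adjustment step: what fraction of
# tracks ends up inside the cone at the final forecast time? Should be near 2/3.
final_offsets = np.array([t[-1, 0] for t in tracks])
print(f"inside the cone at the last step: "
      f"{np.mean(np.abs(final_offsets) <= cone_halfwidth[-1]):.2f}")
```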

The authors don’t claim to have evidence yet that this method leads to a more accurate public perception. (I can think of one possible objection: since the tracks must necessarily diverge, the decreased density makes them appear fainter, which could give the false impression that the storm will weaken.) But they do report results from a small focus group in their study: almost all of the participants preferred the new method, both because it gave a better sense of the dynamic nature of hurricane tracks and because it was simply more visually interesting.

Why auto racing is a geek’s dream sport

Hello, geek.

Hello, you science nerd, you technology aficionado, you analytical thinker, you.

Do you like watching sports?

I ask because there is a sport that will appeal to every aforementioned aspect of your personality, although judging from American TV viewing figures, you are probably not paying attention to it—even though its competitors are geeks, just like you. It is the pinnacle of automobile racing, the league known as Formula 1.

A Ferrari and a Red Bull scream around the streets of Singapore in 2011. Photo: Chuljae Lee / CC

When it comes to adrenaline, these cars have no match. They’re screaming, winged rockets of carbon fiber cradling a driver with no roof over his head at top speeds exceeding 200 mph. There are no fenders to protect the wheels and suspension as they strain under the 5 g of force these cars generate hurtling around corners.

But despite that, forget the notion that modern racing is an exercise in pure sensation and blind bravery. Nor is it the gentlemanly pastime of European princes, hobbyist mechanics, and thrill-seeking rascals that it once was many decades ago. Today, more than any other sport, F1 is driven by design and data. It’s engineering. It’s technology. It’s physics soup for the scientific soul.

It’s no wonder that when Ron Howard began production on his 1970s-era F1 pic Rush, he described the world he found as a “combo of engineering brilliance and fearless courage [that] reminded me of people I met at NASA while directing Apollo 13.”

The workings of an F1 team are relentless, iterative, like a computer algorithm designed to obtain a minimum value: for a race distance of 305 km, solve for the shortest time possible.

Watching a race on TV, it’s almost startling to hear the quantitative way in which the most competent commentators analyze the race as it unfolds—the cars are going over 200 mph and the guys on TV are calculating fuel loads and tire wear. It’s a bit like that epic moment in Apollo 13 when astronaut Jim Lovell is struggling to convert the gimbal angles from the stricken command module to the lifeboat lunar module and everyone in Mission Control whips out their slide rule.

To see a bit of this strategy and how F1’s geeks solve it, consider the quandary teams face when planning pit stops to change tires. A typical race lasts between 50 and 80 laps, but the tires on an F1 car wear quickly, and each successive lap takes a tenth of a second longer on average, or more. Changing to fresh rubber means the drivers regain their speed, but a total of about 20 seconds is lost as the team swaps tires and the driver obeys a 100 km/h speed limit on pit lane. (This is called the “bogey time” and is measured by the teams at each track.) So how often should a driver sacrifice those 20 seconds to gain back the most time on fresh rubber?

The math typically works out to one to three stops during a race, depending on the rate of wear, trading 20 to 60 seconds in the pits for consistently quicker lap times on fresh tires.
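A toy version of that trade-off, using invented numbers for the pit loss and the degradation rate, looks something like this:

```python
# Toy pit-stop arithmetic with hypothetical numbers: total time lost over a race
# to tire degradation plus pit stops, assuming stops split the race into equal stints.
RACE_LAPS = 60
PIT_LOSS = 20.0      # seconds lost per stop (the "bogey time")
DEGRADATION = 0.1    # each lap on a set of tires is this much slower than the last

def time_lost(n_stops):
    stint = RACE_LAPS / (n_stops + 1)
    # Within each stint the laps slow by 0, d, 2d, ...; sum that over every stint.
    degradation_loss = (n_stops + 1) * DEGRADATION * stint * (stint - 1) / 2
    return degradation_loss + n_stops * PIT_LOSS

for n in range(4):
    print(f"{n} stop(s): {time_lost(n):6.1f} s slower than ideal")
# With these numbers, one to three stops all beat staying out for the whole race.
```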

But when? Imagine you’re the leader of the race. If you time it too early, you may emerge from the pits in the middle of the swarming peloton of cars, fighting with them for position. That would cost you precious time. Perhaps you should wait a handful of laps and let the cars behind you pit first.

But wait. If they pit first, they will have fresh tires while you are running around on worn rubber, bleeding time each lap. By the time you pit, the other cars may have leapfrogged you as you sit in pit lane. (This tactic is called the “undercut”.)

Now perhaps, my geeky race strategist, you have determined the perfect laps on which to pit to minimize your time (and made sure that your team is free of moles who might leak your strategy—a very real danger). But here’s the thing: the other teams can calculate their numbers just as well as you can. What are they likely to do? Well, it depends. Does that change what you should do? Maybe.

No computer could find a single perfect solution for this kind of problem; there’s no neat formula to solve, and there are simply too many variables, not least what the other teams will decide to do. Instead, the best method is to simulate tens of thousands of races, randomly trying as many different strategies as you can to see which ones result in you winning the race the most often.

This kind of technique is called a Monte Carlo method, named because every simulation is like a gambler’s roll of the dice. It was enabled by the rise of computers and pioneered on the primitive ENIAC. Today, it’s ubiquitous. It’s the same probabilistic math that Nate Silver uses to predict elections and that scientists use to forecast the paths of hurricanes—the rolling of multitudes of virtual dice to see which outcomes are most likely to come true, down which branches of reality the river of time will meet the least resistance. And it’s why the top F1 teams have squads of statisticians and data analysts working in Mission Control-style computer rooms back in their factories during a race, conducting their simulations, feeding their teams the latest model runs and dictating race strategy.
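As an illustration only, nothing like a real team’s model and with every parameter invented, here is what a bare-bones Monte Carlo strategy simulation might look like: race a rival thousands of times with a random amount of time lost in traffic after each stop, and count how often each choice of pit lap wins.

```python
import random

RACE_LAPS = 60
BASE_LAP = 90.0      # seconds per lap on fresh tires (hypothetical)
DEGRADATION = 0.1    # seconds lost per lap of tire age (hypothetical)
PIT_LOSS = 20.0      # seconds lost for a pit stop (hypothetical)

def race_time(pit_lap, traffic_penalty):
    """Total race time for a one-stop race, pitting at the end of pit_lap."""
    total, tire_age = 0.0, 0
    for lap in range(1, RACE_LAPS + 1):
        total += BASE_LAP + DEGRADATION * tire_age
        tire_age += 1
        if lap == pit_lap:
            total += PIT_LOSS + traffic_penalty  # random time lost in traffic
            tire_age = 0
    return total

def win_probability(my_pit_lap, rival_pit_lap=30, n_sims=10_000):
    wins = 0
    for _ in range(n_sims):
        mine = race_time(my_pit_lap, random.uniform(0.0, 6.0))
        rival = race_time(rival_pit_lap, random.uniform(0.0, 6.0))
        wins += mine < rival
    return wins / n_sims

for lap in (20, 25, 30, 35, 40):
    print(f"pit on lap {lap}: win probability {win_probability(lap):.2f}")
```

With these made-up numbers, pitting near the middle of the race wins most often; the point is the method, not the particular answer.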

So what does this mean for you, dear geek? For one, the raw timing data is available to view at Formula1.com during races. Observing the lap times and the gaps between cars will allow you to see strategies unfold faster than the TV announcers can comment on them. If you want to go even further, there is an open source API project to intercept the data, allowing you to write your own code and make your own predictions.

F1 isn’t just about watching a competition—it also gives fans the chance to experience the joy of watching an outcome emerge from a sea of data. That’s something every geek can appreciate.