How long should we expect Google Keep to last?

Writing in the Guardian, Charles Arthur has totted up the lifespans of 39 products and services that Google has killed off for want of public interest. One interesting finding is that Google is becoming less patient with its less popular progeny, killing off under-performing products at an accelerating rate. The occasion is the launch of Google Keep, a networked note-taking app which has the potential to become quite central to your workflow -- and to be quite disruptive if Google kills it, much like Google Reader, which is scheduled for impending switch-off.

So if you want to know when Google Keep, opened for business on 21 March 2013, will probably shut -- again, assuming Google decides it's just not working -- then the mean suggests the answer is: 18 March 2017. That's about long enough for you to cram lots of information that you might rely on into it; and also long enough for Google to discover that, well, people aren't using it to the extent that it hoped. Much the same as happened with Knol (lifespan: 1,377 days, from 23 July 2008 to 30 April 2012), or Wave (1,095 days, from May 2009 to 30 April 2012) or of course Reader (2,824 days, from 7 October 2005 to 1 July 2013).

If you want to play around further with the numbers: assume that closures are normally distributed around the mean, and that Google does eventually shut Google Keep. Then there's a 68% chance -- one standard deviation either side of the mean -- that the closure will occur between April 2015 and February 2019.
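Those figures are easy to check for yourself. Here's a minimal sketch in Python using the dates from the article; the 700-day standard deviation is my own back-solved assumption, chosen so that one standard deviation either side of the mean shutdown date reproduces the April 2015 to February 2019 window the Guardian quotes:

```python
from datetime import date, timedelta

# Dates from the article. The standard deviation is an assumption
# inferred from the quoted 68% window, not the Guardian's own figure.
launch = date(2013, 3, 21)
mean_shutdown = date(2017, 3, 18)
std_dev = timedelta(days=700)  # assumed, roughly 23 months

# Implied mean lifespan of a doomed Google product.
mean_lifespan = (mean_shutdown - launch).days

# For a normal distribution, ~68% of outcomes fall within
# one standard deviation of the mean.
low = mean_shutdown - std_dev
high = mean_shutdown + std_dev

print(mean_lifespan)  # 1458
print(low, high)      # 2015-04-18 2019-02-16
```

Which lands, as promised, in April 2015 and February 2019 respectively.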

Read the rest

Cyclists are safe and courteous, and your disdain for them is grounded in cognitive bias

Jim Saska is a jerky cyclist, something he cheerfully cops to (he also admits that he's a dick when he's driving a car or walking, and attributes the overall pattern to his New Jersey provenance). But he's also in possession of some compelling statistics suggesting that cyclists are, on average, less aggressive and safer than in previous years; that the vast majority of cyclists are very safe and cautious; and that drivers who view cycling as synonymous with unsafe behavior have fallen prey to a cognitive bias that empirical research doesn't support.

The fact is, unlike me, most bicyclists are courteous, safe, law-abiding citizens who are quite willing and able to share the road. The Bicycle Coalition of Greater Philadelphia studied rider habits on some of Philly’s busier streets, using some rough metrics to measure the assholishness of bikers: counting the number of times they rode on sidewalks or went the wrong way on one-way streets. The citywide averages in 2010 were 13 percent for sidewalks and 1 percent for one-way streets at 12 locations where cyclists were observed, decreasing from 24 percent and 3 percent in 2006. There is no reason to believe that Philly has particularly respectful bicyclists—we’re not a city known for respectfulness, and our disdain for traffic laws is nationally renowned. Perhaps the simplest answer is also the right one: Cyclists are getting less aggressive.

A recent study by researchers at Rutgers and Virginia Tech supports that hypothesis. Data from nine major North American cities showed that, despite the total number of bike trips tripling between 1977 and 2009, fatalities per 10 million bike trips fell by 65 percent.
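Those two figures together imply something the raw numbers obscure: absolute fatalities stayed roughly flat while ridership exploded. A back-of-envelope sketch in Python -- the baseline trip count and rate are made-up illustrative values; only the "tripled" and "65%" come from the study as quoted:

```python
# Hypothetical 1977 baseline, purely for illustration.
trips_1977 = 10_000_000   # total bike trips
rate_1977 = 5.0           # fatalities per 10 million trips

# The two changes reported in the study.
trips_2009 = trips_1977 * 3          # trips tripled
rate_2009 = rate_1977 * (1 - 0.65)   # per-trip fatality rate fell 65%

deaths_1977 = rate_1977 * trips_1977 / 10_000_000
deaths_2009 = rate_2009 * trips_2009 / 10_000_000

# 3x the trips at 0.35x the rate => absolute deaths up only ~5%,
# i.e. per-trip risk fell almost as fast as ridership grew.
print(deaths_2009 / deaths_1977)  # ~1.05
```

However you pick the baseline, the ratio is fixed at 3 x 0.35 = 1.05: three times as many trips, at barely more than the old body count.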

Read the rest

Medal Count

Attention sports-hating patriots! Here you can have just the stats. [medalcount.com] Read the rest

Widespread statistical error discovered in peer-reviewed neuroscience papers

"Erroneous analyses of interactions in neuroscience: a problem of significance," a paper in Nature Neuroscience by Sander Nieuwenhuis and colleagues, points out a common and serious statistical error in peer-reviewed neuroscience papers (as well as papers in related disciplines). Of the surveyed papers in which the error could occur, it occurred in more than half. Ben Goldacre explains the error:

Let’s say you’re working on some nerve cells, measuring the frequency with which they fire. When you drop a chemical on them, they seem to fire more slowly. You’ve got some normal mice, and some mutant mice. You want to see if their cells are differently affected by the chemical. So you measure the firing rate before and after applying the chemical, first in the mutant mice, then in the normal mice.

When you drop the chemical on the mutant mice nerve cells, their firing rate drops, by 30%, say. With the number of mice you have (in your imaginary experiment) this difference is statistically significant, which means it is unlikely to be due to chance. That’s a useful finding which you can maybe publish. When you drop the chemical on the normal mice nerve cells, there is a bit of a drop in firing rate, but not as much – let’s say the drop is 15% – and this smaller drop doesn’t reach statistical significance.

But here is the catch. You can say that there is a statistically significant effect for your chemical reducing the firing rate in the mutant cells.
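The catch is that neither within-group test compares the two groups to each other. A toy simulation in Python -- the firing-rate numbers are invented for illustration, not from any paper -- shows how one group can clear the significance bar while the other doesn't, even though a direct test finds no significant difference between the groups:

```python
from scipy import stats

# Made-up data: percentage drop in firing rate after the chemical,
# one value per animal.
mutant_drops = [28, 31, 35, 27, 29]   # mean drop ~30%, consistent
normal_drops = [15, -10, 40, 5, 25]   # mean drop ~15%, noisy

# Within-group tests: is each group's drop different from zero?
_, p_mutant = stats.ttest_1samp(mutant_drops, 0.0)
_, p_normal = stats.ttest_1samp(normal_drops, 0.0)
print(p_mutant < 0.05)  # True  -> "significant" drop in mutants
print(p_normal < 0.05)  # False -> no significant drop in normals

# The error: concluding from the two lines above that mutants respond
# differently from normals. The correct move is to test the groups
# against each other (or, in a full design, test the interaction).
_, p_diff = stats.ttest_ind(mutant_drops, normal_drops)
print(p_diff < 0.05)    # False -> the groups are NOT shown to differ
```

"Significant in one group, not significant in the other" is not the same claim as "the groups differ significantly" -- and here the direct comparison fails even though the separate tests split neatly.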

Read the rest