/ Maggie Koerth-Baker / 12 pm Mon, Oct 31 2011

    Faster-than-light neutrino update: What's going on behind the scenes?

    The publication process for a research paper in physics works a little differently from other fields. That's because of arXiv. Funded by Cornell University, the site posts research papers before they're formally published in a scientific journal. Unlike most scientific journals, which charge big fees for subscriptions or even to view a single paper, arXiv is free and open to the public. You can read everything posted there—more than 700,000 papers about physics, math, computer science, and more. The other big difference: arXiv isn't peer reviewed. At least, not ahead of time.

    A lot of the time, when you read a newspaper article about a new study in one of those fields, the study hasn't actually yet been published in a peer-reviewed journal. It's just been posted to arXiv, which sort of becomes a crowd-sourced peer review peer review of its own. Especially for headline-grabbing research making big, bold claims.

    That's the background you need to understand what's going on right now with the study that claimed to find neutrinos traveling faster than the speed of light. That announcement was made in an arXiv paper. Putting those results on arXiv was as much a way of saying, "Whoa, we just found something crazy, please tell us if you see something we've done wrong," as it was a formal declaration of scientific discovery.

    Since that paper was published in September, there have been more than 80 follow-up papers, also published on arXiv, offering criticism of the original research or proposing theoretical explanations of how that seemingly crazy finding could fit into physics as we know it. And all of this is happening before anybody has gone through the peer-review publishing process.

    That's why it's not terribly weird that you're now hearing all sorts of criticism of the original FTL neutrino findings. That's what was supposed to happen. It's also not terribly weird that the original researchers have announced that they're going to re-do the experiment themselves, taking into account some of the big criticisms brought up on arXiv. The BBC explains what will be done differently this time:

    The neutrinos that emerge at Gran Sasso start off as a beam of proton particles at Cern. Through a series of complex interactions, neutrino particles are generated from this beam and stream through the Earth's crust to Italy.

    Originally, Cern fired the protons in a long pulse lasting 10 microseconds (10 millionths of a second). The neutrinos showed up 60 nanoseconds (60 billionths of a second) earlier than light would have over the same distance.

    However, the time measurement is not direct; the researchers cannot know how long it took an individual neutrino to travel from Switzerland to Italy. Instead, the measurement must be performed statistically: the scientists superimpose the neutrinos' "arrival times" on the protons' "departure times", over and over again and taking an average.

    But some physicists say that any wrong assumptions made when relating these data sets could produce a misleading result. This should be addressed by the new measurements, in which protons are sent in a series of short bursts - lasting just one or two nanoseconds, thousands of times shorter - with a large gap (roughly 500 nanoseconds) in between each burst. This system, says Dr Bertolucci, is more efficient: "For every neutrino event at Gran Sasso, you can connect it unambiguously with the batch of protons at Cern," he explained.
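    To make the difference concrete, here's a toy numerical sketch — not the OPERA analysis — of why the short, widely spaced bursts remove the ambiguity. The 500-nanosecond gap and pulse lengths come from the BBC description above; the flight time and the `burst_of` helper are invented for illustration:

```python
# Toy sketch, NOT the OPERA analysis: why 1-2 ns bursts spaced
# 500 ns apart let each detected neutrino be tied to one proton burst.
BURST_GAP_NS = 500          # gap between bursts (from the article)
LONG_PULSE_NS = 10_000      # the original 10-microsecond pulse
APPROX_TOF_NS = 2_440_000   # invented round number; the real flight
                            # time is known far better than 500 ns

def burst_of(detection_time_ns):
    # Subtract the (approximately known) flight time, then round to
    # the nearest burst slot. This is unambiguous because the residual
    # timing uncertainty is much smaller than the 500 ns gap.
    return round((detection_time_ns - APPROX_TOF_NS) / BURST_GAP_NS)

# A neutrino produced in burst #7 arrives at:
arrival = 7 * BURST_GAP_NS + APPROX_TOF_NS
print(burst_of(arrival))             # 7

# With the old long pulse, one arrival time is consistent with
# emission anywhere in a window 20 burst-gaps wide -- hence the
# need for the statistical averaging described above:
print(LONG_PULSE_NS / BURST_GAP_NS)  # 20.0
```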

    By taking these criticisms into account now, the FTL neutrino researchers are doing sort of a pre-peer-review peer review. If their new experiment yields the same results, it makes the claim stronger and makes a traditional journal more likely to publish the results. As a bonus: Those results will already have been tested against the most obvious criticisms. If FTL neutrinos make it to a peer-reviewed journal, there will be a much greater likelihood that what's being published is actually worth paying attention to. If they don't, there's a well-established record of how smart people got something wrong—valuable to future researchers, even though it wouldn't be likely to pass muster in a journal.

    Meanwhile, because none of these papers had to go through the lengthy (and costly) traditional publishing process, we've been able to see both the weird finding and the critical evaluations far faster than we otherwise would have. And because the weird finding was made available sooner, there will be independent researchers trying to replicate it sooner. In fact, there's a good chance that, if the FTL neutrino researchers decide to go ahead and publish their results in a peer-reviewed journal, several other, independent teams will be well on their way to replicating the results (or not) by the time that paper is printed.

    So if there's one thing you should be taking away from all the fuss over FTL neutrinos, it's this: Science benefits when scientists have more than one way to share information with each other.

    Image: Science Centre at CERN, a Creative Commons Attribution Share-Alike (2.0) image from johnjobby's photostream

    COMMENTS


    1. Maggie:
      Take a look at the phrase: “becomes a crowd-sourced peer review peer review of its own”

      Either i’m missing the point or you’re repeating yourself :-D

      Maybe i’m missing the point and you’re getting all meta on me. I totally thought I understood this stuff though …. Still staring sideways at my screen and scratching my head though.

    2. The crowd-sourced pre-peer review (peer preview?) process seems to me to be much more stringent than actual peer review. Am I right to think that?

    3. The whole thing is clearly a scam, since only other scientists are going to be smart enough to spot the flaws in the original study.  How are creationists and climate change skeptics supposed to share their vaunted opinions if they can’t even understand it?    Take it away, Aasif Mandvi (apologies to those of you outside the US if the video doesn’t work).

    4. It should be pointed out that arXiv was started primarily NOT for this purpose, but as a “preprint” server.  The lag time between an article’s acceptance and its actual publication can be quite long, so arXiv was born to let folks put their already-reviewed articles online for immediate consumption.  That arXiv is becoming a place where un-reviewed articles show up en masse is sometimes viewed as a bad thing (think crackpots posting anything they want).

    5. I worked at CERN in the 80’s and something as simple as the wrong length of cable could cause this result.  I’m sure they’re busy checking all that right now.

      1. I worked at CERN in the 80’s and something as simple as the wrong length of cable could cause this result.

        That’s how I find most of my jobs too.

    6. The neutrinos from supernova 1987A arrived at the exact time predicted by theory, assuming that they travel at the speed of light. Unfortunately, this kills the whole FTL neutrino idea for me. I think it is unlikely the CERN results will be confirmed. Damn, I wanted my DeLorean.

      1. Yeah, the fact that SN 1987A neutrinos arrived when they were supposed to is a big gaping hole in this. There are some hypotheses that would sort of deal with both data points. For example, it could be that when we think we produce neutrinos, we actually produce a very short-lived tachyon that decays into a neutrino. Then one would only notice the difference in expected arrival time over very short distances. But this seems to run afoul of Occam’s razor.

        The SN 1987A data is one of many reasons to think this is wrong.

    7. So here’s a problem that reminds me of the chart published a week or three ago suggesting things that scientists could say, instead, to promote understanding by the public.


      You say “And all of this is happening before anybody has gone through the peer-review publishing process.” But what the layman (me) thinks is, “Wait, isn’t arXiv a kind of peer-review process?”
      Perhaps you mean “arXiv is informal peer review, before the formal peer review” or maybe just “before the publication process.”

      By the way, that chart needs to add “Theory” to the list of jargon that scientists say. To the public, that means, “We have no evidence or tests and this is what we guess.” What they mean is more along the lines of, “This explanation works well for all the evidence we have, and although we might come up with a refinement or counter-example later, you can take it as Law.”

    8. The focus of this story is kind of strange.  ArXiv mainly is for already peer-reviewed papers, those in the process of peer-review, or conference proceedings and the like.  Having the media jump in there too can be counterproductive.  I’ve refereed many papers with serious flaws.  Sometimes they were making outrageous claims that would be media-worthy.  But the problem was in the methodology, so the papers never got published and the outrageous claims never saw the light of day.  If they had, the media would have reported on them, and bad science would have been disseminated to the public.  Corrections rarely are.  While there are occasionally good reasons to publish on arXiv before  peer review, if the media starts reporting on such papers it undermines the peer review process and science loses one of its most powerful tools.  

      1. OK, that’s a fun fact.  But who cares?

        1. Cranks publishing papers on arXiv still don’t get cited by serious scientists so citations still work fine as a rough marker of the quality and importance of papers.
        2. The system that’s evolved around arXiv solves some of the most serious problems with the peer-reviewed journal system: that sensational but likely incorrect results get much more attention than really important results that don’t make good journal copy, or which are humdrum confirmations of things already believed true.
        3. More eyes, less bugs.  Relying on one or a few anonymous journal reviewers who are probably crunched for time means less attention to detail.  It also means a smaller range of skills and knowledge being brought to bear.  Five dozen working physicists with different specialties are much more likely to find a problem with a study than one or two or three.
        4. I like learning about current scientific research but I’m not at a research institution and don’t have hundreds of thousands of dollars of disposable income to use to subscribe to scientific journals.  ArXiv is really nice for those of us who want to be scientifically literate but can’t afford it.

    9.  In fact, there’s a good chance that, if the FTL neutrino researchers decide to go ahead and publish their results in a peer-reviewed journal, several other, independent teams will be well on their way to replicating the results (or not) by the time that paper is printed.

      There are only really two other projects with the capability to replicate this sort of thing: MINOS at Fermilab in the US, and T2K in Japan. And the MINOS people have publicly said they are trying to replicate it. Estimates seem to be that they will be done by February. So this seems spot on.

      1. Do you have some reason not to use ACTUAL html tags to create a blockquote?  Is this a Noh version of formatting?

    10. Is it true that the proposition, the speed of light is the speed limit, assumes that photons have no mass?  If so, where does that assumption come from?  What if photons have mass, like everything else?  Since when does anything not have mass?  If everything, including photons, have mass, wouldn’t that increase the speed limit because there’d be more gravity?  If so, then something might go faster than light, or maybe light, itself, would go faster.

      1. I can’t quite tell if this is serious, or a deft parody of arXiv done as a response to the article. I think I’m going to take that as a mark of how excellent an example of the latter it is.

      2. The proposition that the speed of light is the fastest possible speed can be derived easily from special relativity.  Special relativity implies that accelerating a mass increases a quantity called its “relativistic mass.”  Further acceleration needs to be from a force proportional not to the “rest mass” but to the relativistic mass so the faster a mass moves, the more energy is required to accelerate it further.

        However, when a mass moves at exactly the speed of light the relativistic mass is infinite.  No amount of energy can accelerate an infinite mass, so nothing can be accelerated from slower than the speed of light to faster than the speed of light (and actually getting to the speed of light would probably require more energy than is in the universe in the first place).
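        A quick numerical sketch of that blow-up, using the standard Lorentz-factor formula (the speeds chosen are just illustrative):

```python
import math

C = 299_792_458.0  # speed of light in m/s

def gamma(v):
    """Lorentz factor; relativistic mass is gamma times rest mass."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

for frac in (0.5, 0.9, 0.99, 0.999999):
    print(f"v = {frac} c -> gamma = {gamma(frac * C):,.1f}")
# gamma grows without bound as v approaches c, so the kinetic energy
# needed, E = (gamma - 1) * m * c**2, also grows without bound.
```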

        1. And light is believed to move at exactly the speed of light (hence the name).  One reason for this is that light is EM radiation and the wavespeed of EM radiation can be derived by equating Maxwell’s equations for electric fields and magnetic fields and noting that it gives the differential equation of a traveling wave.  That wavespeed is the speed of light.  (The other way we know the speed of light is that we can bang a laser beam off a mirror on the moon and see how long the round trip takes.  This experiment has been used to confirm that light does indeed move at the speed of light.)

          This gets a little confusing, though, because the meter is now defined in terms of the speed of light: one meter is the distance light travels in 1/299,792,458 of a second.  So if I were to say “X” is the speed of light it would really be a tautology, because the speed of light is defined in such a way as to simultaneously define the meter as a unit of length measurement.

    11. If they can generate 2-nanosecond pulses at 500-nanosecond intervals, then they can send signals—e.g., by sending or not sending each pulse, they can send 1 bit/500 nanoseconds = 2 Mb/s at lightspeed straight through the earth.  Might be a little expensive, but it would reduce the latency of US-China communication by a factor of pi.

      1. Not really. Many of these pulses won’t result in any neutrinos being detected.  The key of this experimental setup is that when they do detect a neutrino they will know exactly which burst it came from.  
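        Checking the arithmetic with the article’s numbers (the detection probability below is an invented placeholder, just to show the scale of the problem):

```python
BURST_INTERVAL_NS = 500            # one burst slot every 500 ns

# One bit (burst or no burst) per slot:
raw_rate_bps = 1e9 / BURST_INTERVAL_NS
print(raw_rate_bps)                # 2000000.0 -> 2 Mb/s, not Gb/s

# Most bursts yield no detected neutrino at all, so the usable rate
# is far lower. p_detect is an invented number for illustration:
p_detect = 1e-6
print(raw_rate_bps * p_detect)     # ~2 bits per second
```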

    12. and of course all of *our* news agencies wouldn’t touch this with a dead celebrity.  I hate this damn country.

    13. There are really multiple advantages to the open peer-review process; speed and broader review are two major ones.  However, it is also a considerable savings on journal subscriptions. Currently, authors submit to journals for free, review and edit articles for free, and then pay exorbitant rates for the finished product.  This is great for academic publishers, but it is a tremendous drain on scientific resources (mostly by limiting research results to the few people at R1 universities).

      We need to have projects like ArXiv in all the fields of scholarship.

    14. I worked for several years doing setups/calibrations/timing for experiments that were ‘one-shot’ events. The events destroyed the source of the information to be recorded, as well as hundreds up to thousands of feet of data cable. Such a ‘lab’ condition required many months of calibration, ‘what ifs’, etc. Thousands of man-hours were involved by many people with impressive credentials/reputations. Many clever calibration ideas evolved from this. The end results were a major step forward in knowledge in the field of nuclear physics. The process was long, not easy, and not with everyone’s support. That is, not until we succeeded, and then those who pooh-poohed the effort jumped aboard.
      Not all experiments are successful.
      Especially until all the details are taken care of, and outside reviews of methodology and data completed.
      From what I have read, I think the neutrino flight-time answer will soon be ready, considering that all of the test setup is intact and the events can be repeated for …….. however long is needed.
      Old Drifty Jim

    15. It’s worth noting that this particular experiment can’t be easily replicated in many labs — it requires a fairly powerful particle accelerator.  That’s one of the reasons why opening the experiment to critique before finalizing the report and attempting to publish is a good thing here.  Once published, it’s going to take quite a while before anyone else can test the result.  Best to let people suggest changes to the experiment while the team is still fully assembled, with the machines in good mx and ready to roll.

    16. This appears rigged to ensure the same results as before. Meanwhile, Ronald van Elburg, a physicist with a Ph.D. in theoretical physics, claims in a paper submitted to arXiv, titled “Time of flight between a source and a detector observed from a satellite,” that the OPERA experiment fails to account for the effects of special relativity (the Lorentz factor, time dilation), precisely matching the disputed 64-nanosecond discrepancy. I would have thought that this would be addressed before spending a small fortune on a repeat of the experiment, because if the measured distance is in fact not 720 kilometres but 20 metres shorter, the results will be the same.

    Comments are closed.