Michael Nielsen was a Fulbright Scholar who got his Ph.D. in physics at 24. He was already tenured when he decided, just three years later, to shift his attention to helping democratize science. He's published three books, most recently Reinventing Discovery: The New Era of Networked Science. Currently, he's a Research Fellow at YC Research in San Francisco. Michael's a friend of mine, so I was happy to discover a new article by him in The Atlantic, authored with Patrick Collison, the CEO of Stripe. I decided to ask him why they'd done the research they describe, and what it meant. –Karl
You and Patrick Collison recently published a piece in The Atlantic called Science Is Getting Less Bang for Its Buck. What's wrong with science? Aren't we continuing to make new discoveries?
Patrick and I had been struck by two facts: (1) over the past hundred years, the amount humanity has spent on science has increased dramatically, much, much faster than inflation; and (2) it didn't seem that we were getting really big breakthrough insights much (if any) faster than before.
At some level that seems okay: science remains an amazing driver of human progress.
But on the other hand, if you need to keep spending more and more money and effort on something, it's worth enquiring about what's causing the cost to go up. And, talking to individual scientists, they're often quite disgruntled with how science is organized.
So is the cliche about scientists chasing grant money and academic positions accurate, then? Or is it that humanity has already made all of the big discoveries, the ones that matter?
Well, the cliche about chasing grants does seem to be depressingly true, though with some tweaks to the system it needn't be true.
Whether humanity has made all the big discoveries is a very interesting question.
Personally, I don't think so. We don't know what dark matter is, we don't understand how the brain works or what consciousness is, we don't understand morphogenesis, we don't understand the origin of life.
It's easy to continue that list of huge, juicy problems! Eventually, we'll solve some (or all) of those problems, and no doubt discover some amazing things along the way.
And those are just problems we're aware of. As science goes on, it reveals new problems. To discover dark matter required some pretty sophisticated knowledge about galaxies, rotation curves, and distance measurements. It wasn't really possible until we knew a huge amount about astrophysics. And then we discovered that a huge chunk of the universe was formerly invisible to us. It's just amazing.
But while there are plenty of big problems still open, or waiting to be discovered, there's a question of the cost of making progress on them. Discovering the Higgs particle cost perhaps tens of thousands of times as much as discovering the atomic nucleus. Are there ways we can get more insight for our effort?
There seems to be a paradox here. There's great potential in 21st century science, but it sounds like the way we do science is becoming unsustainable. If we're working and spending more and getting less out of it, what's the alternative?
Modern science is an odd beast. It could be very different! A lot of modern practices originated with ad hoc choices made in the very early days of the NSF, the NIH, and so on. Change those choices, and science today would be very different.
As one example: most scientific grants involve external reviewers scoring the applications. Often, one bad score will sink your application. It's a pretty risk-averse system as a result. But
recently a couple of grant agencies have allowed "golden ticket" reviews. They're like the golden buzzer in "America's Got Talent": one reviewer who loves what you're doing can fund you. It completely changes the dynamic, and encourages more swing-for-the-fences approaches.
Put another way, science-the-organizational-system (as opposed to science-the-practice, or science-the-body-of-knowledge) is an incredibly contingent thing. That means there are many design possibilities, which is exciting and encouraging.
On the other hand, it's not so easy to explore that design space in practice. At the moment it's not possible to start a new type of grant agency or university in your garage, and grow to outcompete (and replace) the NIH or Harvard. Instead we have a very nearly static set of institutions that change only very slowly.
In the Atlantic article you and Patrick Collison compare science to a continent that's been thoroughly explored. But you also raise an intriguing possibility: that there are other continents than the one we've explored. You use the concept of emergence to suggest that the more we learn, the more mysteries emerge from our findings. How does that work?
I think it's clearest in computer science. Computer science in some sense actually began with its "theory of everything" (the Turing machine, AKA more or less the template for modern computers). But that didn't mean computer science was over! Rather, beautiful new ideas in computer science keep opening up amazing new continents to explore.
So, for instance, in the 1960s and 1970s public key cryptography was discovered, a set of ideas that let people communicate in private even without pre-sharing any secret cryptographic key information. Those ideas, in turn, enabled ideas like internet commerce and cryptocurrencies (and many more). Turing could never have anticipated that back in the 1930s. And who knows what other new continents will open up in future, building on top of those ideas?
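The core trick of public-key cryptography, agreeing on a secret over a completely public channel, can be sketched in a few lines. Here is a toy Diffie-Hellman key exchange in Python; the tiny prime and the hard-coded private numbers are illustrative choices only, not anything a real system would use (real deployments use parameters thousands of bits long):

```python
# Toy Diffie-Hellman key exchange: two parties derive the same shared
# secret while only ever sending public values over the channel.
# All numbers here are deliberately tiny for readability.

p = 23  # public prime modulus (toy-sized)
g = 5   # public generator

# Each party picks a private number and publishes g^private mod p.
alice_private = 6
bob_private = 15

alice_public = pow(g, alice_private, p)  # sent in the clear
bob_public = pow(g, bob_private, p)      # sent in the clear

# Each side combines the other's public value with its own private one;
# both computations yield g^(alice_private * bob_private) mod p.
alice_shared = pow(bob_public, alice_private, p)
bob_shared = pow(alice_public, bob_private, p)

assert alice_shared == bob_shared  # same secret, never transmitted
print(alice_shared)
```

An eavesdropper sees p, g, and both public values, but recovering either private exponent from them is the discrete logarithm problem, which is believed to be computationally hard at realistic key sizes.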
So my optimistic take is that it might be possible to greatly accelerate the rate at which new scientific fields are created. At present there are a lot of inhibitory factors: you get scientific positions and grant money mostly by sticking within an existing field. It's interesting to think about ways of changing existing grant practice so as to change that. My friend Tim Hwang suggested writing a manual for how to invent a new scientific field. It sounds silly, but I think it's an idea worth developing.
In your book Reinventing Discovery: The New Era of Networked Science, you call for more involvement by ordinary citizens in the process of scientific discovery. What is Open Science, and how is it different from what we have now?
Open science is actually a really old idea. It goes back to the 1600s, when the first publishers of scientific journals had to work very hard to convince early scientists they should openly publish
their discoveries. Up to that point, they often kept them secret. When Galileo discovered the rings of Saturn, he published the result in the form of an anagram(!). He only unscrambled the anagram when one of his rivals, Kepler, wrote him a somewhat forlorn letter begging him to relent.
That was the first wave of open science, and it resulted in the scientific journal system. It works okay for results that can be published in journals. But today it ought also to be possible to publish scientific data, open-source code, and so on. That'd make it far easier to build on the work of other scientists, and to do things like replicate experiments.
So over the past couple of decades there's been this gradually rising second wave of open science, trying to convince scientists to get their data, their code, and their ideas off their hard disks and onto the open web.
But as you say, I can't start a grant agency in my garage. Is there a role for us ordinary, non-scientists in making Open Science happen?
Sure, lots of roles! Ultimately, publicly funded grant agencies (and scientists) have a responsibility to voters, and to humanity at large. One reason the human genome is public knowledge is that grant agencies put their foot down and insisted to the scientists involved that the data be made openly available. That kind of thing is happening more and more, as it should – publicly funded science should be open science. But we've got a long way to go.
Are you an optimist?
Along many (though not all) axes the world has gotten far better over my lifetime: infant mortality is way down, life expectancy is way up, literacy is way up, violence is way down (by most measures), across humanity as a whole wealth inequality seems to be down (though it's up in some countries), human rights have broadly improved, the population bomb has mostly been defused, and so on.
There are lots of big problems, of course. But we need to deeply understand the positive changes, because they didn't happen by accident. People who don't understand what's worked in the past are likely to have more trouble solving today's problems. I've recently been reading about the Montreal Protocol (which phased out ozone-damaging CFCs) in the hope of getting more insight into how to attack the problem of climate change.
Even something like the relative malaise in science is best viewed as an opportunity. How can we do better? Can we use ideas like open science to accelerate science? And how can we use science to make the world better? I still believe there's plenty of low-hanging fruit, ways the world can be made much, much better.
Karl Schroeder is a science fiction writer and futurist. His next novel, Stealing Worlds, will be published by Tor Books in June, 2019.