Part 2 of Science and gun violence: why is the research so weak?

The town of Macapá is in the north of Brazil, on the coast, where the Amazon River flows into the Atlantic. On December 5th, 2001, Sir Peter Blake and his crew decided to spend the night there. They were on their way back to the ocean after a journey down the Amazon, documenting the effects of climate change for the National Geographic Society.

That night, while their guard was down, a group of masked bandits boarded the boat.

When we talk about gun ownership, one of the primary things we talk about is self-defense. Having a gun makes some people feel safer. That's a perfectly legitimate reason to want a gun, from a personal perspective. But from a public perspective—the place where laws are built—what we want to know is not whether people feel safer with guns, but whether they actually are safer.

The pirates who boarded Peter Blake's boat had guns. So did Peter Blake. One of the robbers used his gun to threaten the life of a crewmember. Blake used his to shoot the robber in the hand.

But then Blake's gun jammed. While he tried to get it to work, a second robber shot him in the back, killing him.

No one else on the boat was seriously injured. After the murder, the robbers gathered up what little haul they could—some watches, a couple of cameras, a dinghy with an outboard motor—and fled.

This tragic story illustrates one of the big questions about gun ownership that science can't yet answer and politicians don't yet know how to address. Did having a gun make Peter Blake and his crew safer? It's possible that, had he not fought back, the robbers would have hurt more people. Did having a gun make Peter Blake and his crew less safe? It's possible that, had no man with a rifle emerged from below decks, the robbers would have simply taken their relatively unimportant booty and been on their way.

It's also completely possible that Blake's gun, or hypothetical lack thereof, had no real impact on the final outcome. Other factors—the robbers' desperation, local laws, how the pirates and the crew interacted—might have mattered more.

The fact is, we can speculate, but we don't know. And not just in this particular instance. On a broad scale, we don't know whether having more guns makes a society safer, or less safe. Or, really, whether it has any effect at all.

That was the conclusion reached by a panel of experts who reviewed gun research in the United States back in 2004. Since then, the situation hasn't changed, says Charles Wellford, professor of criminology and criminal justice at the University of Maryland and chairman of the panel.

But this statement doesn't mean there hasn't been research on the subject. In fact, in their report for the National Academy of Sciences, the committee actually wrote that this topic—specifically as it relates to laws that allow law-abiding citizens to carry a gun in public—has "a large body of research" behind it. The problem, the report says, is that none of this research has managed to make a definitive case one way or the other. Many studies exist. Those studies all produced results. It's not like the scientists finished their papers with, "In conclusion: We aren't sure." It's just that individual papers only tell you so much. To actually understand what's going on, you have to evaluate that large body of research, as a whole.

Scientists are missing some important bits of data that would help them better understand the effects of gun policy and the causes of gun-related violence. But that's not the only reason why we don't have solid answers. Once you have the data, you still have to figure out what it means. This is where the research gets complicated, because the problem isn't simply about what we do and don't know right now. The problem, say some scientists, is that we—from the public, to politicians, to even scientists themselves—may be trying to force research to give a type of answer that we can't reasonably expect it to offer. To understand what science can do for the gun debates, we might have to rethink what "evidence-based policy" means to us.

* * *

The modern debate over the relationship between safety and gun ownership largely dates back to 1997, when economists John Lott and David Mustard published a now-famous paper asserting that right-to-carry laws had drastically reduced violent crime in the states that enacted them between 1977 and 1992.

This was not the final word on the subject. Since then, other scientists have published papers critiquing this work—in particular, the fact that the decrease in crime Lott and Mustard found turned out to be complicated by a nationwide decrease in crime that began in the early 1990s. To this day, nobody knows exactly why that decrease happened, but right-to-carry laws can't explain it. That makes it hard to say whether the decreases in crime Lott and Mustard found were actually related to those laws, and not to the larger trend. Some of the critical papers simply say that the "more guns, less crime" hypothesis hasn't actually been proven. Others, though, assert basically the opposite—that right-to-carry laws have actually increased certain kinds of violent crime.

For the most part, these studies are all working from the same data. So how can they reach such drastically different conclusions? The issue lies in the kind of data that exists, and in what you have to do to understand it, says Charles Manski, professor of economics at Northwestern University. Manski studies the ways that other scientists do research and how that research translates into public policy.

"What scientists think of as the best kind of data, you just don't have that," he said. This problem goes beyond the missing pieces I told you about in the first part of this series. Even if we did have those gaps filled in, Manski said, what we'd have would still just be observational data, not experimental data.

"We don't have randomized, controlled experiments, here," he said. "The only way you could do that, you'd have to assign a gun to some people randomly at birth and follow them throughout their lives. Obviously, that's not something that's going to work."

This means that, even under the best circumstances, scientists can't directly test what the results of a given gun policy are. The best you can do is to compare what was happening in a state before and after a policy was enacted, or to compare two different states, one that has the policy and one that doesn't. And that's a pretty inexact way of working.
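To make that concrete, here's a minimal sketch of both comparison strategies in Python, with every number invented for illustration. The last step, subtracting the trend measured in a comparison state, is the kind of adjustment researchers lean on when a true experiment is impossible:

```python
# A toy before/after and two-state comparison. All crime rates here
# are hypothetical, made up purely to illustrate the logic.

# Violent crimes per 100,000 people in a state that adopted a policy:
before, after = 520.0, 480.0
print("Change after policy:", after - before)              # -40.0

# The same years in a comparison state with no policy change:
ctrl_before, ctrl_after = 510.0, 485.0
print("Change with no policy:", ctrl_after - ctrl_before)  # -25.0

# Subtracting the background trend leaves a much smaller "effect,"
# and even that rests on the assumption that the two states would
# have followed the same trend anyway.
effect = (after - before) - (ctrl_after - ctrl_before)
print("Effect net of trend:", effect)                      # -15.0
```

Notice how much of the apparent improvement evaporates once the comparison state enters the picture, and how the remainder still hangs on an untestable assumption.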

To understand this problem a little better, let's take a look at something totally unrelated to gun policy—body piercings.

* * *

Pick a random person—someone in your office, maybe, or a passerby out on the street. It doesn't really matter whom. But once you've chosen them, you have a job to do. You need to count the number of piercings they have.

Up front, this seems pretty simple. You can easily see whether your person is wearing earrings or has a nose stud. But it gets harder when we start talking about the potential piercings that aren't easily observable. For the sake of this experiment, you're not allowed to strip your person down to their skivvies. And you can't just go ask them, either. After a certain point, you are going to have to start making assumptions. If your person is wearing a three-piece business suit and has no visible piercings, you might decide that there's a good chance they aren't hiding any, either. If you have reason to suspect that your person has a nipple pierced, then you can reason that, most likely, they have both nipples pierced.

Add in enough assumptions, and you can eventually come up with an estimate. But is the estimate correct? Is it even close to reality? That's a hard question to answer, because the assumptions you made—the correlations you drew between cause and effect, what you know and what you assume to be true because of that—might be totally wrong.

For instance, John Donohue, professor of law at Stanford University, is one of those researchers who think having more guns on the street increases the risk of aggravated assaults. Basically, he thinks that guns are more likely to escalate a tense situation than to defuse it or prevent it from happening in the first place. But the 2004 National Academies report came to the conclusion that he'd not proved his case any more than Lott and Mustard had proven theirs. And this is why: when I spoke with Donohue, he acknowledged that he could be missing factors in his analysis of the data, and that cause and effect might not be tied together in the way he thinks they are.

"There's always the apprehension that the states that pass [right-to-carry laws] also happen to be the states that were more likely to do a better job of counting aggravated assaults," he said. "Or maybe those are the state that have laws requiring police to prosecute batters. Things like that could muddy up the results." It's hard to tease apart the effect of one specific change, compared to the effects of other things that could be happening at the same time.

This process of taking the observational data we do have and then running it through a filter of assumptions plays out in the real world in the form of statistical modeling. When the NAS report says that nobody yet knows whether more guns lead to more crime, or less crime, what they mean is that the models and the assumptions built into those models are all still proving to be pretty weak.

In fact, that's the key problem at the heart of the debate over whether more guns mean less crime or more, John Pepper said. Pepper is an economics professor at the University of Virginia and one of the researchers involved in the 2004 NAS report. He has written articles criticizing the methods of both John Lott and John Donohue, and he said he sees this particular branch of research as locked in a sort of spinning wheel—constantly producing variations on a theme, but never able to settle the question. Scientists on both sides of the debate continue to produce wildly different conclusions from the same data. Small shifts in the assumptions lead the models to produce different results, and both factions continue to choose sets of assumptions that aren't terribly logical. It's as if you decided that anybody with blue shoes probably had a belly-button piercing. There's not really a good reason for making that correlation. And if you change the assumption—actually, belly-button piercings are more common in people who wear green shoes—you end up with completely different results.
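That sensitivity is easy to reproduce in miniature. The toy simulation below is not any actual model from this literature; it fabricates panel data in which a "law" has, by construction, no effect at all, then shows how a single assumption (whether to account for a shared national trend) flips the apparent result:

```python
import numpy as np

rng = np.random.default_rng(0)

# Fabricated panel: 20 "states" observed over 16 "years". Crime falls
# everywhere by 2 units per year, and about half the states adopt a
# law in year 8. By construction, the law has NO effect on crime.
n_states, n_years = 20, 16
year = np.tile(np.arange(n_years), n_states)
adopter = np.repeat(rng.random(n_states) < 0.5, n_years)
law = (adopter & (year >= 8)).astype(float)
crime = 100.0 - 2.0 * year + rng.normal(0.0, 5.0, n_states * n_years)

def law_coefficient(X, y):
    # Least-squares coefficient on the law indicator (first column).
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[0]

ones = np.ones_like(crime)

# Assumption A: no national trend exists. The law indicator soaks up
# the nationwide decline and looks like it sharply reduces crime.
print(law_coefficient(np.column_stack([law, ones]), crime))

# Assumption B: allow a shared time trend. The spurious "effect"
# all but disappears.
print(law_coefficient(np.column_stack([law, year, ones]), crime))
```

Same data, one changed assumption, and the conclusion swings from "the law works" to "the law does nothing."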

"It's been a complete waste of time, because we can't validate one model versus another," Pepper said. Most likely, he thinks that all of them are wrong. For instance, all the models he's seen assume that a law will affect every state in the same way, and every person within that state in the same way. "But if you think about it, that's just nonsensical," he said.

What you're left with is an environment where it's really easy to prove that your colleague's results are probably wrong, and it's easy for him to prove that yours are probably wrong. But it's not easy for either of you to make a compelling case for why you're right.

Statistical modeling isn't unique to gun research. It just happens to be particularly messy in this field. Scientists who study other topics have done a better job of using stronger assumptions and of building models that can't be upended by changing one small, seemingly randomly chosen detail. It's not that, in these other fields, there's only one model being used, or even that all the different models produce the exact same results. But the models are stronger and, more importantly, the scientists do a better job of presenting the differences between models and drawing meaning from them.

"Climate change is one of the rare scientific literatures that has actually faced up to this," Charles Manski said.

What he means is that, when scientists model climate change, they don't expect to produce exact, to-the-decimal-point answers. The Intergovernmental Panel on Climate Change (IPCC) periodically produces big reports that analyze lots of individual papers. In essence, they're looking at lots of trees and trying to paint you a picture of the forest. IPCC reports are available for free online; you can go and read them yourself. When you do, you'll notice something interesting about the way the reports present results.

The IPCC never says, "Because we burned fossil fuels and emitted carbon dioxide into the atmosphere, the Earth will warm by exactly x degrees." Instead, those reports present a range of possible outcomes … for everything. Depending on the models used, the scenarios presented, and the assumptions made, the temperature of the Earth might increase by anywhere between 1.5 and 4.5 degrees Celsius.

On the one hand, that leaves politicians in a bit of a lurch. The response you might mount to counteract a 1.5 degree increase in global average temperature is pretty different from the response you'd have to 4.5 degrees. On the other hand, the range does tell us something valuable: the temperature is increasing.

Now, you could fiddle with the dials to produce a more exact result. That's perfectly possible. But, in doing so, you might have to settle on a set of assumptions that don't necessarily reflect reality. You can make your result more precise. Unfortunately, you might do so at the expense of its reliability.

But that is precisely what gun research tends to do, Manski and Pepper said. "Policy makers don't like ranges. You don't get called in front of Congress to testify with a range," Pepper said.

What might a range look like, applied to crime and violence? As a hypothetical, let's think about the impact of having a death penalty. We don't really know whether the death penalty saves innocent lives or not, Manski said. But with some work, we could theoretically get down to a range. We could say something like, "The impact of the death penalty could fall anywhere between saving five innocent lives and losing two." That's the kind of range you'd get when you're talking about whether more guns means more or less crime.

How do you get there? Manski explained it as a process: you start out looking at your data with no assumptions at all. If we were counting body piercings, we'd only be looking at the ones we can see with our own two eyes. Then you slowly add in only the strongest possible assumptions—the piercings you can kind of see an outline of through clothing. That gives you a range of possible answers. "These ranges tell you something, but not an awful lot," Manski said. "So now let's start thinking about what assumptions might be believable and what do they buy me?" Try adding a few assumptions with really strong logic behind them—somebody with multiple face piercings is likely to have more than one non-visible piercing. Bit by bit, you can narrow down the range, in a believable way, until you get something like, "This person probably has between 1 and 4 piercings." To narrow down even further, you might look at the ranges produced by a couple of different models, and see where they overlap. "You lay out a whole menu of results. It's different from the present research, which is done in a take-it-or-leave-it fashion," Manski said.
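In code, the piercing version of that process might look something like this sketch, where every cap and assumption is invented for illustration:

```python
# A toy version of range-narrowing. All numbers and assumptions here
# are made up for illustration.

visible = 2      # piercings we can directly observe
max_hidden = 10  # outer limit we grant before making any assumptions

# Step 1: no assumptions yields a wide but honest range.
estimate = (visible, visible + max_hidden)  # (2, 12)

# Step 2: layer on a defensible assumption, e.g. "a three-piece suit
# and no visible piercings suggests at most two hidden ones."
def add_assumption(bounds, hidden_cap):
    lo, hi = bounds
    return (lo, min(hi, visible + hidden_cap))

estimate = add_assumption(estimate, 2)      # (2, 4)

# Step 3: intersect with the range implied by a different model; the
# overlap is the answer every model can live with.
def intersect(a, b):
    return (max(a[0], b[0]), min(a[1], b[1]))

other_model = (1, 3)
print(intersect(estimate, other_model))     # (2, 3)
```

Each added assumption buys a narrower range, at the price of being one more thing that has to be true for the answer to hold.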

* * *

The problem with this is that it flies in the face of what most of us expect science to do for public policy. Politics is inherently biased, right? The solutions that people come up with are driven by their ideologies. Science is supposed to cut that Gordian knot. It's supposed to lay the evidence down on the table and impartially determine who is right and who is wrong.

But how do those expectations apply if the best answer we can actually get to the question of whether guns make us safer is something along the lines of, "The likely effects of right-to-carry laws range from saving 500 lives annually to costing 500 lives annually"?

Manski and Pepper say that this is where we need to rethink what we expect science to do. Science, they say, isn't here to stop all political debate in its tracks. In a situation like this, it simply can't provide an answer detailed enough to do that—not unless you're comfortable with detailed answers that are easily called into question and disproven by somebody else's equally detailed answer.

Instead, science can reliably produce a range of possible outcomes, but it's still up to the politicians (and, by extension, up to us) to hash out compromises between wildly differing values on controversial subjects. When it comes to complex social issues like gun ownership and gun violence, science doesn't mean you get to blow off your political opponents and stake a claim on truth. Chances are, the closest we can get to the truth is a range that encompasses the beliefs of many different groups.

"In politics, being evidence-based isn't as simple as science telling you exactly what you should do," Manski said. "I see scientists promising stuff they can't deliver. You have people saying they know for sure, but the way they know is by making assumptions that have really low credibility."

Photos: Reuters / Nick Adams and Andrew Winning