Talking to the public about cell phones, safety, and cancer risks


This week, the World Health Organization re-categorized cell phone usage as a possible cancer risk. That sounds scary, but as many people have pointed out, a "possible cancer risk" means something different to scientists than it means to the public. I've already posted links to some very nice explainers by Ed Yong and Matthew Herper that really get to the heart of what is going on behind the headlines, and why the fear this whole incident has inspired isn't necessarily justified.

We don't know whether or not cell phones cause certain types of brain cancer. There's some evidence out there that leads scientists to want to know more. But the majority of what we do know tells us that the risks are likely to be very small, if not completely non-existent.

That's the issue everybody's been talking about, but what's captured my interest is the way we talk about it. Throughout the week, I've felt like scientists, public health experts, journalists, and the general public have been speaking past one another—using language that looks the same, but means something very different to different groups of people. Context is key, I wrote on Tuesday. But, invariably, when we talk about almost any public health issue, it's the context that's the first thing to go missing under our collective couch cushions. To find out more about why that is, and what we can do to fix it, I called Bradford Hesse, chief of the National Cancer Institute's Health Communication and Informatics Research Branch. He studies the way scientists talk to the public about cancer risk, and had a lot to say about probability, risk mitigation, and why the Internet is both a source of confusion and the way out of our communication quagmire.

Maggie Koerth-Baker: Before we get too far into this interview, I want to figure out where we're all coming from. What's the baseline here? How well do you think Americans understand what "cancer risk" really means?

Bradford Hesse: Not too well. Cancer itself is a very complicated set of diseases, with a complicated set of scientific studies helping us understand what constitutes risk and prevention, and what people can do to be healthier and control their risk. The public has been struggling with that. We know this because we've done a series of surveys, and when science comes out in its raw form—and with the Internet you get this shortened news cycle anyway—people get drowned in the complexity of the data. "Data smog" is a term I've heard, and that's what people end up with.

With the cell phone stuff this week, you have these people who have busy lives and they're being inundated with data smog, but they don't get a nice interpretation of what that means and how they can act on it. All they get are individual data points popping out at them. I think, in general, people just respond to the daily deluge of "This study said x." They take the one little piece of that information that stays in their minds on their way to work, and they reduce a complicated issue down to a simple binary: it causes cancer or it doesn't. That's what people are struggling with.

MKB: Why isn't it reasonable to think about cancer risk as a binary, on or off, question? I mean, something causes cancer or it doesn't, right?

BH: This really comes back to the notion of thinking in terms of probability. We don't do it often, but we know that people are capable of it with enough support from their environment. For instance, people listen to weather reports and hear that there's a 60% chance of rain today. They don't necessarily understand what that means—that on 60% of past days like today, it has rained—but they can still use the information to make choices throughout the day.

To get there, though, they need a lot of help. We've invested a lot in media to present weather maps and probabilistic information. What we really need are folks like you to provide that interpretation on top of the probabilities: "What does that mean for me?" That's what people need.
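(A quick, concrete way to see the frequency reading of a forecast that Hesse describes: the sketch below counts how often it rained on past days that looked like today. The `similar_days` records and the resulting 60% are invented for illustration, not real weather data.)

```python
# Minimal sketch of the frequency reading of "60% chance of rain":
# of past days whose conditions resembled today's, what fraction had rain?
# These records are invented purely for illustration.
similar_days = [  # (date, rained?)
    ("2010-05-03", True),
    ("2010-06-11", True),
    ("2010-09-20", False),
    ("2011-04-02", True),
    ("2011-05-15", False),
]

rainy = sum(1 for _, rained in similar_days if rained)
chance_of_rain = rainy / len(similar_days)
print(f"Chance of rain today: {chance_of_rain:.0%}")  # -> 60%
```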

The other side of what this may mean is that, as people respond to these kinds of stories on the radio or TV, they go to the Internet and do searches. That's something we
didn't see before 2001 or 2002. And we can take advantage of that. As people start googling "cell phone use" and "cancer" they'll start landing on different kinds of interpretive articles. We no longer have to pack everything into one message, but can have an ongoing conversation with the public. This idea of binary, on/off, black and white, I think it really came from the old world in the 1950s and '60s where it was all broadcast media, and all one-way communication.

One other thing we've said for a long time is that we, as scientists, are public health's worst enemy. We aren't putting this into the plain language that we need for public education. Alan Alda has actually been important in talking about this. He's told scientists, "If you have a spurious early finding, couch it as 'a blip we're investigating,' not as this 'very important finding.'" But that's the topic of framing science, and scientists have never been very good at understanding that.

Let me give you an example: We have a grantee at Washington University in St. Louis who took data related to colorectal cancer screening. They were studying how to empower African Americans on that issue, because screening rates are bad in that group. But if you take the data over time, you find that there are increases in screening rates for both African Americans and non-African Americans. So one way of talking about the data is to say that African Americans are doing so much worse than these other groups. But another thing you can say is that African Americans are making great progress over time.

That latter way of talking about the findings was very empowering, and it actually increased screening intention among African Americans. But framing the data as disparities, how poorly African Americans are doing in comparison to other groups, is what scientists use to get funded. It's the wrong way to talk about the data to the public. It's still accurate both ways, but the message needs to be different for different audiences.
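(To make the framing point concrete, here is a minimal sketch with invented numbers, not the grantee's actual data. The same hypothetical screening series supports both the "disparity" statement and the "progress" statement; only the comparison changes.)

```python
# Hypothetical screening rates (percent screened), invented for illustration only.
years = [2000, 2005, 2010]
group_a = [30, 40, 50]  # e.g., African American respondents (hypothetical)
group_b = [45, 55, 65]  # e.g., other respondents (hypothetical)

# Framing 1: disparity -- the gap between groups in the latest year.
gap = group_b[-1] - group_a[-1]
print(f"Disparity framing: group A trails group B by {gap} percentage points.")

# Framing 2: progress -- change within group A over time.
progress = group_a[-1] - group_a[0]
print(f"Progress framing: group A's rate rose {progress} points since {years[0]}.")
```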

MKB: Let's talk a little bit about that framing issue, because I think it's relevant to this cell phones and cancer risk story. For instance, the scale of cancer risk used in the WHO report doesn't have very many categories, just a handful. Is that enough, do you think? Are there other ways that the public health community could categorize the cancer risks of various things that might make more sense to the general public, rather than just lumping everything from DDT to pickles into this one category of "needs more information"?

BH: What that reminds me of is the mammography issue. The Preventive Services Task Force made a change in its recommendations for women ages 40-50. But they were stuck with these large, pre-set categories to work with, so they moved mammography screening for that age group into this discretionary category. They meant it to be enabling—there should be a conversation about it with your doctor. But they didn't frame it very well at all. Because they were stuck with those big categories, it got framed as, "One day this was OK, the next day it wasn't important."

Are there other ways to categorize? I go back to the weather reports. We're trying to find ways to talk about cancer risk the way we talk about weather. It's not just "sunny" or "rainy"; there are more than a couple of options. And we think we can do a better job by visualizing cancer risk that way.

MKB: Where do those categories come from? Why were they set up to be so broad?

BH: I don't know for certain, but I would imagine that, in all these cases, the categorical lumping we do is a bureaucratic convenience. There's this movement to a new box, and then we do new things from there. For the WHO, it might mean that we look at cell phones more closely.

But that change wasn't necessarily meant to be how we communicate with the public; it was meant to be how we communicate internally. With the Internet, we raise the curtain. Now everyone is looking at what's going on behind the scenes and asking, "Wait. Why is it in that category?" But as the public health community, we're not prepared to talk about that yet. It's a historical accident, and we need to be thinking about how to change
that conversation so we aren't stuck in those boxes. Right now at the National Cancer Institute, we're working with computer scientists who do work on data visualization to come up with new ways of presenting complex information. We're investing in some of these groups, because it's amazing what they can do and we think it will help communication about cancer risk.

MKB: The word "possible" really stuck out to me in the definition of the WHO's 2B category. What does "possible" mean in that context? Why is the professional "possible" different from the layman's "possible"? Or do you think it is?

BH: This is the whole difficulty with probability, especially with low-frequency events. For instance, earthquakes are possible not just in Southern California, but also in the Washington, DC, metro area. But they're less probable in the Washington metro. When "possible" comes up, it usually describes something with a low probability, but none of that is conveyed, so the public jumps to the far end and wonders whether it's likely to happen tomorrow. That's why we need more conversation with the public. Maybe these metaphors, like earthquakes in DC, could be a helpful way to convey that.
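(A rough way to see the gap between "possible" and "probable": the sketch below compounds made-up annual earthquake probabilities over 30 years. Both regions have a nonzero, "possible" risk every year, but the cumulative chances differ enormously. The numbers are placeholders, not real seismic estimates.)

```python
# Illustrative only: invented annual probabilities of a damaging earthquake.
annual_prob = {
    "Southern California": 0.05,    # hypothetical 5% per year
    "Washington, DC metro": 0.002,  # hypothetical 0.2% per year
}

YEARS = 30
for region, p in annual_prob.items():
    # Chance of at least one event over the period, assuming independent years.
    chance = 1 - (1 - p) ** YEARS
    print(f"{region}: possible every year, ~{chance:.0%} chance over {YEARS} years")
```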

MKB: What about the precautionary principle? At what point should that kick in, where we say, "We don't know if this is dangerous, so let's not use it?" How do public health officials make those decisions?

BH: In the United States, we have the Preventive Services Task Force, and we do have these very high-level committees of experts who look at the data, come to conclusions, and decide whether the risk is great enough to take action in limiting exposure. The fact that that hasn't happened with cell phones means there isn't enough data to put us into that category yet. It's just an area of focus and interest. But there are regulatory systems in place.

MKB: But what about when different countries' expert groups disagree? What does it mean when the EU bans something and the US hasn't made that call?

BH: You'd hope that there's more convergence between countries than divergence. I think it might have to do with the speed of new data. Some countries are better equipped to make fast decisions on new data. And in different countries there are other tensions—commercial tensions, public health tensions. They play out differently in every country. And I think this could also say something about a culture of risk-taking versus a culture of risk aversion.

MKB: But why wait until there's proof of risk to make those regulatory calls? Why don't we do it the other way around, and say, "Well, we think there's reason to look at this more, so let's ban it until we know for sure."

BH: When I look at risk communication, I prefer to think about risk management rather than black-and-white boxes where you either have risk or you don't. We live in a world of risk. I take a risk every time I drive. But it would be painful if
someone said I couldn't ever drive again. At the same time, though, I follow laws and signs, and I can be fined if I don't. It's the same with health. There are cautionary signs, and there are things we can do to reduce risk. Personally, I don't know where the data trigger is for when I should avoid something entirely. But I know that my physician is trained
in that and I look to him for guidance. And I put a lot of faith in what the CDC does and what the FDA does.

When we look at the way people go online and get information, there's a lot of risk in information, too. It might be wrong, it might be right, it might be biased. But, instead of regulating that, and saying what information you can and can't have access to, we encourage an open transparency model for people to get all the information that they can. And people have a preference to work with their doctor to make sense of that information. We need to encourage more conversations between people and their physicians. That active problem solving is needed for an empowered public.

MKB: Is it reasonable to expect anything to be harmless?

BH: Everything has some risk. I remember hearing about a radio show where the hosts encouraged a water-drinking contest, and ultimately even that was dangerous in large quantities. We never get to absolute zero risk. Instead, we manage it and navigate it. Is the fact that we are more aware of risk now a good or bad thing? That's a philosophical question. I think it's good, but with a caveat: people have to approach it with an informed consumer's hat on. You need to make sense of this information as you're exposed to it. There are paths through the confusion if we're very attuned to making sense of the world. Maybe part of what we need is more education in school health classes about understanding risk and proactive health management.

MKB: What is the difference, to you, between the current concern about cell phones and cancer and the connections people were making between cigarettes and cancer in the mid-20th century? I think a lot of Americans are skeptical of cell phone safety because of situations like that, where the risk was hand-waved away with the participation of the medical community. Has anything about the way these decisions are made changed since then?

BH: That's very big, and it relates again to the way we relate to data from the environment. On tobacco, if you polled any set of scientists today, you'd find that this is settled science. When the scientific community is saying "this is a problem," then we should listen. But what happened with tobacco, and what still happens with lots of things, is commercial interests sowing doubt. They come in and say, "Let's make it look like all science is forever tentative." That's very, very dangerous. We have to be very attentive to the importance of real data versus the importance of self-interest.

MKB: But what should people do now? A lot of my readers are skeptical of this because they don't know whether they're being lied to. Because of the way things like tobacco risk were handled, they don't want to trust anything that official entities have to say. How do we deal with that? Is there a way to tell what is truth and what is a commercial interest sowing doubt?

BH: Well, I'm in government, and I know a lot of people in government who I think are very dedicated and who interpret data in a way that I think is responsible. I have that insider's bias, but I don't see a conspiracy here. But this is a conversation we have to have: are we taking this fear of conspiracy to a maladaptive level? On one level, it's reasonable to be skeptical, but people can take that to extremes, to the point where it's hard to get anything done.

Meanwhile, I think we do invest heavily in the education and the scientific method needed to be sure that we're processing things in a credible way. And most physicians are doing their darndest to interpret data in a way that is good for their patients, even if they are biased in some ways. I think the same is true of public health agencies within government; I think they can give credible messages. But, having said that, we should be adept and aggressive questioners of what we hear.

My advice is to look for convergence. I'm a psychologist by training, and one way to overcome the question of bias is to keep asking different people from different places the same question; over time, a trend emerges. That's really what people seem to trust. If we hear something from the CDC, and then the same thing from the EU, and the same thing from, say, the WHO, then people start to be more comfortable. But I do worry that, on the conspiratorial side, there are some people who won't ever accept data and science. They want belief rather than empiricism, and that can be an unfortunate path.

MKB: What about cell phones and cancer? Is there a convergence on that issue that people can point to?

BH: From what I know about the move of cell phones into that 2B category, the WHO is the first group to do that, and other agencies are now assessing what they will do. I think they'll all look at the data and come up with similar conclusions, with cell phones on the lower end of that area of possible, but not necessarily probable, risk. Cell phones aren't like cigarettes. It's not a known risk. We see some clouds, but that's not the same thing as spotting a hurricane.

For more information on cell phones and cancer:

• I cannot recommend enough that you read Ed Yong's explanation of what the World Health Organization's re-categorization decision really means, and what evidence really exists for the relationship between cell phone usage and cancer risk.


• Orac, a science blogger who is a cancer surgeon, has written a lot about this over the years, and has another nice summary of the evidence and critique of how the information has been presented to the public.

Images: Hung Up, a Creative Commons Attribution (2.0) image from jurvetson's photostream; Cool Zappos Poster, a Creative Commons Attribution (2.0) image from nanpalmero's photostream.