Shades of Tuskegee in Indian cancer studies

How do we know whether screening for something like cervical cancer is effective at saving women's lives? Two ongoing studies conducted in India (one funded by the National Cancer Institute and the other by The Gates Foundation) are aimed at answering that question — but their methods are under fire from critics.

It works like this. Say you want to test the effectiveness of a new screening method. You recruit a large group of women and split them into two groups. One group gets the screening regularly. The other, the control group, doesn't. Then you follow them over time and track how many women in each group die of cancer. That's a pretty basic scientific method. It's also something that prompts big questions about the treatment of women in the control group.
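To make the design concrete, here's a minimal sketch of that kind of two-arm comparison in Python. Every number in it is an illustrative assumption, not data from the Indian studies: the baseline death rate and the fraction of deaths screening prevents are made up purely to show how the arithmetic of such a trial works.

```python
import random

def simulate_trial(n_per_arm=76_000, base_death_rate=0.001,
                   risk_reduction=0.35, seed=42):
    """Toy two-arm screening trial.

    Hypothetical parameters (not from the real studies):
    - base_death_rate: chance an unscreened woman dies of the cancer
    - risk_reduction: fraction of those deaths screening prevents
    Returns (deaths in screened arm, deaths in control arm).
    """
    rng = random.Random(seed)
    screened_rate = base_death_rate * (1 - risk_reduction)
    control_deaths = sum(rng.random() < base_death_rate
                         for _ in range(n_per_arm))
    screened_deaths = sum(rng.random() < screened_rate
                          for _ in range(n_per_arm))
    return screened_deaths, screened_rate, control_deaths

screened, _, control = simulate_trial()
print(f"deaths with screening: {screened}, without: {control}")
```

The ethical tension the critics raise lives in that `control_deaths` count: the comparison only works if one arm goes without the intervention, which is exactly what's being objected to.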

The people conducting the study say women in the control group were told they could seek out screening on their own. Critics argue that point (and the way the study worked) wasn't clearly explained, and that those alternate options weren't as available to the women as researchers imply. The majority of the women participating in the studies are poor and have very little formal education.

There are some important differences between this and the infamous Tuskegee syphilis experiment. In that case, researchers identified men with syphilis and neither told them about their disease nor offered them treatment — just monitored the deadly disease's progress. Here, there's clearly an attempt (however poorly executed) at being open with the women about what the study is and what is being done. And nobody is intentionally trying to prevent sick women from being treated. But the study definitely exists in an uncomfortable space and could reasonably be called unethical. Is it ever okay to not screen people for a disease that you are pretty sure some of them have? If not, how do we figure out whether potentially life-saving screening methods are actually useful? How do you do statistics ethically when people are the numbers? I don't have good answers for these questions.

Here's what we do know. There are 76,000 women enrolled in the National Cancer Institute study, and another 31,000 in The Gates Foundation study. So far, they've been tracked for 12 years and at least 79 of the women in the control groups have died of cervical cancer.

Read Bob Ortega's full story at The Arizona Republic

A suicide draws attention to the ethics of psychiatric drug testing

This is a really important long read that we all need to pay attention to. It concerns how we treat people who are suffering from paranoid delusions — and how we treat people whose families worry that they are a threat to others. It concerns the relationships between doctors and the pharmaceutical industry. It concerns the ethics of clinical trials — the risks we run as we test potential treatments that could help many, or hurt a few, or both. If we want to reform mental health care, this needs to be part of the discussion.

In 2004, Dan Markingson committed suicide. The story behind that death is complicated and depressing. At the Molecules to Medicine blog, Judy Stone documents the whole thing in three must-read chapters. Many people find help in psychiatric drugs, and credit those drugs with making their lives better. (Full disclosure, I'm one of them. I have used Ritalin for several years. I am temporarily on an anti-depressant.) But we have to pay attention to how those drugs get to us. This isn't just about treating people. It's about the process that gets us there. Because, if that process is compromised, the treatments we get won't be as effective and lives will be lost along the way.

Markingson began to show signs of paranoia and delusions in 2003, believing that he needed to murder his mother. He was committed to Fairview Hospital involuntarily after being evaluated by Dr. Stephen Olson, of the University of Minnesota. He was subsequently enrolled on a clinical trial of antipsychotic drugs—despite protests from his mother. This study was a comparison of atypical antipsychotics for the treatment of first episodes of schizophrenia (aka the CAFÉ study), sponsored by AstraZeneca. The study’s structure was that of a Phase 4 randomized, double-blind trial comparing the effectiveness of three different atypical antipsychotic drugs: Zyprexa (olanzapine), Risperdal (risperidone) and Seroquel (quetiapine), with each patient to be treated for a year.

After about two weeks on study treatment in the hospital, Markingson was discharged to a halfway house—again over his mother’s objections. Over the coming months, Dan’s mother, Mary Weiss, continued to express concerns about her son’s deterioration, even asking if her son might have to kill himself before anyone else would take notice…then, in fact, her son violently committed suicide on May 7, 2004, mutilating himself with a box cutter. The University of Minnesota and their IRB have maintained that the study was conducted appropriately and that they have no responsibility for Dan’s death. Dan’s mother and bioethicist Carl Elliott believe otherwise.

We’ll explore some of the major issues of contention in this case over several posts, as illustrative of basic clinical research principles, including adequacy of informed consent, IRB oversight, conflicts of interest, and coercion, including threats to a bioethicist whistleblower.

Read the first part of the story

Read the second part: How clinical trials should be done and how they were done in this case.

Read the third part: Conflicts of interest between the researchers and the pharmaceutical industry.

Image: Pills (white rabbit), a Creative Commons Attribution (2.0) image from erix's photostream

Google’s driver-less cars and robot morality

In the New Yorker, an essay by Gary Marcus on the ethical and legal implications of Google's driver-less cars, arguing that these automated vehicles "usher in the era in which it will no longer be optional for machines to have ethical systems."

Marcus writes,

Your car is speeding along a bridge at fifty miles per hour when an errant school bus carrying forty innocent children crosses its path. Should your car swerve, possibly risking the life of its owner (you), in order to save the children, or keep going, putting all forty kids at risk? If the decision must be made in milliseconds, the computer will have to make the call.

U.S. panel urges end to secret DNA testing for "discreet samples"

On Thursday, The Presidential Commission for the Study of Bioethical Issues released a report on privacy concerns sparked by the advent of whole genome sequencing (decoding the entirety of someone's DNA make-up), and the ease with which commercial startups offer to obtain and decode secretly-swiped DNA samples. Chairperson Amy Gutmann told reporters on Wednesday that there is a "potential for misuse of this very personal data." More at Reuters.

Warning labels can act as nocebos

Remember the nocebo effect? It's the flip side of placebos. Placebos can make people feel better or even relieve pain (to a certain extent). The nocebo effect happens when a placebo causes negative side-effects — nausea, racing heart, dizziness, etc. And here's one more weird thing to add to this veritable bonfire of weirdness: When we tell people about the possible negative side-effects of a real drug, that might make them more likely to experience those side-effects.

In one study, 50 patients with chronic back pain were randomly divided into two groups before a leg flexion test. One group was informed that the test could lead to a slight increase in pain, while the other group was instructed that the test would have no effect. Guess which group reported more pain and was able to perform significantly fewer leg flexions?

Another example from the report: Patients undergoing chemotherapy for cancer treatment who expect these drugs to trigger intense nausea and vomiting suffer far more after receiving the drugs than patients who don’t.

And, like placebos and classic nocebos, this isn't just "all in their head"—at least, not in the sense that they're making it up or deluding themselves. There are measurable physical effects to this stuff.

As science writer Steve Silberman says in the article I've quoted from above, what we're learning here is that the feedback we get from other people ("That might make you feel yucky" or "You look tired today") has a physical effect on us. It's a little insane. It's also worth thinking about when we talk about medical ethics. Full disclosure of what treatments you're getting and what the risks and benefits are is generally regarded as the ethically right way to practice medicine. And that's probably correct. But how do you balance that with what we know about placebo/nocebo? What happens when transparency keeps you from using a harmless placebo as a treatment? What happens when transparency makes you more likely to experience negative health outcomes? It's a strange, strange world and it's not always easy to make the right ethical choices.

Read Steve Silberman's full story on the nocebo effect

Peter Watts's drone-ethics story "Malak" podcast

Tony from the StarShipSofa podcast sez, "This week on StarShipSofa we play the short story Malak, by science fiction writer Peter Watts. Malak was originally published in the anthology Engineering Infinity edited by Jonathan Strahan and views the world of a semi-autonomous combat drone called Azrael and throws in some very powerful ethical questions. A brilliant story from a brilliant writer."

StarShipSofa No 244 Peter Watts/MP3 (Thanks, Tony!)

Market for zero-day vulnerabilities incentivizes programmers to sabotage their own work

In this Forbes editorial, Bruce Schneier points out a really terrible second-order effect of the governments and companies who buy unpublished vulnerabilities from hackers and keep them secret so they can use them for espionage and sabotage. As Schneier points out, this doesn't just make us all less secure (EFF calls it "security for the 1%") because there are so many unpatched flaws that might be exploited by crooks; it also creates an incentive for software engineers to deliberately introduce flaws into the software they're employed to write, and then sell those flaws to governments and slimy companies.

I’ve long argued that the process of finding vulnerabilities in software systems increases overall security. This is because the economics of vulnerability hunting favored disclosure. As long as the principal gain from finding a vulnerability was notoriety, publicly disclosing vulnerabilities was the only obvious path. In fact, it took years for our industry to move from a norm of full-disclosure — announcing the vulnerability publicly and damn the consequences — to something called “responsible disclosure”: giving the software vendor a head start in fixing the vulnerability. Changing economics is what made the change stick: instead of just hacker notoriety, a successful vulnerability finder could land some lucrative consulting gigs, and being a responsible security researcher helped. But regardless of the motivations, a disclosed vulnerability is one that — at least in most cases — is patched. And a patched vulnerability makes us all more secure.

This is why the new market for vulnerabilities is so dangerous; it results in vulnerabilities remaining secret and unpatched. That it’s even more lucrative than the public vulnerabilities market means that more hackers will choose this path. And unlike the previous reward of notoriety and consulting gigs, it gives software programmers within a company the incentive to deliberately create vulnerabilities in the products they’re working on — and then secretly sell them to some government agency.

No commercial vendors perform the level of code review that would be necessary to detect, and prove mal-intent for, this kind of sabotage.

The Vulnerabilities Market and the Future of Security (via Crypto-gram)

Why "ethical" people commit fraud

An All Things Considered segment (MP3) with Chana Joffe-Walt and Alix Spiegel looks at the circumstances that lead to people cheating and committing other frauds. They frame it with the true story of Toby Groves, whose brother had been convicted of fraud, and whose father made him swear a solemn oath to be upstanding in his business dealings. However, Groves found himself committing fraud later, and brought several of his employees in on it.

Typically when we hear about large frauds, we assume the perpetrators were driven by financial incentives. But psychologists and economists say financial incentives don't fully explain it. They're interested in another possible explanation: Human beings commit fraud because human beings like each other.

We like to help each other, especially people we identify with. And when we are helping people, we really don't see what we are doing as unethical.

Lamar Pierce, an associate professor at Washington University in St. Louis, points to the case of emissions testers. Emissions testers are supposed to test whether or not your car is too polluting to stay on the road. If it is, they're supposed to fail you. But in many cases, emissions testers lie.

"Somewhere between 20 percent and 50 percent of cars that should fail are passed — are illicitly passed," Pierce says.

Financial incentives can explain some of that cheating. But Pierce and psychologist Francesca Gino of Harvard Business School say that doesn't fully capture it.

Psychology Of Fraud: Why Good People Do Bad Things

Spoiler: Indie after-the-zombies movie

Spoiler is an independently produced 17-minute horror/science fiction movie that illuminates the kinds of cold equations that have to be solved in pandemic outbreaks. In this case, it's the story of the coroners who keep the zombie plague under control after it's been beaten back. It's a good twist on the traditional zombie movie, and hits a sweet spot of sorrow and horror that you get with the best zombie stories.

The zombie apocalypse happened -- and we won.

But though society has recovered, the threat of infection is always there -- and Los Angeles coroner Tommy Rossman is the man they call when things go wrong.

Spoiler (Thanks, Ben!)

Facebook launches a new game: Organ Farmville

Facebook announced today that the social network's 161 million members in the United States will be encouraged to begin displaying "organ donor status" on their pages, along with birth dates and schools. Some 7,000 people die every year in America while waiting for an organ transplant, and the idea here, according to this New York Times story, is to "create peer pressure to nudge more people to add their names to the rolls of registered donors." Absolutely nothing could go wrong. (via John Schwartz)

Doc blasts mandatory transvaginal ultrasound laws

An anonymous MD has a guest-post on John Scalzi's blog describing her/his medical outrage at being asked to perform medically unnecessary transvaginal ultrasounds on women seeking abortion, in accordance with laws proposed and passed by several Republican-dominated state legislatures. As the doctor writes, "If I insert ANY object into ANY orifice without informed consent, it is rape. And coercion of any kind negates consent, informed or otherwise." The article is a strong tonic and much welcome -- the ethics of medical professionals should not (and must not) become subservient to cheap political stunting, and especially not when the stunt requires doctors' complicity in state-ordered sexual assaults.

1) Just don’t comply. No matter how much our autonomy as physicians has been eroded, we still have control of what our hands do and do not do with a transvaginal ultrasound wand. If this legislation is completely ignored by the people who are supposed to implement it, it will soon be worth less than the paper it is written on.

2) Reinforce patient autonomy. It does not matter what a politician says. A woman is in charge of determining what does and what does not go into her body. If she WANTS a transvaginal ultrasound, fine. If it’s medically indicated, fine… have that discussion with her. We have informed consent for a reason. If she has to be forced to get a transvaginal ultrasound through coercion or overly impassioned argument or implied threats of withdrawal of care, that is NOT FINE.

Our position is to recommend medically-indicated tests and treatments that have a favorable benefit-to-harm ratio… and it is up to the patient to decide what she will and will not allow. Period. Politicians do not have any role in this process. NO ONE has a role in this process but the patient and her physician. If anyone tries to get in the way of that, it is our duty to run interference.

Guest Post: A Doctor on Transvaginal Ultrasounds

The case for dolphin rights

Recently, I posted a series of videos where science writers talked about some of the fascinating things they learned at the 2012 American Association for the Advancement of Science conference. In one of those clips, Eric Michael Johnson talked a bit about a panel session on whether or not certain cetaceans—primarily whales and dolphins—deserve to have rights under the law, the same as people have.

This is an issue that just begs controversy. But in a recent blog post following up on that panel and the meaning behind it, Johnson explains that it's not quite as crazy an idea as it might at first sound.

It was just this understanding of rights as obligations that governments must obey that formed the basis for a declaration of rights for cetaceans (whales and dolphins) at the annual meeting of the American Association for the Advancement of Science held in Vancouver, Canada last month. Such a declaration is a minefield ripe for misunderstanding, as the BBC quickly demonstrated with their headline, “Dolphins deserve same rights as humans, say scientists.” However, according to Thomas I. White, Conrad N. Hilton Chair of Business Ethics at Loyola Marymount University in Los Angeles, the idea of granting personhood rights to nonhumans would not make them equal to humans under law. They would not vote, sit on a jury, or attend public school. However, by legally making whales and dolphins “nonhuman persons,” with individual rights under law, it would obligate governments to protect cetaceans from slaughter or abuse.

“The evidence for cognitive and affective sophistication—currently most strongly documented in dolphins—supports the claim that these cetaceans are ‘non-human persons,’” said White. As a result, cetaceans should be seen as “beyond use” by humans and have “moral standing” as individuals. “It is, therefore, ethically indefensible to kill, injure or keep these beings captive for human purposes,” he said.

Johnson also makes an interesting point—there's a legal basis for this kind of thing. After all, if corporations can be people, my friends, why not dolphins?

PREVIOUSLY

Image: Dolphins, a Creative Commons Attribution Share-Alike (2.0) image from hassanrafeek's photostream

What it's like to wear a brain-stimulating "thinking cap"

Science writer Sally Adee provides some background on her New Scientist article describing her experience with a DARPA program that uses targeted electrical stimulation of the brain during training exercises to induce "flow states" and enhance learning. The "thinking cap" is something like the tasp of science fiction, and the experimental evidence for it as a learning enhancement tool is pretty good thus far -- and the experimental subjects report that the experience feels wonderful (Adee: "the thing I wanted most acutely for the weeks following my experience was to go back and strap on those electrodes.")

We don’t yet have a commercially available “thinking cap” but we will soon. So the research community has begun to ask: What are the ethics of battery-operated cognitive enhancement? Last week a group of Oxford University neuroscientists released a cautionary statement about the ethics of brain boosting, followed quickly by a report from the UK’s Royal Society that questioned the use of tDCS for military applications. Is brain boosting a fair addition to the cognitive enhancement arms race? Will it create a Morlock/Eloi-like social divide where the rich can afford to be smarter and leave everyone else behind? Will Tiger Moms force their lazy kids to strap on a zappity helmet during piano practice?

After trying it myself, I have different questions. To make you understand, I am going to tell you how it felt. The experience wasn’t simply about the easy pleasure of undeserved expertise. When the nice neuroscientists put the electrodes on me, the thing that made the earth drop out from under my feet was that for the first time in my life, everything in my head finally shut the fuck up.

The experiment I underwent was accelerated marksmanship training on a simulation the military uses. I spent a few hours learning how to shoot a modified M4 close-range assault rifle, first without tDCS and then with. Without it I was terrible, and when you’re terrible at something, all you can do is obsess about how terrible you are. And how much you want to stop doing the thing you are terrible at.

Then this happened:

The 20 minutes I spent hitting targets while electricity coursed through my brain were far from transcendent. I only remember feeling like I had just had an excellent cup of coffee, but without the caffeine jitters. I felt clear-headed and like myself, just sharper. Calmer. Without fear and without doubt. From there on, I just spent the time waiting for a problem to appear so that I could solve it.

If you want to try the (obviously ill-advised) experiment of applying current directly to your brain, here's some HOWTOs. Remember, if you can't open it, you don't own it!

Better Living Through Electrochemistry (via JWZ)

Undercover UK cops infiltrated environmental groups, seduced women in the groups, fathered children with them, abandoned them

Undercover police agents in the UK infiltrated environmental groups, had sex with their members, struck up long-term relationships with women in these groups, fathered children with these women, and then abandoned the children.

Two undercover police officers secretly fathered children with political campaigners they had been sent to spy on and later disappeared completely from the lives of their offspring, the Guardian can reveal.

In both cases, the children have grown up not knowing that their biological fathers – whom they have not seen in decades – were police officers who had adopted fake identities to infiltrate activist groups. Both men have concealed their true identities from the children's mothers for many years.

Good thing the police were there, though. Who knows what kind of unethical behaviour an environmentalist might be getting up to.

Undercover police had children with activists

Scary science, national security, and open-source research

I’ve been following the story about the scientists who have been working to figure out how H5N1 bird flu might become transmissible from human to human, the controversial research they used to study that question, and the federal recommendations that are now threatening to keep that research under wraps.

Read the rest