The Sacramento Bee is reporting on a complicated story about last-ditch treatments and the ethics of human experimentation.
Glioblastomas are incredibly deadly brain cancers that usually kill the people diagnosed with them within 15 months. Two neurosurgeons at UC Davis ran across anecdotal evidence suggesting that glioblastoma patients who accidentally picked up infections after surgery sometimes lived much longer — one of the surgeons claims that a patient he knew of survived another 20 years. Read the rest
Cell culture lines are cells, taken from donor tissue, that have been divided and separated over and over and over — providing researchers with reliably identical "families" of cells that can be used in biomedical research. Some, like the now-famous HeLa line, are derived from cancerous tissue and replicate indefinitely. Others, like WI-38, will only divide a set number of times (in the case of WI-38, it's 50), but new cells can be frozen at any point and stored. When you thaw them out later, they'll pick back up dividing from the point in the 50-division cycle where they were when frozen.
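That freeze-and-resume behavior can be pictured as a simple countdown. Here's a toy sketch (the class and its numbers are illustrative only, not real biology software): a finite line like WI-38 has a fixed division budget, and a frozen vial remembers its place in that countdown.

```python
class CellLine:
    """Toy model of a finite cell line with a fixed division budget."""

    def __init__(self, divisions_remaining=50):
        self.divisions_remaining = divisions_remaining

    def divide(self):
        """Spend one division from the fixed budget."""
        if self.divisions_remaining == 0:
            raise RuntimeError("line is senescent: no divisions left")
        self.divisions_remaining -= 1

    def freeze(self):
        """A frozen aliquot keeps its place in the countdown."""
        return CellLine(self.divisions_remaining)


wi38 = CellLine(50)
for _ in range(20):
    wi38.divide()

vial = wi38.freeze()   # stored with 30 divisions left
# ...years later, the thawed vial picks up where it left off:
assert vial.divisions_remaining == 30
```

This is also why a HeLa-style immortal line is the odd case out: its "budget" never runs down.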
WI-38 is a particularly important cell culture line. Used extensively in the development of vaccines, these are the cells that helped create the vaccine for rubella, a disease that, just a few decades ago, killed and maimed many fetuses whose mothers became infected. Between 1962 and 1965, it's estimated that rubella infections caused 30,000 stillbirths and left 20,000 children with lifelong disabilities.
But WI-38 is controversial. That's partly because the cells that founded the line came from the lung tissue of a fetus that was legally aborted during the fourth month of pregnancy by a woman in Sweden in 1962. At Nature News, Meredith Wadman has a fascinating long read about the moral and ethical issues surrounding WI-38. This isn't just about the abortion question. Also at issue: Did the fetus' mother consent to tissue donation? And are we okay with the fact that she and her family have never received compensation, despite the money that's been made off selling WI-38 cell cultures? Read the rest
Here's a press release describing a paywalled paper in Science magazine, written by a pair of University of Bonn economists. They conducted an experiment showing how markets diffuse responsibility for actions that violate individual moral codes — so that people did things in market contexts that they had previously described as immoral when done individually.
Read the rest
"To study immoral outcomes, we studied whether people are willing to harm a third party in exchange to receiving money. Harming others in an intentional and unjustified way is typically considered unethical," says Prof. Falk. The animals involved in the study were so-called "surplus mice", raised in laboratories outside Germany. These mice are no longer needed for research purposes. Without the experiment, they would have all been killed. As a consequence of the study many hundreds of young mice that would otherwise all have died were saved. If a subject decided to save a mouse, the experimenters bought the animal. The saved mice are perfectly healthy and live under best possible lab conditions and medical care.
A subgroup of subjects decided between life and money in a non-market decision context (individual condition). This condition allows for eliciting the moral standards held by individuals. It was compared to two market conditions in which either only one buyer and one seller (bilateral market) or a larger number of buyers and sellers (multilateral market) could trade with each other. If a market offer was accepted, a trade was completed, resulting in the death of a mouse. Compared to the individual condition, a significantly higher number of subjects were willing to accept the killing of a mouse in both market conditions.
How do we know whether screening for something like cervical cancer is effective at saving women's lives? Two ongoing studies conducted in India (one funded by the National Cancer Institute and the other by The Gates Foundation) are aimed at answering that question — but their methods are under fire by critics.
It works like this. Say you want to test the effectiveness of a new screening method. You recruit a large group of women and you split them into two groups. One group gets the screening regularly. The other, the control group, doesn't get the screening. Then you follow them over time and track how many women in both groups died of cancer. That's a pretty basic scientific method. It's also something that prompts big questions about the treatment of women in the control group.
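The two-arm design described above can be sketched in a few lines of code. This is a hypothetical toy simulation — the mortality rate, the effect size, and the function name are all invented for illustration, not drawn from either Indian study: women are randomized into a screened arm or a control arm, and mortality is compared between the two.

```python
import random

def run_trial(n=10000, base_mortality=0.02, screening_benefit=0.5, seed=1):
    """Randomize n participants into two arms and compare death rates.

    base_mortality and screening_benefit are made-up illustrative numbers:
    the screened arm's risk is base_mortality * screening_benefit.
    """
    rng = random.Random(seed)
    deaths = {"screened": 0, "control": 0}
    sizes = {"screened": 0, "control": 0}
    for _ in range(n):
        arm = rng.choice(["screened", "control"])  # random assignment
        sizes[arm] += 1
        risk = base_mortality * (screening_benefit if arm == "screened" else 1.0)
        if rng.random() < risk:
            deaths[arm] += 1
    # Mortality rate per arm is the statistic the trial compares.
    return {arm: deaths[arm] / sizes[arm] for arm in deaths}

rates = run_trial()
# With these made-up numbers, the screened arm shows lower mortality.
assert rates["screened"] < rates["control"]
```

The ethical tension lives entirely in that `"control"` branch: the statistics only work because one arm goes unscreened, which is exactly what critics of the Indian studies object to.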
The people conducting the study say women in the control group were told they could seek out screening on their own. Critics argue that this point (and the way the study worked) wasn't clearly explained, and that those alternate options weren't as available to the women as the researchers imply. The majority of the women participating in the studies are poor and have very little formal education.
There are some important differences between this and the infamous Tuskegee syphilis experiment. In that case, researchers identified men with syphilis and neither told them about their disease nor offered them treatment — just monitored the deadly disease's progress. Here, there's clearly an attempt (however poorly executed) at being open with the women about what the study is and what is being done. Read the rest
This is a really important long read that we all need to pay attention to. It concerns how we treat people who are suffering from paranoid delusions — and how we treat people whose families worry that they are a threat to others. It concerns the relationships between doctors and the pharmaceutical industry. It concerns the ethics of clinical trials — the risks we run as we test potential treatments that could help many, or hurt a few, or both. If we want to reform mental health care, this needs to be part of the discussion.
In 2004, Dan Markingson committed suicide. The story behind that death is complicated and depressing. At the Molecules to Medicine blog, Judy Stone documents the whole thing in three must-read chapters. Many people find help in psychiatric drugs, and credit those drugs with making their lives better. (Full disclosure: I'm one of them. I have used Ritalin for several years, and I am temporarily on an anti-depressant.) But we have to pay attention to how those drugs get to us. This isn't just about treating people. It's about the process that gets us there. Because, if that process is compromised, the treatments we get won't be as effective and lives will be lost along the way.
Read the rest
Markingson began to show signs of paranoia and delusions in 2003, believing that he needed to murder his mother. He was committed to Fairview Hospital involuntarily after being evaluated by Dr. Stephen Olson, of the University of Minnesota. He was subsequently enrolled in a clinical trial of antipsychotic drugs—despite protests from his mother.
In the New Yorker, an essay by Gary Marcus on the ethical and legal implications of Google's driverless cars, which argues that these automated vehicles "usher in the era in which it will no longer be optional for machines to have ethical systems."
Your car is speeding along a bridge at fifty miles per hour when an errant school bus carrying forty innocent children crosses its path. Should your car swerve, possibly risking the life of its owner (you), in order to save the children, or keep going, putting all forty kids at risk? If the decision must be made in milliseconds, the computer will have to make the call. Read the rest
Remember the nocebo effect? It's the flip side of placebos. Placebos can make people feel better or even relieve pain (to a certain extent). Nocebo happens when a placebo causes negative side-effects—nausea, racing heart, dizziness, etc. And here's one more weird thing to add to this veritable bonfire of weirdness: When we tell people about the possible negative side-effects of a real drug, that might make them more likely to experience those side-effects.
In one study, 50 patients with chronic back pain were randomly divided into two groups before a leg flexion test. One group was informed that the test could lead to a slight increase in pain, while the other group was instructed that the test would have no effect. Guess which group reported more pain and was able to perform significantly fewer leg flexions?
Another example from the report: Patients undergoing chemotherapy for cancer treatment who expect these drugs to trigger intense nausea and vomiting suffer far more after receiving the drugs than patients who don’t.
And, like placebos and classic nocebos, this isn't just "all in their head"—at least, not in the sense that they're making it up or deluding themselves. There are measurable physical effects to this stuff.
As science writer Steve Silberman says in the article I've quoted from above, what we're learning here is that the feedback we get from other people ("That might make you feel yucky" or "You look tired today") has a physical effect on us. It's a little insane. It's also worth thinking about when we talk about medical ethics. Read the rest
Tony from the StarShipSofa podcast sez, "This week on StarShipSofa we play the short story Malak, by science fiction writer Peter Watts. Malak was originally published in the anthology Engineering Infinity edited by Jonathan Strahan and views the world of a semi-autonomous combat drone called Azrael and throws in some very powerful ethical questions. A brilliant story from a brilliant writer."
In this Forbes editorial, Bruce Schneier points out a really terrible second-order effect of the governments and companies who buy unpublished vulnerabilities from hackers and keep them secret so they can use them for espionage and sabotage. As Schneier points out, this doesn't just make us all less secure (EFF calls it "security for the 1%") because there are so many unpatched flaws that might be exploited by crooks; it also creates an incentive for software engineers to deliberately introduce flaws into the software they're employed to write, and then sell those flaws to governments and slimy companies.
Read the rest
I’ve long argued that the process of finding vulnerabilities in software systems increases overall security. This is because the economics of vulnerability hunting favored disclosure. As long as the principal gain from finding a vulnerability was notoriety, publicly disclosing vulnerabilities was the only obvious path. In fact, it took years for our industry to move from a norm of full disclosure — announcing the vulnerability publicly and damn the consequences — to something called “responsible disclosure”: giving the software vendor a head start in fixing the vulnerability. Changing economics is what made the change stick: instead of just hacker notoriety, a successful vulnerability finder could land some lucrative consulting gigs, and being a responsible security researcher helped. But regardless of the motivations, a disclosed vulnerability is one that — at least in most cases — is patched. And a patched vulnerability makes us all more secure.
This is why the new market for vulnerabilities is so dangerous; it results in vulnerabilities remaining secret and unpatched.
An All Things Considered segment (MP3) with Chana Joffe-Walt and Alix Spiegel looks at the circumstances that lead people to cheat and commit other frauds. They frame it with the true story of Toby Groves, whose brother had been convicted of fraud, and whose father made him swear a solemn oath to be upstanding in his business dealings. Nevertheless, Groves later found himself committing fraud, and brought several of his employees in on it.
Typically when we hear about large frauds, we assume the perpetrators were driven by financial incentives. But psychologists and economists say financial incentives don't fully explain it. They're interested in another possible explanation: Human beings commit fraud because human beings like each other.
We like to help each other, especially people we identify with. And when we are helping people, we really don't see what we are doing as unethical.
Lamar Pierce, an associate professor at Washington University in St. Louis, points to the case of emissions testers. Emissions testers are supposed to test whether or not your car is too polluting to stay on the road. If it is, they're supposed to fail you. But in many cases, emissions testers lie.
"Somewhere between 20 percent and 50 percent of cars that should fail are passed — are illicitly passed," Pierce says.
Financial incentives can explain some of that cheating. But Pierce and psychologist Francesca Gino of Harvard Business School say that doesn't fully capture it.