Cell culture lines are cells, taken from donor tissue, that have been divided and separated over and over and over — providing researchers with reliably identical "families" of cells that can be used in biomedical research. Some, like the now-famous HeLa line, are derived from cancerous tissue and replicate indefinitely. Others, like WI-38, will only divide a set number of times (in the case of WI-38, it's 50), but new cells can be frozen at any point and stored. When you thaw them out later, they'll pick back up dividing from the point in the 50-division cycle where they were when frozen.
WI-38 is a particularly important cell culture line. Used extensively in the development of vaccines, these are the cells that helped create the vaccine for rubella, a disease that, just a few decades ago, used to kill and maim many fetuses whose mothers became infected. Between 1962 and 1965, it's estimated that rubella infections caused 30,000 stillbirths and left 20,000 children with life-long disabilities.
But WI-38 is controversial. That's partly because the cells that founded the line came from the lung tissue of a fetus that was legally aborted during the fourth month of pregnancy by a woman in Sweden in 1962. At Nature News, Meredith Wadman has a fascinating long read about the moral and ethical issues surrounding WI-38. This isn't just about the abortion question. Also at issue: Did the fetus' mother consent to tissue donation? And are we okay with the fact that she and her family have never received compensation, despite the money that's been made off selling WI-38 cell cultures? Read the rest
Here's a press release describing a paywalled paper in Science magazine, written by a pair of University of Bonn economists. They conducted an experiment that showed how markets diffuse responsibility for actions that end up violating individual moral codes, so that people did things in market contexts that they had previously described as immoral when done individually.
Read the rest
"To study immoral outcomes, we studied whether people are willing to harm a third party in exchange for receiving money. Harming others in an intentional and unjustified way is typically considered unethical," says Prof. Falk. The animals involved in the study were so-called "surplus mice", raised in laboratories outside Germany. These mice are no longer needed for research purposes. Without the experiment, they would all have been killed. As a consequence of the study, many hundreds of young mice that would otherwise have died were saved. If a subject decided to save a mouse, the experimenters bought the animal. The saved mice are perfectly healthy and live under the best possible lab conditions and medical care.
A subgroup of subjects decided between life and money in a non-market decision context (individual condition). This condition allows for eliciting moral standards held by individuals. The condition was compared to two market conditions in which either only one buyer and one seller (bilateral market) or a larger number of buyers and sellers (multilateral market) could trade with each other. If a market offer was accepted, a trade was completed, resulting in the death of a mouse. Compared to the individual condition, a significantly higher number of subjects were willing to accept the killing of a mouse in both market conditions.
How do we know whether screening for something like cervical cancer is effective at saving women's lives? Two ongoing studies conducted in India (one funded by the National Cancer Institute and the other by The Gates Foundation) are aimed at answering that question — but their methods are under fire by critics.
It works like this. Say you want to test the effectiveness of a new screening method. You recruit a large group of women and split them into two groups. One group gets the screening regularly. The other, the control group, doesn't get the screening. Then you follow them over time and track how many women in each group die of cancer. That's a pretty basic scientific method. It's also something that prompts big questions about the treatment of women in the control group.
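The logic of that two-arm design can be sketched as a tiny simulation. Everything here is hypothetical — the group size and the mortality rates are made-up numbers chosen only to illustrate why comparing a screened group against an unscreened control reveals whether screening saves lives:

```python
import random

random.seed(0)  # fixed seed so the illustration is reproducible

def simulate_arm(n, mortality_rate):
    """Count cancer deaths in a trial arm of n participants, each dying
    with the given (entirely hypothetical) probability over the follow-up."""
    return sum(random.random() < mortality_rate for _ in range(n))

# Assumed rates for illustration only: screening halves cancer
# mortality from 0.8% to 0.4% in this sketch.
n = 50_000
screened_deaths = simulate_arm(n, 0.004)
control_deaths = simulate_arm(n, 0.008)

print(screened_deaths, control_deaths)
```

If the screening works, the death count in the screened arm should come out meaningfully lower than in the control arm — which is exactly the comparison the real studies are running, and exactly why the fate of the control group is ethically fraught.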
The people conducting the study say women in the control group were told they could seek out screening on their own. Critics argue that point (and the way the study worked) wasn't clearly explained, and that those alternative options weren't as available to the women as researchers imply. The majority of the women participating in the studies are poor and have very little formal education.
There are some important differences between this and the infamous Tuskegee syphilis experiment. In that case, researchers identified men with syphilis and neither told them about their disease nor offered them treatment — just monitored the deadly disease's progress. Here, there's clearly an attempt (however poorly executed) at being open with the women about what the study is and what is being done. Read the rest
This is a really important long read that we all need to pay attention to. It concerns how we treat people who are suffering from paranoid delusions — and how we treat people whose families worry that they are a threat to others. It concerns the relationships between doctors and the pharmaceutical industry. It concerns the ethics of clinical trials — the risks we run as we test potential treatments that could help many, or hurt a few, or both. If we want to reform mental health care, this needs to be part of the discussion.
In 2004, Dan Markingson committed suicide. The story behind that death is complicated and depressing. At the Molecules to Medicine blog, Judy Stone documents the whole thing in three must-read chapters. Many people find help in psychiatric drugs, and credit those drugs with making their lives better. (Full disclosure, I'm one of them. I have used Ritalin for several years. I am temporarily on an anti-depressant.) But we have to pay attention to how those drugs get to us. This isn't just about treating people. It's about the process that gets us there. Because, if that process is compromised, the treatments we get won't be as effective and lives will be lost along the way.
Read the rest
Markingson began to show signs of paranoia and delusions in 2003, believing that he needed to murder his mother. He was committed to Fairview Hospital involuntarily after being evaluated by Dr. Stephen Olson, of the University of Minnesota. He was subsequently enrolled on a clinical trial of antipsychotic drugs—despite protests from his mother.
In the New Yorker, an essay by Gary Marcus on the ethical and legal implications of Google's driverless cars, which argues that these automated vehicles "usher in the era in which it will no longer be optional for machines to have ethical systems."
Your car is speeding along a bridge at fifty miles per hour when an errant school bus carrying forty innocent children crosses its path. Should your car swerve, possibly risking the life of its owner (you), in order to save the children, or keep going, putting all forty kids at risk? If the decision must be made in milliseconds, the computer will have to make the call. Read the rest
Remember the nocebo effect? It's the flip side of placebos. Placebos can make people feel better or even relieve pain (to a certain extent). Nocebo happens when a placebo causes negative side-effects—nausea, racing heart, dizziness, etc. And here's one more weird thing to add to this veritable bonfire of weirdness: When we tell people about the possible negative side-effects of a real drug, that might make them more likely to experience those side-effects.
In one study, 50 patients with chronic back pain were randomly divided into two groups before a leg flexion test. One group was informed that the test could lead to a slight increase in pain, while the other group was instructed that the test would have no effect. Guess which group reported more pain and was able to perform significantly fewer leg flexions?
Another example from the report: Patients undergoing chemotherapy for cancer treatment who expect these drugs to trigger intense nausea and vomiting suffer far more after receiving the drugs than patients who don’t.
And, like placebos and classic nocebos, this isn't just "all in their head"—at least, not in the sense that they're making it up or deluding themselves. There are measurable physical effects to this stuff.
As science writer Steve Silberman says in the article I've quoted from above, what we're learning here is that the feedback we get from other people ("That might make you feel yucky" or "You look tired today") has a physical effect on us. It's a little insane. It's also worth thinking about when we talk about medical ethics. Read the rest
Tony from the StarShipSofa podcast sez, "This week on StarShipSofa we play the short story Malak, by science fiction writer Peter Watts. Malak was originally published in the anthology Engineering Infinity edited by Jonathan Strahan. It views the world through the eyes of a semi-autonomous combat drone called Azrael and throws in some very powerful ethical questions. A brilliant story from a brilliant writer."
In this Forbes editorial, Bruce Schneier points out a really terrible second-order effect of the governments and companies who buy unpublished vulnerabilities from hackers and keep them secret so they can use them for espionage and sabotage. As Schneier points out, this doesn't just make us all less secure (EFF calls it "security for the 1%") because there are so many unpatched flaws that might be exploited by crooks; it also creates an incentive for software engineers to deliberately introduce flaws into the software they're employed to write, and then sell those flaws to governments and slimy companies.
Read the rest
I’ve long argued that the process of finding vulnerabilities in software systems increases overall security. This is because the economics of vulnerability hunting favored disclosure. As long as the principal gain from finding a vulnerability was notoriety, publicly disclosing vulnerabilities was the only obvious path. In fact, it took years for our industry to move from a norm of full-disclosure — announcing the vulnerability publicly and damn the consequences — to something called “responsible disclosure”: giving the software vendor a head start in fixing the vulnerability. Changing economics is what made the change stick: instead of just hacker notoriety, a successful vulnerability finder could land some lucrative consulting gigs, and being a responsible security researcher helped. But regardless of the motivations, a disclosed vulnerability is one that — at least in most cases — is patched. And a patched vulnerability makes us all more secure.
This is why the new market for vulnerabilities is so dangerous; it results in vulnerabilities remaining secret and unpatched.
An All Things Considered segment (MP3) with Chana Joffe-Walt and Alix Spiegel looks at the circumstances that lead to people cheating and committing other frauds. They frame it with the true story of Toby Groves, whose brother had been convicted of fraud, and whose father made him swear a solemn oath to be upstanding in his business dealings. However, Groves found himself committing fraud later, and brought several of his employees in on it.
Typically when we hear about large frauds, we assume the perpetrators were driven by financial incentives. But psychologists and economists say financial incentives don't fully explain it. They're interested in another possible explanation: Human beings commit fraud because human beings like each other.
We like to help each other, especially people we identify with. And when we are helping people, we really don't see what we are doing as unethical.
Lamar Pierce, an associate professor at Washington University in St. Louis, points to the case of emissions testers. Emissions testers are supposed to test whether or not your car is too polluting to stay on the road. If it is, they're supposed to fail you. But in many cases, emissions testers lie.
"Somewhere between 20 percent and 50 percent of cars that should fail are passed — are illicitly passed," Pierce says.
Financial incentives can explain some of that cheating. But Pierce and psychologist Francesca Gino of Harvard Business School say that doesn't fully capture it.
Spoiler is an independently produced 17-minute horror/science fiction movie that illuminates the kinds of cold equations that have to be solved in pandemic outbreaks. In this case, it's the story of the coroners who keep the zombie plague under control after it's been beaten back. It's a good twist on the traditional zombie movie, and hits a sweet spot of sorrow and horror that you get with the best zombie stories.
The zombie apocalypse happened -- and we won.
But though society has recovered, the threat of infection is always there -- and Los Angeles coroner Tommy Rossman is the man they call when things go wrong.
An anonymous MD has a guest-post on John Scalzi's blog describing her/his medical outrage at being asked to perform medically unnecessary transvaginal ultrasounds on women seeking abortion, in accordance with laws proposed and passed by several Republican-dominated state legislatures. As the doctor writes, "If I insert ANY object into ANY orifice without informed consent, it is rape. And coercion of any kind negates consent, informed or otherwise." The article is a strong tonic and much welcome — the ethics of medical professionals should not (and must not) become subservient to cheap political stunts, and especially not when the stunt requires doctors' complicity in state-ordered sexual assaults.
Read the rest
1) Just don’t comply. No matter how much our autonomy as physicians has been eroded, we still have control of what our hands do and do not do with a transvaginal ultrasound wand. If this legislation is completely ignored by the people who are supposed to implement it, it will soon be worth less than the paper it is written on.
2) Reinforce patient autonomy. It does not matter what a politician says. A woman is in charge of determining what does and what does not go into her body. If she WANTS a transvaginal ultrasound, fine. If it’s medically indicated, fine… have that discussion with her. We have informed consent for a reason. If she has to be forced to get a transvaginal ultrasound through coercion or overly impassioned argument or implied threats of withdrawal of care, that is NOT FINE.
Our position is to recommend medically-indicated tests and treatments that have a favorable benefit-to-harm ratio… and it is up to the patient to decide what she will and will not allow.
Recently, I posted a series of videos where science writers talked about some of the fascinating things they learned at the 2012 American Association for the Advancement of Science conference. In one of those clips, Eric Michael Johnson talked a bit about a panel session on whether or not certain cetaceans—primarily whales and dolphins—deserve to have legal rights under the law, the same as people have.
This is an issue that just begs controversy. But in a recent blog post following up on that panel and the meaning behind it, Johnson explains that it's not quite as crazy an idea as it might at first sound.
Read the rest
It was just this understanding of rights as obligations that governments must obey that formed the basis for a declaration of rights for cetaceans (whales and dolphins) at the annual meeting of the American Association for the Advancement of Science held in Vancouver, Canada last month. Such a declaration is a minefield ripe for misunderstanding, as the BBC quickly demonstrated with their headline, “Dolphins deserve same rights as humans, say scientists.” However, according to Thomas I. White, Conrad N. Hilton Chair of Business Ethics at Loyola Marymount University in Los Angeles, the idea of granting personhood rights to nonhumans would not make them equal to humans under law. They would not vote, sit on a jury, or attend public school. However, by legally making whales and dolphins “nonhuman persons,” with individual rights under law, it would obligate governments to protect cetaceans from slaughter or abuse.
Science writer Sally Adee provides some background on her New Scientist article describing her experience with a DARPA program that uses targeted electrical stimulation of the brain during training exercises to induce "flow states" and enhance learning. The "thinking cap" is something like the tasp of science fiction, and the experimental evidence for it as a learning enhancement tool is pretty good thus far -- and the experimental subjects report that the experience feels wonderful (Adee: "the thing I wanted most acutely for the weeks following my experience was to go back and strap on those electrodes.")
Read the rest
We don’t yet have a commercially available “thinking cap” but we will soon. So the research community has begun to ask: What are the ethics of battery-operated cognitive enhancement? Last week a group of Oxford University neuroscientists released a cautionary statement about the ethics of brain boosting, followed quickly by a report from the UK’s Royal Society that questioned the use of tDCS for military applications. Is brain boosting a fair addition to the cognitive enhancement arms race? Will it create a Morlock/Eloi-like social divide where the rich can afford to be smarter and leave everyone else behind? Will Tiger Moms force their lazy kids to strap on a zappity helmet during piano practice?
After trying it myself, I have different questions. To make you understand, I am going to tell you how it felt. The experience wasn’t simply about the easy pleasure of undeserved expertise. When the nice neuroscientists put the electrodes on me, the thing that made the earth drop out from under my feet was that for the first time in my life, everything in my head finally shut the fuck up.