In the New Yorker, Gary Marcus has an essay on the ethical and legal implications of Google's driverless cars, arguing that these automated vehicles "usher in the era in which it will no longer be optional for machines to have ethical systems."
Your car is speeding along a bridge at fifty miles per hour when an errant school bus carrying forty innocent children crosses its path. Should your car swerve, possibly risking the life of its owner (you), in order to save the children, or keep going, putting all forty kids at risk? If the decision must be made in milliseconds, the computer will have to make the call.
On Thursday, the Presidential Commission for the Study of Bioethical Issues released a report on privacy concerns sparked by the advent of whole genome sequencing (decoding the entirety of someone's DNA make-up), and the ease with which commercial startups offer to obtain and decode secretly swiped DNA samples. Chairperson Amy Gutmann told reporters on Wednesday that there is a "potential for misuse of this very personal data." More at Reuters. — Xeni
Remember the nocebo effect? It's the flip side of placebos. Placebos can make people feel better or even relieve pain (to a certain extent). Nocebo happens when a placebo causes negative side-effects—nausea, racing heart, dizziness, etc. And here's one more weird thing to add to this veritable bonfire of weirdness: When we tell people about the possible negative side-effects of a real drug, that might make them more likely to experience those side-effects.
In one study, 50 patients with chronic back pain were randomly divided into two groups before a leg flexion test. One group was informed that the test could lead to a slight increase in pain, while the other group was instructed that the test would have no effect. Guess which group reported more pain and was able to perform significantly fewer leg flexions?
Another example from the report: Patients undergoing chemotherapy for cancer treatment who expect these drugs to trigger intense nausea and vomiting suffer far more after receiving the drugs than patients who don’t.
And, like placebos and classic nocebos, this isn't just "all in their head"—at least, not in the sense that they're making it up or deluding themselves. There are measurable physical effects to this stuff.
As science writer Steve Silberman says in the article I've quoted from above, what we're learning here is that the feedback we get from other people ("That might make you feel yucky" or "You look tired today") has a physical effect on us. It's a little insane. It's also worth thinking about when we talk about medical ethics. Full disclosure of what treatments you're getting and what the risks and benefits are is generally regarded as the ethically right way to practice medicine. And that's probably correct. But how do you balance that with what we know about placebo/nocebo? What happens when transparency keeps you from using a harmless placebo as a treatment? What happens when transparency makes you more likely to experience negative health outcomes? It's a strange, strange world and it's not always easy to make the right ethical choices.
Tony from the StarShipSofa podcast sez, "This week on StarShipSofa we play the short story Malak, by science fiction writer Peter Watts. Malak was originally published in the anthology Engineering Infinity edited by Jonathan Strahan and views the world of a semi-autonomous combat drone called Azrael and throws in some very powerful ethical questions. A brilliant story from a brilliant writer."
In this Forbes editorial, Bruce Schneier points out a really terrible second-order effect of the governments and companies that buy unpublished vulnerabilities from hackers and keep them secret so they can use them for espionage and sabotage. As Schneier points out, this doesn't just make us all less secure (EFF calls it "security for the 1%") because there are so many unpatched flaws that might be exploited by crooks; it also creates an incentive for software engineers to deliberately introduce flaws into the software they're employed to write, and then sell those flaws to governments and slimy companies.
I’ve long argued that the process of finding vulnerabilities in software systems increases overall security. This is because the economics of vulnerability hunting favored disclosure. As long as the principal gain from finding a vulnerability was notoriety, publicly disclosing vulnerabilities was the only obvious path. In fact, it took years for our industry to move from a norm of full disclosure — announcing the vulnerability publicly and damn the consequences — to something called “responsible disclosure”: giving the software vendor a head start in fixing the vulnerability. Changing economics is what made the change stick: instead of just hacker notoriety, a successful vulnerability finder could land some lucrative consulting gigs, and being a responsible security researcher helped. But regardless of the motivations, a disclosed vulnerability is one that — at least in most cases — is patched. And a patched vulnerability makes us all more secure.
This is why the new market for vulnerabilities is so dangerous; it results in vulnerabilities remaining secret and unpatched. That it’s even more lucrative than the public vulnerabilities market means that more hackers will choose this path. And unlike the previous reward of notoriety and consulting gigs, it gives software programmers within a company the incentive to deliberately create vulnerabilities in the products they’re working on — and then secretly sell them to some government agency.
No commercial vendors perform the level of code review that would be necessary to detect, and prove mal-intent for, this kind of sabotage.
An All Things Considered segment (MP3) with Chana Joffe-Walt and Alix Spiegel looks at the circumstances that lead to people cheating and committing other frauds. They frame it with the true story of Toby Groves, whose brother had been convicted of fraud, and whose father made him swear a solemn oath to be upstanding in his business dealings. However, Groves found himself committing fraud later, and brought several of his employees in on it.
Typically when we hear about large frauds, we assume the perpetrators were driven by financial incentives. But psychologists and economists say financial incentives don't fully explain it. They're interested in another possible explanation: Human beings commit fraud because human beings like each other.
We like to help each other, especially people we identify with. And when we are helping people, we really don't see what we are doing as unethical.
Lamar Pierce, an associate professor at Washington University in St. Louis, points to the case of emissions testers. Emissions testers are supposed to test whether or not your car is too polluting to stay on the road. If it is, they're supposed to fail you. But in many cases, emissions testers lie.
"Somewhere between 20 percent and 50 percent of cars that should fail are passed — are illicitly passed," Pierce says.
Financial incentives can explain some of that cheating. But Pierce and psychologist Francesca Gino of Harvard Business School say that doesn't fully capture it.
Spoiler is an independently produced 17-minute horror/science fiction movie that illuminates the kinds of cold equations that have to be solved in pandemic outbreaks. In this case, it's the story of the coroners who keep the zombie plague under control after it's been beaten back. It's a good twist on the traditional zombie movie, and hits a sweet spot of sorrow and horror that you get with the best zombie stories.
The zombie apocalypse happened -- and we won.
But though society has recovered, the threat of infection is always there -- and Los Angeles coroner Tommy Rossman is the man they call when things go wrong.
Facebook announced today that the social network's 161 million members in the United States will be encouraged to begin displaying "organ donor status" on their pages, along with birth dates and schools. Some 7,000 people die every year in America while waiting for an organ transplant, and the idea here, according to this New York Times story, is to "create peer pressure to nudge more people to add their names to the rolls of registered donors." Absolutely nothing could go wrong. (via John Schwartz)— Xeni
An anonymous MD has a guest post on John Scalzi's blog describing her/his medical outrage at being asked to perform medically unnecessary transvaginal ultrasounds on women seeking abortion, in accordance with laws proposed and passed by several Republican-dominated state legislatures. As the doctor writes, "If I insert ANY object into ANY orifice without informed consent, it is rape. And coercion of any kind negates consent, informed or otherwise." The article is a strong tonic and much welcome -- the ethics of medical professionals should not (and must not) become subservient to cheap political stunting, and especially not when the political stunt requires doctors' complicity in state-ordered sexual assaults.
1) Just don’t comply. No matter how much our autonomy as physicians has been eroded, we still have control of what our hands do and do not do with a transvaginal ultrasound wand. If this legislation is completely ignored by the people who are supposed to implement it, it will soon be worth less than the paper it is written on.
2) Reinforce patient autonomy. It does not matter what a politician says. A woman is in charge of determining what does and what does not go into her body. If she WANTS a transvaginal ultrasound, fine. If it’s medically indicated, fine… have that discussion with her. We have informed consent for a reason. If she has to be forced to get a transvaginal ultrasound through coercion or overly impassioned argument or implied threats of withdrawal of care, that is NOT FINE.
Our position is to recommend medically-indicated tests and treatments that have a favorable benefit-to-harm ratio… and it is up to the patient to decide what she will and will not allow. Period. Politicians do not have any role in this process. NO ONE has a role in this process but the patient and her physician. If anyone tries to get in the way of that, it is our duty to run interference.
It was just this understanding of rights as obligations that governments must obey that formed the basis for a declaration of rights for cetaceans (whales and dolphins) at the annual meeting of the American Association for the Advancement of Science held in Vancouver, Canada last month. Such a declaration is a minefield ripe for misunderstanding, as the BBC quickly demonstrated with their headline, “Dolphins deserve same rights as humans, say scientists.” However, according to Thomas I. White, Conrad N. Hilton Chair of Business Ethics at Loyola Marymount University in Los Angeles, the idea of granting personhood rights to nonhumans would not make them equal to humans under law. They would not vote, sit on a jury, or attend public school. However, by legally making whales and dolphins “nonhuman persons,” with individual rights under law, it would obligate governments to protect cetaceans from slaughter or abuse.
“The evidence for cognitive and affective sophistication—currently most strongly documented in dolphins—supports the claim that these cetaceans are ‘non-human persons,’” said White. As a result, cetaceans should be seen as “beyond use” by humans and have “moral standing” as individuals. “It is, therefore, ethically indefensible to kill, injure or keep these beings captive for human purposes,” he said.
Johnson also makes an interesting point—there's a legal basis for this kind of thing. After all, if corporations can be people, my friends, why not dolphins?
Science writer Sally Adee provides some background on her New Scientist article describing her experience with a DARPA program that uses targeted electrical stimulation of the brain during training exercises to induce "flow states" and enhance learning. The "thinking cap" is something like the tasp of science fiction, and the experimental evidence for it as a learning enhancement tool is pretty good thus far -- and the experimental subjects report that the experience feels wonderful (Adee: "the thing I wanted most acutely for the weeks following my experience was to go back and strap on those electrodes.")
We don’t yet have a commercially available “thinking cap” but we will soon. So the research community has begun to ask: What are the ethics of battery-operated cognitive enhancement? Last week a group of Oxford University neuroscientists released a cautionary statement about the ethics of brain boosting, followed quickly by a report from the UK’s Royal Society that questioned the use of tDCS for military applications. Is brain boosting a fair addition to the cognitive enhancement arms race? Will it create a Morlock/Eloi-like social divide where the rich can afford to be smarter and leave everyone else behind? Will Tiger Moms force their lazy kids to strap on a zappity helmet during piano practice?
After trying it myself, I have different questions. To make you understand, I am going to tell you how it felt. The experience wasn’t simply about the easy pleasure of undeserved expertise. When the nice neuroscientists put the electrodes on me, the thing that made the earth drop out from under my feet was that for the first time in my life, everything in my head finally shut the fuck up.
The experiment I underwent was accelerated marksmanship training on a simulation the military uses. I spent a few hours learning how to shoot a modified M4 close-range assault rifle, first without tDCS and then with. Without it I was terrible, and when you’re terrible at something, all you can do is obsess about how terrible you are. And how much you want to stop doing the thing you are terrible at.
Then this happened:
The 20 minutes I spent hitting targets while electricity coursed through my brain were far from transcendent. I only remember feeling like I had just had an excellent cup of coffee, but without the caffeine jitters. I felt clear-headed and like myself, just sharper. Calmer. Without fear and without doubt. From there on, I just spent the time waiting for a problem to appear so that I could solve it.
If you want to try the (obviously ill-advised) experiment of applying current directly to your brain, here are some HOWTOs. Remember, if you can't open it, you don't own it!
Undercover police agents in the UK infiltrated environmental groups, had sex with their members, struck up long-term relationships with women in these groups, fathered children with these women, and then abandoned the children.
Two undercover police officers secretly fathered children with political campaigners they had been sent to spy on and later disappeared completely from the lives of their offspring, the Guardian can reveal.
In both cases, the children have grown up not knowing that their biological fathers – whom they have not seen in decades – were police officers who had adopted fake identities to infiltrate activist groups. Both men have concealed their true identities from the children's mothers for many years.
Good thing the police were there, though. Who knows what kind of unethical behaviour an environmentalist might be getting up to.
I’ve been following the story about the scientists who have been working to figure out how H5N1 bird flu might become transmissible from human to human, the controversial research they used to study that question, and the federal recommendations that are now threatening to keep that research under wraps.
Recoding Innovation is a National Science Foundation-funded documentary that's basically about the anthropology of science and engineering.
If you're a scientist or an engineer, you can participate. How do your culture, values, and beliefs make your work happen? The idea here is that ethics aren't something that holds science back. Instead, applying ethics helps scientists and engineers be innovative. It's a cool idea, and I'm looking forward to watching the finished documentary. The video above includes a short example of the kind of stories the editors are looking for.
The Kimberley Academy in Montclair, New Jersey hosted a fascinating, one-hour chat between Neil deGrasse Tyson -- Hayden Planetarium director, TV science host, and all-round good guy -- and Stephen Colbert in a rare, out-of-character appearance.