Boing Boing 

Brain-on-a-chip for drug testing and injury research

Draper Laboratory and University of South Florida researchers are developing a prototype "brain-on-a-chip." No, it's not an AI but rather a combination of living cells and microfluidics in a bio-artificial model of the brain's neurovascular unit, the system of neurons, capillaries, and other cells that control the supply of nutrients to the brain. Eventually, such a device could be used to test medications and vaccines. And that's just the beginning.

“In addition to screening drugs, we could potentially block vascular channels and mimic stroke or atherosclerotic plaque," says lead researcher Anil Achyuta. "Furthermore, this platform could eventually be used for neurotoxicology, to study the effects of brain injury like concussions, blast injuries, and implantable medical devices such as in neuroprosthetics.”

"Draper Laboratory Developing “Brain-on-a-Chip”"

Why casual sexism in science matters


Psychobiologist Dario Maestripieri returned from a neuroscience meeting in New Orleans and posted to Facebook that he was disappointed with the "unusually high concentrations of unattractive women. The super model types are completely absent. What is going on? Are unattractive women particularly attracted to neuroscience? Are beautiful women particularly uninterested in the brain?"

He added, "No offense to anyone."

Many people took offense, starting with the Drugmonkey blog, which reposted the remarks.

Janet Stemwedel on Adventures in Ethics and Science has a good post explaining why she is offended by this:

The thing is, that denial is also the denial of the actual lived experience of a hell of a lot of women in science (and in other fields -- I've been sexually harassed in both of the disciplines to which I've belonged).

I can't pretend to speak for everyone who calls out sexism like Maestripieri's, so I'll speak for myself. Here's what I want:

1. I want to shine a bright light on all the sexist behaviors, big or small, so the folks who have managed not to notice them so far start noticing them, and so that they stop assuming their colleagues who point them out and complain about them are making a big deal out of nothing.

2. I want the exposure of the sexist behaviors to push others in the community to take a stand on whether they're cool with these behaviors or would rather these behaviors stop. If you know about it and you don't think it's worth talking about, I want to know that about you -- it tells me something about you that might be useful for me to know as I choose my interactions.

3. I want the people whose sexist behaviors are being called out to feel deeply uncomfortable -- at least as uncomfortable as their colleagues (and students) who are women have felt in the presence of these behaviors.

4. I want people who voice their objections to sexist behaviors to have their exercise of free speech (in calling out the behaviors) be just as vigorously defended as the free speech rights of the people spouting sexist nonsense.

5. I want the sexist behavior to stop so scientists who happen to be women can concentrate on the business of doing science (rather than responding to sexist behavior, swallowing their rage, etc.)

I've got a daughter who, at four and a half, wants to be a scientist. Every time she says this, it makes me swell up with so much pride, I almost bust. If she grows up to be a scientist, I want her to be judged on the reproducibility of her results, the elegance of her experimental design, and the insight in her hypotheses, not on her ability to live up to someone's douchey standard of "super model" looks.

What a dead fish can teach you about neuroscience and statistics

The methodology is straightforward. You take your subject and slide them into an fMRI machine, a humongous, sleek white ring, like a donut designed by Apple. Then you show the subject images of people engaging in social activities — shopping, talking, eating dinner. You flash 48 different photos in front of your subject's eyes, and ask them to figure out what emotions the people in the photos were probably feeling. All in all, it's a pretty basic neuroscience/psychology experiment. With one catch. The "subject" is a mature Atlantic salmon.

And it is dead.

Read the rest

Beware of neuro-speculation

Between the downfall of Jonah Lehrer, and Naomi Wolf's new book that claims chemicals in women's brains force us to demand our lovers shower us with roses and candy and refer to us as "goddess"*, there's been some growing backlash against the long-popular idea of better living through neuroscience. You know what I'm talking about here: You (yes, you!) can succeed at work, be more creative, improve your relationships, and have a better sex life — all you have to do is read this one interpretation of the latest in neuroscience research!

Perhaps unsurprisingly, that pitch oversells the reality. What we know about how the brain works isn't really that clear cut. But more than that, the idea of scientific self-help quite often has to severely distort science in order to make any sense. The public comes away with a massive misunderstanding of what MRI does and doesn't tell us, what brain chemicals like dopamine actually do, and what the lab tells us about real life.

There are two big essays that you need to read before you pick up another story or book that tries to make connections between cutting-edge brain science and real life. The first, in New Statesman, is by Steven Poole and offers a broad overview of why it's such a problem when neuroscience becomes neuro-speculation. The second, by Maia Szalavitz at Time Magazine's Healthland blog, focuses on Naomi Wolf's new book and uses that as a springboard to talk about the bigger issue of brain chemicals, what they are, and what they aren't.

Read the rest

How science turned into science fiction

Moran Cerf is a neuroscientist. In the video above, which Cory posted on Friday, he tells the story of how a paper he published in the journal Nature ended up getting him phone calls from Apple and invitations to appear with Christopher Nolan on the publicity tour for Inception. The problem: Nolan, Apple, and a lot of other people thought Cerf had figured out a way to record dreams. He hadn't. Not even close.

Cory's piece, and a link that Xeni sent me to the video, got me reading up on this case and I wanted to provide more of the scientific background—so you can see clearly what Cerf's research was really about and how the media got it wrong. Back in 2010, Cerf and his colleagues were trying to figure out how humans look at a world cluttered with different faces, objects, smells, and sounds and manage to filter out the specific things we're interested in. What happens when I look at a messy desk and immediately focus in on one piece of paper? If there are two objects on the desk that are familiar to me, but only one of them really matters, how does my brain resolve the conflict and direct my attention in a single direction?

Turns out, at least under laboratory conditions, humans can filter out the important stuff by consciously controlling the firing of neurons in their own brains. Here's how Alison Abbott at Nature News described the research at the time:

In the last six years or so they have shown that single neurons can fire when subjects recognise — or even imagine — just one particular person or object. They propose that activity in these neurons reflect the choices the brain is making about what sensory information it will consider further and what information it will neglect.

In this experiment, the scientists flashed a series of 110 familiar images — such as pictures of Marilyn Monroe or Michael Jackson — on a screen in front of each of the 12 patients and identified individual neurons which uniquely and reliably responded to one of the images. They selected four images for which they had found responsive neurons in different parts of a subject's MTL. Then they showed the subject two images superimposed on each other. Each was 50% faded out.

The subjects were told to think about one of the images and enhance it.

Read the rest

How bad neuroscience can mislead us

If you watch or read much science fiction, you know that all it takes to suspend disbelief about fictional science is an explanation that sounds good on the surface and makes use of terms and ideas that your audience doesn't fully understand but does find emotionally compelling. It's why "radioactive spider" made sense in the 1960s.

Apparently (and unfortunately) this effect is true for actual science as well.

This slide comes from a lecture given by Oxford University neuroscientist Dorothy Bishop. Basically, it's showing that an explanation of a psychological phenomenon became more believable if you added in some hand-wavey neuroscience and pictures of brain scans. Suddenly, an explanation of human behavior that's based on circular reasoning and poor logic changes from something lay people won't accept to something we're happy to buy into.

Bishop's entire, hour-long presentation on the science of bad neuroscience is available to watch online for free. If you don't have time, check out this summary of the key points at the Neurobonkers blog.

Via Mind Hacks

The neurobiology and psychology that connect summer vacation with your morning run

Time is relative. Remember how each day in grade school (especially summer days) seemed to last for an eternity? Ever notice how it seems to take forever to travel a new route on your bike, while the return trip along the same path is done in the blink of an eye?

Turns out, both of those things are connected and they have important implications for the nature of memory. There's a great summary of the science on this up at The Irish Times. It's written by William Reville, emeritus professor of biochemistry at University College Cork.

The key issue, according to Reville, is that the amount of information your brain can store during a given time period isn't really dependent on the length of that time period. You could store up a lot of new information during 10 minutes of a really interesting lecture. You might store only a little new information during 10 minutes of walking your dog along a path you know very well.

The higher the intensity, the longer the duration seems to be. In a classic experiment, participants were asked to memorise either a simple figure [a circle] or a complex figure. Although the clock-time allocated to each task was identical, participants later estimated the duration of memorising the complex shape to be significantly longer than for the simple shape.

... [H]ere is a “guaranteed” way to lengthen your life. Childhood holidays seem to last forever, but as you grow older time seems to accelerate. “Time” is related to how much information you are taking in – information stretches time. A child’s day from 9am to 3.30pm is like a 20-hour day for an adult. Children experience many new things every day and time passes slowly, but as people get older they have fewer new experiences and time is less stretched by information. So, you can “lengthen” your life by minimising routine and making sure your life is full of new active experiences – travel to new places, take on new interests, and spend more time living in the present.

I think this also has some implications for my exercise routine. I am well aware that my ability to run any distance at all is heavily dependent on psychological factors. I am not one of those people who likes to go running in new places, along unfamiliar trails, because it has always made me feel like the distance was much, much longer — and, consequently, led me to stop running and start walking sooner than I actually had to. I've had a lot more luck running on tracks and elliptical machines—situations where it seems to be easier for me to get into a zone and lose track of time. When I run that way, it's my physical limitations that matter, not my psychological ones.

Of course, I know a lot of people who feel exactly the opposite. Maybe, for those people, running in a routine situation, like a track, makes them start to think more about their day or what's going on around them, and processing all that information makes the workout seem longer. I'm not sure. But this is awfully interesting.

Read the rest of William Reville's piece at The Irish Times

Via Graham Farmelo

Image: RUN Hills Pullover in action!, a Creative Commons Attribution (2.0) image from lululemonathletica's photostream

How does the brain think?

I was on Minnesota Public Radio's morning show The Daily Circuit today—along with Ivan Semeniuk, chief of correspondents for the journal Nature—talking about the Curiosity rover, human evolution, and dealing with the big unknowns in science. You can listen to that segment online.

But right at the end of my bit, as I was packing up my stuff to leave the studio, I heard the next segment on the show, and it was AWESOME. Ask a Neuroscientist is, precisely, reader questions answered by a neuroscientist. But you have to read the transcript for today's first question, where a 5-year-old exchanged ideas with Baylor College of Medicine neuroscientist David Eagleman.

Madeline, 5 years old: How does a brain think?

David Eagleman: We don't know. Part of modern neuroscience's quest is to answer that. One theory goes that, in the same way brains control muscle movement, your brain controls your arms and legs and mouth and so on. Thought might be, essentially, covert muscle movement. In other words, it's going through the same routine that says 'bend this, flex that, extend that' - except that it's not controlling a muscle. Instead, it's controlling something conceptual.

Holy, awesomesauce.

Read the rest at The Daily Circuit website

In mid-'60s LSD research study, dosed scientists achieved creative breakthroughs

Illustration: Jonathan Castro, for The Heretic


A wonderful long-read at The Morning News by Tim Doody, on 1966 LSD studies that took place as the US government's position on acid research shifted from "sure, go ahead, scientists" to "nope, this is now banned." The series of tests described in the article took place at the International Foundation for Advanced Study (IFAS) in Menlo Park, CA. Scientists from Stanford, Hewlett-Packard, and elsewhere participated. The volunteers each brought "three highly technical problems from their respective fields that they’d been unable to solve for at least several months." They took "a relatively low dose of acid," 100 micrograms, to enhance their creativity.

Read the rest

This is your brain on meditation

There's a feature worth reading in the New York Times today by John Hanc on the role that meditation plays in brain development, and scientific studies to explore "the extent to which meditation may affect neuroplasticity — the ability of the brain to make physiological change."

Student's brain flatlined during classes


From "A Wearable Sensor for Unobtrusive, Long-term Assessment of Electrodermal Activity" (by Poh, M.Z., Swenson, N.C., Picard, R.W. in IEEE Transactions on Biomedical Engineering, vol.57, no.5), a chart showing a single student's electrodermal activity over the course of a week. Note the neural flatlining during class time. As Joi Ito observes, "Note that the activity is higher during sleep than during class." He also adds, "Obviously, this is just one student and doesn't necessarily generalize."

A week of a student's electrodermal activity (Thanks, Joi!)

Helped by friends, cartoonist battles Parkinson's

Courtesy of Richard Thompson

Cartoonist Richard Thompson's voice was quiet and reedy when we spoke, although the traces of his Maryland upbringing are clear.

Read the rest

Is responding to food as a reward the same thing as food addiction?

We've had a couple of posts recently about a hypothesis that links the current increase in obesity with an increase in easy access to foods that are designed to trigger reward systems in the human brain. Basically: Maybe we're getting fatter because our brains are seeking out the recurrent reward of food that makes us fat. Scientist Stephan Guyenet explained it all in more detail in a recent guest post.

It's an interesting—and increasingly popular—idea, though not without flaws. To give you some context on how scientists are talking about this, I linked you to a blog post by Scicurious, another scientist who wrote about some of the critiques of food reward and related ideas. In particular, Scicurious questioned some of the implicit connections being made here between body size and health, and eating patterns and body size.

She also talked about another critique, one which came up in a recent article in the journal Nature Reviews Neuroscience. If people are gaining weight because they're addicted to eating unhealthy foods, we ought to see some evidence of that in the way their brains respond to those foods. After all, brains respond to many physically addictive substances in special ways. But we don't see that with junk food. So does that invalidate the hypothesis?

Stephan Guyenet doesn't think it does. In a recent email to me, he explained that he thinks the food reward hypothesis is a bit more nuanced, and can't really be described as "food addiction". At least, not the same way that cigarettes or heroin are addictive.

Addiction is the dependence on a drug, or behavior, despite clear negative consequences. Drug addiction is associated with characteristic changes in the brain, particularly in regions that govern motivation and behavioral reinforcement (reward), which drive out-of-control drug seeking behaviors. Some researchers have proposed that common obesity is a type of “food addiction”, whereby drug addiction-like changes in the brain cause a loss of control over eating behavior. Hisham Ziauddeen and colleagues recently published an opinion piece in Nature Reviews Neuroscience reviewing the evidence related to this idea.

The review concluded that there is currently not enough evidence to treat obesity as a “food addiction”. I agree, and I doubt there ever will be enough evidence. However, this does not challenge the idea that food reward is involved in obesity, an idea I described in a review article in JCEM, on my blog (1, 2), and my recent Boing Boing piece.

The reward system is what motivates us to seek and consume food, and what motivates us to choose certain foods over others. To begin to appreciate its role in obesity, all we need is a common sense example.

Why do some people drink sweetened sodas between meals, rather than plain water? Is it because sodas quench thirst better than water? Is it because people are hungry and need the extra calories? If so, why not just eat a plain potato or a handful of unsalted nuts? The main reason people drink soda is that they enjoy it, plain and simple. They like the sweetness, they like the flavor, they like the feeling of carbonation on the tongue and the mild stimulation the caffeine provides. It’s the same reason people eat a thick slice of double chocolate cake even though they’re stuffed after a large meal. The reward system motivates you to seek the soda and cake, and the hedonic (pleasure) system encourages you to keep consuming it once you’ve begun.

But is this the same as addiction? If I took a person’s cola away, would they get the shakes? Would they break into a convenience store at night to get a cola fix? I’m going to say no.

I agree with Ziauddeen and colleagues that the evidence at this point is not sufficient to say that common obesity represents food addiction, and I appreciate their skeptical perspective on the matter. In obesity, as in leanness, the food reward system appears to be doing exactly what it evolved to do: seek out energy-dense, tasty food, and strongly suggest that you eat it. The problem is that we’re increasingly surrounded by easily accessible, cheap, commercial food that is designed to hit these circuits as hard as possible, with the goal of driving repeat purchase and consumption behaviors. Our brains are not malfunctioning; they’re reacting just as they’re supposed to around foods like this.

Diane Ackerman: The Brain on Love

Snip from an essay in the New York Times today about the neuroscience of romantic love, by author Diane Ackerman:

While they were both in the psychology department of Stony Brook University, Bianca Acevedo and Arthur Aron scanned the brains of long-married couples who described themselves as still “madly in love.” Staring at a picture of a spouse lit up their reward centers as expected; the same happened with those newly in love (and also with cocaine users). But, in contrast to new sweethearts and cocaine addicts, long-married couples displayed calm in sites associated with fear and anxiety. Also, in the opiate-rich sites linked to pleasure and pain relief, and those affiliated with maternal love, the home fires glowed brightly.

The Brain on Love (NYT)

Your brain, your food, and obesity

We recently hosted an article by scientist and guest blogger Stephan Guyenet that explained how certain foods—those with a high calorie density, fat, starch, sugar, salt, free glutamate (umami), certain textures (easily chewed, soft or crunchy, solid fat), certain flavors, an absence of bitterness, food variety, and drugs such as alcohol and caffeine—could trip reward systems in the human brain. Those reward systems, then, encourage people to eat more of the foods that trigger the reward. The result, says Guyenet, is a cycle that could be the link between the American obesity epidemic and the rise of highly processed convenience foods, designed specifically to trip those neural reward systems.

This theory, and several related theories, are increasingly popular in the scientific community. This week, there's an opinion piece in the journal Nature Reviews Neuroscience that looks at the strengths and weaknesses of these theories and talks about what research needs to be done going forward. It's kind of a space for researchers to step back and say, "Okay, here's what we know, here's what's not lining up with what we think we know, and here's what we have to do if we want to understand this better." In the context of science, an article like this isn't really a slam against the ideas it analyzes. Instead, it's meant to summarize the state of the science and share ideas that could either strengthen the case, or lead down entirely new roads.

Sadly, you can't read this article unless you have a subscription to Nature Reviews Neuroscience (or pay them $32 for single article access).

Luckily, Scicurious, a neuroscientist and an excellent blogger, has read the article, and has a nice run-down of what it's saying and what you should know. Some of the ideas being discussed here overlap with Stephan Guyenet's research. Some don't. But this is connected enough that I thought you guys would be interested in reading more and getting more perspectives on this issue. Let me make this clear, though: Guyenet isn't doing bad science. As with a lot of scientific research, there's often more than one way to look at the same data. Scientists can disagree without one person having to be all-wrong and another all-right. In fact, having different scientists working on the same subject is a key part of getting the facts right.

As you read, you'll notice that an important place where Scicurious' perspective really differs from Guyenet's is in terms of connecting the idea of "addiction" to certain foods back to the idea of an obesity epidemic.

...is there a place for food addiction? The authors think so, and I am inclined to agree. However, it needs to be much more stringent than the current model of food addiction that many people want to embrace (the idea that sugar makes you addicted or that being overweight means you have a problem). Changes need to be made.

First off, it's important to separate food addiction from obesity. Binge eating does not necessarily mean you are overweight, and being overweight does not necessarily mean that you binge eat. Ranking by BMI is not going to work.

Read Scicurious' full post.

(Via the illustrious Ed Yong. Image: Fabio Berti, Shutterstock)

Pianist with synesthesia performs Bach "in color" (video)

[Video Link] BB pal Joe Sabia points us to this incredible video by Evan Shinners, Juilliard-trained pianist and "best Bach player around." In the video, Shinners shows the world the colors he sees when he plays: he has synesthesia. You can follow him on Twitter, and check him out live on one of his upcoming tour dates.

The neuroscience of magic

Writing in Smithsonian magazine, magician Teller describes the neuroscience that underpins magical illusions, using admirably clear language to describe some of the weirdest ways that our brains can be made to fool us.

1. Exploit pattern recognition. I magically produce four silver dollars, one at a time, with the back of my hand toward you. Then I allow you to see the palm of my hand empty before a fifth coin appears. As Homo sapiens, you grasp the pattern, and take away the impression that I produced all five coins from a hand whose palm was empty.

2. Make the secret a lot more trouble than the trick seems worth. You will be fooled by a trick if it involves more time, money and practice than you (or any other sane onlooker) would be willing to invest. My partner, Penn, and I once produced 500 live cockroaches from a top hat on the desk of talk-show host David Letterman. To prepare this took weeks. We hired an entomologist who provided slow-moving, camera-friendly cockroaches (the kind from under your stove don’t hang around for close-ups) and taught us to pick the bugs up without screaming like preadolescent girls. Then we built a secret compartment out of foam-core (one of the few materials cockroaches can’t cling to) and worked out a devious routine for sneaking the compartment into the hat. More trouble than the trick was worth? To you, probably. But not to magicians.

3. It’s hard to think critically if you’re laughing. We often follow a secret move immediately with a joke. A viewer has only so much attention to give, and if he’s laughing, his mind is too busy with the joke to backtrack rationally.

4. Keep the trickery outside the frame. I take off my jacket and toss it aside. Then I reach into your pocket and pull out a tarantula. Getting rid of the jacket was just for my comfort, right? Not exactly. As I doffed the jacket, I copped the spider.

Teller Reveals His Secrets

Zapping the brain into "expert" mode

The "flow state" is how neuroscience researchers describe that zone you can get into when you're doing something that you've become highly skilled at. It's a zen-like place in your brain — that state where you lose track of time doing something that you enjoy doing for its own sake, and where the job of doing the task seems to become something you don't even have to think about. You just do it, and you do it right.

The catch, of course, is that usually it takes a lot of heavy work to get to the point where the flow can take over. This is where Malcolm Gladwell's 10,000 hours of practice comes into play. But, over the years, scientists have learned that there are some ways around that 10,000-hour rule. Some people just seem to pick up on the flow easier than others, for instance.

If your brain isn't just naturally inclined toward the flow, though, there is the option of zapping it into line. This is called transcranial direct current stimulation—basically running a very small electric current through specific parts of the brain. In some studies, and for some tasks, it's been shown to induce a feeling very much like a flow state, and possibly make it easier for people to get to a high level of skill faster. Last spring, Pesco wrote about some of the research that's being conducted on this intriguing but still-not-proven technique. Recently, New Scientist reporter Sally Adee tried it out, and saw a significant short-term improvement in her ability to spot and hit targets in a video shooter game.

The mild electrical shock is meant to depolarise the neuronal membranes in the region, making the cells more excitable and responsive to inputs. Like many other neuroscientists working with tDCS, Weisend thinks this accelerates formation of new neural pathways during the time that someone practises a skill. The method he is using on me boosted the speed with which wannabe snipers could detect a threat by a factor of 2.3.

It's not yet clear why some forms of tDCS should bring about the flow state. After all, if tDCS were solely about writing new memories, it would be hard to explain the improvement that manifests itself as soon as the current begins to flow.

One possibility is that the electrodes somehow reduce activity in the prefrontal cortex - the area used in critical thought, which Csikszentmihalyi had found to be muted during flow. Roy Hamilton, a neuroscientist at the University of Pennsylvania in Philadelphia, thinks this may happen as a side effect of some forms of tDCS. "tDCS might have much more broad effects than we think it does," he says. He points out that some neurons can mute the signals of other brain cells in their network, so it is possible that stimulating one area of the brain might reduce activity in another.

The first thing I thought of when I read this: The way drinking one (but not more than two) beers can change the way I approach a billiards game. It doesn't improve my skills, per se—I don't suddenly become graceful with a pool cue. But when it's a game that I have some skill at already, like table hockey, one beer is often just enough to allow me to stop over-thinking and just play the game ... making it feel like I'm better at it then than I am stone-cold sober. I'd be really interested to know if/how these experiences are related.

The connections between "itch" and "ouch"

The biology of itching and the biology of pain are intertwined in interesting ways, writes graduate student and science blogger Aatish Bhatia. Understanding itching can help us better understand how to treat pain. I'd not seen Bhatia's blog before, but I'm really liking his style. He does a great job of breaking down the science in a clear way.

... In the last decade, researchers have learned about receptors in the nerves under our skin that react specifically to itchy substances. When these receptors fire, they send a signal racing up our spinal cord, headed to our brain where it creates an urge to scratch. Scientists now have a basic map of the roads that an itch takes on its way to our brain. And they have even been able to block some of these roads in mice, essentially preventing them from feeling an itch.

...The picture that is emerging is a complex one, where pain and itch signals are distinct yet subtly intertwined. Of the nerve cells under our skin, some are involved only in signalling pain, and they have pain receptors. Others are responsible for signalling different types of itches, and they have both itch and pain receptors. If the same cell has both receptors, how do we distinguish itch from ouch?

... As the biology of itching becomes better understood, the benefits are making their way from the lab to the clinic. The drug morphine is a powerful painkiller, but has a common side effect of itchiness. Women taking opiates to relieve their labour pain often experience a similar side effect. Zhou-Feng Chen and Yan-Gang Sun, authors of the GRPR receptor study, teamed up with colleagues at the newly founded Center for the Study of Itch and managed to tackle this problem. Their results, published in the current issue of the journal Cell, show that the benefits of morphine can be separated from the itch.

Via Greg Laden

Image: llama itch, a Creative Commons Attribution (2.0) image from davedehetre's photostream

Fabric brain art

I love serendipity. On the same day that Anja Austerman posted this awesome knit hat to my Google+ feed, Kevin Zelnio also posted a link reminding me of the existence of the Museum of Scientifically Accurate Fabric Brain Art. Xeni posted about the museum here back in 2008. But it's awfully fun to contrast the super-detailed brain art on display there with this more whimsical variety.

Neuroscience explanations are more believable than mere psychological ones

"The Seductive Allure of Neuroscience Explanations," published in 2008 in the Journal of Cognitive Neuroscience, offers experimental support for the hypothesis that laypeople find explanations of psychological phenomena more compelling when irrelevant "neuroscience" is added, because it makes them sound true:

In line with this body of research, we propose that people often find neuroscience information alluring because it interferes with their abilities to judge the quality of the psychological explanations that contain this information. The presence of neuroscience information may be seen as a strong marker of a good explanation, regardless of the actual status of that information within the explanation. That is, something about seeing neuroscience information may encourage people to believe they have received a scientific explanation when they have not. People may therefore uncritically accept any explanation containing neuroscience information, even in cases when the neuroscience information is irrelevant to the logic of the explanation.

To test this hypothesis, we examined people’s judgments of explanations that either do or do not contain neuroscience information, but that otherwise do not differ in content or logic. All three studies reported here used a 2 (explanation type: good vs. bad) × 2 (neuroscience: without vs. with) design. This allowed us to see both people’s baseline abilities to distinguish good psychological explanations from bad psychological explanations as well as any influence of neuroscience information on this ability. If logically irrelevant neuroscience information affects people’s judgments of explanations, this would suggest that people’s fascination with neuropsychological explanations may stem from an inability or unwillingness to critically consider the role that neuroscience information plays in these explanations.
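The 2 × 2 design the authors describe can be made concrete with a quick sketch. The ratings below are invented for illustration (the study did use a −3 to +3 satisfaction scale, but these specific numbers and condition labels are hypothetical): the key comparisons are how well people separate good from bad explanations at baseline, and how much irrelevant neuroscience inflates ratings of bad ones.

```python
# Hypothetical sketch of the paper's 2x2 design:
# explanation type (good vs. bad) x neuroscience info (without vs. with).
# All ratings here are invented for demonstration; only the design and
# the -3..+3 rating scale come from the study.

from statistics import mean

ratings = {
    ("good", "without"): [2, 1, 2, 3, 1],
    ("good", "with"):    [2, 2, 3, 2, 2],
    ("bad", "without"):  [-2, -1, 0, -1, -2],
    ("bad", "with"):     [0, 1, 0, -1, 1],
}

def cell_mean(quality, neuro):
    """Mean satisfaction rating for one cell of the 2x2 design."""
    return mean(ratings[(quality, neuro)])

# Baseline ability to tell good from bad explanations (no neuroscience):
discrimination = cell_mean("good", "without") - cell_mean("bad", "without")

# Effect of adding irrelevant neuroscience to the bad explanations:
neuro_boost_bad = cell_mean("bad", "with") - cell_mean("bad", "without")

print(f"good-bad gap without neuroscience: {discrimination:.2f}")
print(f"rating boost for bad explanations with neuroscience: {neuro_boost_bad:.2f}")
```

In the real study, the analogous comparison showed that neuroscience information reliably raised ratings of the bad explanations while leaving judgments of the good ones largely unchanged.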

(via Kottke)

(Image: DSCN0746, a Creative Commons Attribution Share-Alike (2.0) image from niels_olson's photostream)

What reward does your brain actually seek?


Dopamine Jackpot! Sapolsky on the Science of... by FORAtv

Dopamine does a lot of things, but you're probably most familiar with it as the chemical your brain uses as a sort-of system of in-game gold coins. You earn the reward for certain behaviors, usually "lizard-brain" type stuff—eating a bowl of pudding, for instance, or finally making out with that cute person you've had your eye on. And, as you've probably heard, there's some evidence that we can get addicted to that burst of dopamine, and that's how a nice dessert or an enjoyable crush turns into something like compulsive eating or sex addiction.

Neuroscientist Robert Sapolsky puts an interesting twist on this old story, though. What if it isn't the burst of dopamine that we get addicted to, but the anticipation of a burst of dopamine? It's a small distinction. But it matters, he says, because it would mean our reward system is based less on happiness than on the pursuit of happiness.

For more on this, check out David Bradley's post on this video, which also links back to a more-detailed discussion of the basics of dopamine addiction.

Watch lectures on mind, brain, and human nature

On Monday, I told you about The Nobel Conference at Gustavus Adolphus College, in St. Peter, Minn., where top neuroscientists are speaking about the mind, the brain, and what it means to be human.

Now, I have some good news for those of you who couldn't play hooky this week, couldn't get tickets to the free event, and/or don't actually live anywhere near St. Peter, Minn. You can watch The Nobel Conference online.

Today's lectures will be broadcast on a live feed. You can also submit questions through the site and participate in the Q&A after each lecture. The first speaker is John Donoghue, director of the Institute for Brain Science at Brown University. Starting at 10:00 am, Central, he'll be talking about a topic near and dear to every Happy Mutant's heart: "Merging Mind to Machines: Brain Computer Interfaces to Restore Lost Motor Function."

If, for some reason, you can't start your morning off with a healthy dose of cyborgs, all the lectures from Tuesday and today will eventually be archived as online videos. Right now, there's only one lecture available this way—yesterday's morning session on new therapies for autism. I've embedded that video above. But check the Conference's site for other lectures, coming soon!

Video Link

Thanks to Lisa Dubbels for pointing this out!

No, you're not in love with your iPhone

The New York Times has an op-ed out today, which claims that fMRI studies show that, when people are exposed to a pretty, shiny, ringing iPhone, the experience lights up the part of their brains that signifies a deep, compassionate love for something. iPhones trigger the same brain activity that your parents and loved ones trigger, writes branding strategist Martin Lindstrom.

Clearly, this was going to turn out to be wildly misleading. "You love your iPhone like you love your mother" is just not the kind of statement that passes a cursory bullshit inspection. And lots of people have handily debunked it, including a couple of actual neuroimaging specialists, Russ Poldrack and Tal Yarkoni.

So, how wrong was the NYT op-ed? Pretty damn wrong. Turns out, the part of the brain Martin Lindstrom identifies with lovey-dovey emotions is a lot more complicated than that. Here's Russ Poldrack:

Insular cortex may well be associated with feelings of love and compassion, but this hardly proves that we are in love with our iPhones. In Tal Yarkoni's recent paper in Nature Methods, we found that the anterior insula was one of the most highly activated parts of the brain, showing activation in nearly 1/3 of all imaging studies! Further, the well-known studies of love by Helen Fisher and colleagues don't even show activation in the insula related to love, but instead in classic reward system areas.

And Tal Yarkoni adds a lot more to this:

... the insula (or at least the anterior part of the insula) plays a very broad role in goal-directed cognition. It really is activated when you’re doing almost anything that involves, say, following instructions an experimenter gave you, or attending to external stimuli, or mulling over something salient in the environment.

So, by definition, there can’t be all that much specificity to what the insula is doing, since it pops up so often. To put it differently, as Russ and others have repeatedly pointed out, the fact that a given region activates when people are in a particular psychological state (e.g., love) doesn’t give you license to conclude that that state is present just because you see activity in the region in question. If language, working memory, physical pain, anger, visual perception, motor sequencing, and memory retrieval all activate the insula, then knowing that the insula is active is of very little diagnostic value.

I'd recommend reading Yarkoni's full post, because it also gets into some really fascinating nuance behind the neuroscience of addiction. Shorter version: We don't have a clear biomarker that signals addiction, or addictive behavior. You couldn't even diagnose an obviously addicted individual using neuroimaging. So you should beware of anybody who tells you that an fMRI study demonstrates that people are addicted to anything.

Can magnets make you lie?

A small Estonian study is offering some hints that our brains could be even weirder than we'd imagined. Researchers found that magnetic pulses directed at a certain part of the frontal cortex affected whether people were more likely to fib or to tell the truth. Only 16 people were involved in the study, so these results are more something potentially cool to follow up on than a definitive declaration about brain function. There's a good chance this could turn out to be a statistical fluke. But it is worth researching further. If the effect is real, it could have some really interesting ethical, legal, and neurobiological implications.

Say it with me now: "F***ing magnets, how do they work?" Mo Costandi explains:

Inga Karton and Talis Bachmann of the University of Tartu adopted a different and novel approach, by examining the natural propensity to lie spontaneously during situations in which deception has no consequences. They recruited 16 volunteers, and showed them red and blue discs, which were presented randomly on a computer screen. The participants were asked to name the colour of each disc, and told that they could do so correctly or incorrectly, at their own free will.

The researchers used a technique called transcranial magnetic stimulation (TMS) to disrupt the participants' brain activity during the task. TMS is a non-invasive technique in which pulses of electromagnetic radiation are targeted to a specific brain region, inducing weak electrical currents that can either inhibit or enhance activity in that area.

They split the participants into two groups of eight for the experiment. Half of the participants in one group received magnetic pulses to the dorsolateral prefrontal cortex (DLPFC) in the left hemisphere of the brain, while half in the other received them to the DLPFC on the right side. The rest of the participants acted as controls, and TMS was targeted to either the left or the right parietal cortex.

Statistical analysis of the results revealed that magnetic stimulation directed at the left DLPFC slightly increased the participants' tendency to lie about the colour of the discs, whereas stimulation of the right DLPFC slightly reduced it. By contrast, stimulation of the left or right parietal cortex had no effect on the participants' propensity to lie.

Costandi has actually made his full interview with the primary researcher in this study available online. In it, he gets a bit more into the nuance of what happens when you turn up a result as odd as this one, why scientists conduct such small studies, and what they do with the results of those studies.

Beautiful paintings of neurons

That's no dandelion. It's a painted close-up of a slice of human hippocampus. Jessica Palmer at the Bioephemera blog introduced me to the gorgeous artwork of neuroscience grad student and painter Greg Dunn. His images of different neurons are really lovely. And you can buy prints.

Via Elizabeth Sears

The science of near-death experiences

Some recent research is confirming what a lot of us have probably long suspected—there's a pretty reasonable scientific explanation for near-death experiences.

Recently, a host of studies has revealed potential underpinnings for all the elements of such experiences.

For instance, the feeling of being dead is not limited to near-death experiences—patients with Cotard or "walking corpse" syndrome hold the delusional belief that they are deceased. This disorder has occurred following trauma, such as during advanced stages of typhoid and multiple sclerosis, and has been linked with brain regions such as the parietal cortex and the prefrontal cortex—"the parietal cortex is typically involved in attentional processes, and the prefrontal cortex is involved in delusions observed in psychiatric conditions such as schizophrenia," Mobbs explains. Although the mechanism behind the syndrome remains unknown, one possible explanation is that patients are trying to make sense of the strange experiences they are having.

This story, by Charles Q. Choi, breaks down several common elements of near-death experiences the same way. But the fact that I found most interesting relates to who has "near-death" experiences. Turns out, it's not limited to people who are actually near death. Choi reports that a study of 58 patients who had had near-death experiences found that 30 of them weren't actually in danger of dying. They just thought they were.

The psychopathic neurobiologist

James Fallon studies the brain. Then he studied his own, and found out that he has the same brain malfunctions as psychopathic serial killers. What happened next is a fascinating story about the brain, the mind, and the dueling influences of nature and nurture.

Animals and the amygdala

As part of a cool project in blogging on Google+ ("plogging"), Nature editor Noah Gray writes about a recent experiment that found that specific neurons in the human amygdala respond instantly to images of animals. These responses were stronger and faster than when other neurons responded to those images, and stronger and faster than when the animal-centric neurons responded to other types of images.

The amygdala is well known to be involved in fear modulation and memory, as well as influencing other types of emotional processing. So is it expected that cells in this structure would respond so strongly to the sight of animals? There is a moderate precedent from the non-human primate literature. Studies in macaques have revealed strong firing of amygdalar neurons to faces, so categorical responses aren't unique in the amygdala. This is true in humans as well, but humans also maintain a different dedicated brain region for face processing, perhaps opening up some portions of the amygdala to take on additional, different roles. But why would we need a dedicated system for animal imagery, elevating this particular stimulus to such an important position in our recognition system? Well this is all speculation, but it isn't difficult to state the obvious and stress that animals were critical as prey for our ancient ancestors, as well as potential threats. Thus, early man may have developed a system to speed our reaction times to such an important category as the landscape was visually scanned for information. Placing this system in a brain region critical to emotion processing could have also more-easily mobilized action through a rapid activation of attack or flight responses.

Image: Animal Kingdom Sign, a Creative Commons Attribution Share-Alike (2.0) image from pixeljones's photostream

The Singularity is Far: A Neuroscientist's View

David J. Linden is the author of a new book, The Compass of Pleasure: How Our Brains Make Fatty Foods, Orgasm, Exercise, Marijuana, Generosity, Vodka, Learning, and Gambling Feel So Good.

Read the rest