Questioning the nature of reality with cognitive scientist Donald Hoffman

Back in the early 1900s, the German biologist Jakob Johann Baron von Uexküll couldn’t shake the implication that the inner lives of animals like jellyfish and sea urchins must be radically different from those of humans.

Uexküll was fascinated by how meaty, squishy nervous systems gave rise to perception. Noting that the sense organs of sea creatures and arachnids could perceive things that ours could not, he realized that giant portions of reality must therefore be missing from their subjective experiences, which suggested that the same was true of us. In other words, most ticks can’t enjoy an Andrew Lloyd Webber musical because, among other reasons, they don’t have eyes. On the other hand, unlike ticks, most humans can’t smell butyric acid wafting on the breeze, and so no matter where you sit in the audience, smell isn’t an essential (or intended) element of a Broadway performance of Cats.

Uexküll imagined that each animal’s subjective experience was confined to a private sensory world he called an umwelt. Each animal’s umwelt was different, he said, distinct from that of another animal in the same environment, and each therefore was tuned to take in only a small portion of the total picture. Not that any animal would likely know that, which was Uexküll’s other big idea. Because no organism can perceive the totality of objective reality, each animal likely assumes that what it can perceive is all that can be perceived. Each umwelt is a private universe, fitted to its niche, and the subjective experiences of all of Earth’s creatures are like a sea filled with a panoply of bounded virtual realities floating past one another, each unaware that it is unaware.

Like all ideas, Uexküll’s weren’t completely new. Philosophers had wondered about the differences in subjective and objective reality going back to Plato’s cave (and are still wondering). But even though Uexküll’s ideas weren’t strictly original, he brought them into a new academic silo – biology. In doing so, he generated lines of academic research into neuroscience and the nature of consciousness that are still going today.

For instance, when the philosopher Thomas Nagel famously asked, “What is it like to be a bat?” he thought there was no answer to his question because it would be impossible to think in that way. Bat sonar, he said, is nothing like anything we possess, “and there is no reason to suppose that it is subjectively like anything we can experience or imagine.” All one can do, said Nagel, is imagine what it would be like for a person, like yourself, to be a bat. Imagining what it would be like for a bat to be a bat is impossible. This was part of an overall criticism of the limits of reductionist thinking, and is, of course, still the subject of much debate.

The siblings of these notions appear in the writings of everyone from Timothy Leary with his “reality tunnels” to J.J. Gibson’s “ecological optics” to psychologist Charles Tart and his “consensus trances.” From the Wachowskis’ Matrix to Kant’s “noumenon” to Daniel Dennett’s “conscious robots,” we’ve been wondering about these questions for a very long time. You too, I suspect, have stumbled on these problems, asking something along the lines of “do we all see the same colors?” at some point. The answer, by the way, is no.

The assumption in most of these musings is that we humans are unique because we can escape our umwelten. We have reason, philosophy, science, and physics, which free us from the prison of our limited human perceptions. We can use tools to extend our senses, to see the background radiation left behind by the big bang or hear the ultrasonic laughter of ticklish mice. Sure, the table seems solid enough when we knock on it, and if you were still trapped in your umwelt, you wouldn’t think otherwise, but now you know it is actually mostly empty space thanks to your understanding of protons and electrons. We assume that more layers of truth reveal themselves to us with each successive paradigm shift.

In this episode of the You Are Not So Smart podcast, we sit down with a scientist who is challenging these assumptions.

Donald Hoffman, a cognitive psychologist at the University of California with a background in artificial intelligence, game theory, and evolutionary biology, has developed a new theory of consciousness that, should it prove true, would rearrange our understanding of not only the mind and the brain, but physics itself.

“I agree up to a point,” said Hoffman, “that different organisms are in effectively different perceptual worlds, but where I disagree is that these worlds are seeing different parts of the truth. I don’t think they are seeing the truth at all.”

Hoffman wondered if evolution truly favored veridical minds, so he and his graduate students created computer models of natural selection that included accurate perceptions of reality as a variable.

“We simulated hundreds of thousands of random worlds and put organisms in those worlds that could see all of the truth, part of the truth, or none of the truth,” explained Hoffman. “What we found in our simulations was that organisms that saw reality as-it-is could never outcompete organisms that saw none of reality and were just tuned to fitness, as long as they were of equal complexity.”
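Hoffman’s actual simulations used evolutionary game theory over hundreds of thousands of randomized worlds; the sketch below is only a toy illustration of the core intuition, with an assumed non-monotonic payoff curve and made-up strategy names. It pits a “truth-seeing” strategy, which perceives resource quantities as they are, against a “fitness-tuned” strategy, which perceives only payoffs:

```python
import math
import random

# Toy fitness-vs-truth game (an illustration, not Hoffman's model).
# Fitness is a non-monotonic function of a resource's true quantity:
# too little or too much (think water or salt) is bad, mid-range is best.
def payoff(quantity, mu=50.0, sigma=15.0):
    return math.exp(-((quantity - mu) ** 2) / (2 * sigma ** 2))

def truth_strategy(territories):
    # Sees quantities as they truly are and greedily takes the largest.
    return max(territories)

def fitness_strategy(territories):
    # Sees only each territory's fitness payoff, not the quantity itself.
    return max(territories, key=payoff)

random.seed(0)
truth_score = fitness_score = 0.0
for _ in range(50_000):
    territories = [random.uniform(0, 100) for _ in range(3)]
    truth_score += payoff(truth_strategy(territories))
    fitness_score += payoff(fitness_strategy(territories))

# The fitness-tuned strategy accumulates strictly more payoff, because
# perceiving "more resource" is not the same as perceiving "more fitness".
print(f"truth-seer: {truth_score:.0f}  fitness-tuned: {fitness_score:.0f}")
```

Whenever payoff is not a monotonic function of the true state of the world, tracking the truth is extra work that buys nothing, which is the intuition behind Hoffman’s result.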

The implication, Hoffman said, is that an organism that can see the truth will never be favored by natural selection. This suggests that literally nothing we can conceive of can be said to represent objective reality, not even atoms, molecules, or physical laws. Physics and chemistry are still inside the umwelt. There’s no escape.

“If our perceptual systems evolved by natural selection, then the probability that we see reality as it actually is, in any way, is zero. Precisely zero,” said Hoffman.

Well aware that these ideas come across as woo, Hoffman welcomes challenges from his peers and other interested parties, and in the interview you’ll hear what they’ve said so far and how you can investigate these concepts for yourself.

Also in the show, Hoffman explains his ideas in detail in addition to discussing the bicameral mind, artificial intelligence, and the hard problem of consciousness in this mind-bending episode about how we make sense of our world, our existence, and ourselves.

Download – iTunes – Stitcher – RSS – Soundcloud

This episode is sponsored by The Great Courses Plus. Get unlimited access to a huge library of The Great Courses lecture series on many fascinating subjects. Start FOR FREE with Your Deceptive Mind taught by neurologist Steven Novella. Learn about how your mind makes sense of the world by lying to itself and others. Click here for a FREE TRIAL.

This episode is sponsored by EXO Protein Bars. EXO makes all-natural protein bars using cricket flour, a sustainable and complete source of essential amino acids, high in iron, that produces 100x less greenhouse gas than cows. Oh yeah, and you’ll never know it’s crickets because they are delicious and designed by a three-Michelin-star chef. Get your four-bar sample pack by visiting EXOProtein.com/sosmart.

There is no better way to create a website than with Squarespace. Creating your website with Squarespace is a simple, intuitive process. You can add and arrange your content and features with the click of a mouse. Squarespace makes adding a domain to your site simple; if you sign up for a year you’ll receive a custom domain for free for a year. Start your free trial today at Squarespace.com and enter offer code SOSMART to get 10% off your first purchase.

Support the show directly by becoming a patron! Get episodes one day early and ad-free. Head over to the YANSS Patreon Page for more details.

Links and Sources

Previous Episodes

Boing Boing Podcasts

Cookie Recipes

Donald Hoffman’s Website

What if Evolution Bred Reality Out of Us?

The Case Against Reality

Mr. Jaynes Wild Ride

The Human Intellect is Like Peacock Feathers

XKCD: Umwelt

David Eagleman on the Umwelt

IMAGE SOURCE: Wikimedia Commons

Notable Replies

  1. I think there's the point that I disagree with.

    Do our senses miss huge chunks of reality? Of course they do.

    Do the things we can perceive get heavily processed, so that they line up with what "makes sense" more than with an actual perception of the underlying reality? If that weren't true, we wouldn't have optical illusions.

    But the idea that we can never build tools that can perceive the underlying reality just seems like nonsense to me.

  2. I listened to the podcast, and it was interesting up to the point where Hoffman started postulating about physics.

    Then it rapidly descended into Deep Woo.

    I've been subscribing to YANSS for a long time, and it's generally well-researched and interesting. But this episode really should have taken a much more skeptical view of Hoffman's wilder theories (like the idea that consciousness, rather than spacetime, energy, quantum mechanics, etc. is fundamental to the dynamics of the universe).

    Hoffman repeatedly claims that his ideas "haven't been disproven," when the truth is that they don't make meaningful and testable physical predictions, and don't solve any significant problems in theoretical physics.

    When you're a professional working in the field of physics you see all manner of crackpot theories from people who have only a pop-culture understanding of physics. They're almost never worth spending any time to understand and debunk. That's no different when the crackpot theory comes from someone who holds a PhD and professorship in an unrelated field.

  3. Recommended reading:

    "Meat" by Terry Bisson.

    Excerpt:

    "They're made out of meat."

    "Meat?"

    "Meat. They're made out of meat."

    "Meat?"

    "There's no doubt about it. We picked up several from different parts of the planet, took them aboard our recon vessels, and probed them all the way through. They're completely meat."

  4. It's that model that seems the most far-fetched part to me. I don't understand how evolutionary theorists build these models, but I know that not very long ago they were building models that gave results that didn't reflect the real world. I have a suspicion that while they have refined their models they haven't come remotely close to perfecting them. So to go from a model to a probability of something being true in the real world with enough confidence to say "precisely" seems strange to me.

    Beyond that immediate quibble a few things he said seemed really concerning to me. He talked about beings of equal complexity. First of all, my short reading on the matter seems to suggest that biologists don't have a clear definition of what "complexity" is, which makes me wonder how you could model it. He knows more than I do on this front, though. I also understand that currently the belief is that evolution doesn't particularly select for complexity or simplicity.

    But it seems like he's talking entirely about DNA-based progression. What if you take one of the species in his model and give it the ability to make tools that can be handed down from one member of the species to its children. So survival isn't just dependent on your genetics but also on your technological inheritance. I think it's pretty easy to look at the world and see that technological inheritance is a more powerful fitness tool than any possible sequence of DNA, especially since sufficient technological inheritance would allow a species to simply rewrite its DNA in whatever way was desirable.

    Since we are talking about human beings and our ability to perceive reality and whether it has been useful to us, I don't see how we can leave this out of the discussion.

    But another thing is I really want to know how he could have modeled the value of the content of consciousness when he admits not knowing what consciousness is. He's saying that we can have a rich internal symbolic world (the user interface) that lets us distinguish things to eat from things not to eat without having any accurate perception of underlying reality, which sounds like it may or may not be true. But I wonder if he is also saying that in his model he assumed the two things were independent, so that having an accurate understanding of reality was an additional cost on top of that symbolic inner world. In reality, it could be that they are highly dependent on one another, and developing the symbolic world without a good understanding of reality would be more costly. To use the user interface analogy again, making really, really efficient code requires an understanding of the hardware it is running on. That analogy is super weak and isn't meant to prove anything; it's just meant to point out that there is a question I don't think it is even remotely possible he could have the answer to without knowing what consciousness is to begin with.
