A few months back, I shared a piece from neuroscientist (and longtime BB reader!) Dan Graham about research into the language we use to discuss and understand the human brain. For years, scientists have relied on computer metaphors as the go-to point of comparison for brain function. But in his new book An Internet in Your Head: A New Paradigm for How the Brain Works, which is out today, Graham proposes a new way of looking at the language we use to talk about our minds: the internet.
Whether we realize it or not, we think of our brains as computers. In neuroscience, the metaphor of the brain as a computer has defined the field for much of the modern era. But as neuroscientists increasingly reevaluate their assumptions about how brains work, we need a new metaphor to help us ask better questions.
The computational neuroscientist Daniel Graham offers an innovative paradigm for understanding the brain. He argues that the brain is not like a single computer—it is a communication system, like the internet. Both are networks whose power comes from their flexibility and reliability. The brain and the internet both must route signals throughout their systems, requiring protocols to direct messages from just about any point to any other. But we do not yet understand how the brain manages the dynamic flow of information across its entire network. The internet metaphor can help neuroscience unravel the brain's routing mechanisms by focusing attention on shared design principles and communication strategies that emerge from parallel challenges. Highlighting similarities between brain connectivity and the architecture of the internet can open new avenues of research and help unlock the brain's deepest secrets.
An Internet in Your Head presents a clear-eyed and engaging tour of brain science as it stands today and where the new paradigm might take it next. It offers anyone with an interest in brains a transformative new way to conceptualize what goes on inside our heads.
This sparked some neat discussion about the ways we talk about and understand the functions of our minds, so I've gotten permission to share an excerpt from Graham's book, which hopefully leads to some more rollicking debate!
We think of our brains as computers. Whether we notice it or not, we invoke the metaphor of the brain as a computer anytime we talk about retrieving a memory, running on autopilot, being hardwired for something, or rebooting our minds. Neuroscientists are no less trapped in the computer metaphor. For almost as long as neuroscience has been a recognized field, the default approach has been to imagine the brain as a computing device.
Of course, most neuroscientists don't think the brain is literally a digital computer. But textbooks across the brain sciences routinely describe neurobiological processes of thinking and behavior as directly analogous to those of a computer, with programs, memory circuits, image processing, output devices, and the like. Even consciousness is described as the internal computational modeling of the external world. And although comparisons of the brain to a computing device are usually somewhat qualified, they are nearly ubiquitous. The metaphor is hard to escape or even notice because it is so ingrained in the way we think about the brain.
This situation exists in part because neuroscientists use the computer metaphor when describing the brain for the general public. Neuroscientist Dean Buonomano, in his 2011 book Brain Bugs, calls brain injuries and disorders a "system crash," and he writes of "disk space" and "upgrades" for our memory systems. Cognitive scientist Donald Hoffman analogizes our visual perception of the world with a computer desktop interface: "my icon of an apple guides my choice of whether to eat, as well as the grasping and biting actions by which I eat." Others, like brain scientist Gary Marcus, are uncompromising: "Face it," Marcus wrote in the New York Times, "your brain is a computer."
Neuroscientists typically see the job of a given part of the brain—single neurons, neural circuits, or brain regions—as computing something. At each level, electrical or chemical signals are passed among components and the components operate on the signals by computing something. Computing in this sense means taking in a signal, making the signal bigger or smaller, faster or slower, and then passing the signal along for further mathematical adjustment. What matters is the computational relationship between the magnitude of the signal coming in and the magnitude of the signal going out.
A neuron's job is often to compute a response when provided with some stimulus: a pattern of light, a sound, a social situation. With lots of neurons performing specialized computations, properties of our environment can be sensed, analyzed, stored, and linked to behavior. Working neuroscientists mostly agree that, although brains and computers differ in innumerable ways, they share a common set of "hacks." In other words, brains and computers exploit many of the same fundamental design principles.
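To make the "computation" picture above concrete, here is a minimal toy sketch of a unit that takes in signals, scales them, and passes along a transformed output. The weights, threshold, and rectifying rule are illustrative choices, not a claim about how real neurons are parameterized.

```python
def toy_neuron(inputs, weights, threshold=1.0):
    """Weighted sum of input signals, rectified at a firing threshold."""
    drive = sum(i * w for i, w in zip(inputs, weights))
    # Output is a graded magnitude: bigger input drive, bigger response.
    return max(0.0, drive - threshold)

# A strong stimulus pattern produces a graded response...
print(toy_neuron([0.5, 0.5, 0.5], [1.0, 1.0, 1.0]))  # 0.5
# ...while a weak pattern stays below threshold and produces none.
print(toy_neuron([0.25, 0.25, 0.25], [1.0, 1.0, 1.0]))  # 0.0
```

The point of the sketch is only the input-output relationship: everything the unit "does" is a mathematical adjustment of signal magnitude, which is exactly what the computer metaphor emphasizes.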
There is no doubt that the computer metaphor has been helpful and that the brain does perform computations. But neuroscience based on the computer metaphor is incomplete because it does not consider the principles of network communication. Neuroscientists are starting to realize that, in addition to performing computations, the brain also must communicate within itself. The key point is that, although communication involves computation, communication systems rely on different fundamental design principles than those of computing systems.
Although it has been little studied, brain-wide communication is attracting greater interest. We increasingly understand the physical structure of the brain as a highly interconnected network. The connectomics movement aims to map this network, as well as its dynamic activity. Through increasingly massive studies of the structure of neuronal networks, a new picture of brain function in complex animals is emerging. We are beginning to understand that one of the connectome's main jobs is to support brain intercommunication.
At the moment, however, there is no guiding principle for how these interconnected networks carry messages to and from a given part of the brain. We don't know the rules about how traffic on brain networks is directed or how the rules relate to our capabilities of thinking and behavior. We don't even know how to investigate this. What's missing, at least in part, is an appropriate metaphor to help us think about how the brain communicates within itself. I propose that the internet is that metaphor. The computer metaphor and the internet metaphor can coexist and inform one another. For one thing, the internet is obviously made up of computers. But it has emergent properties and rules that differ from those that govern single computers.
The coexistence of computation and communication metaphors—and the change in perspective needed to understand communication strategies—can be understood as being analogous to a time traveler from the past encountering today's internet. Imagine a 1950s-era electrical engineer transported to the present day. The engineer doesn't know what the internet is, but given a standard Wi-Fi router, she is curious enough to open it up and record electrical currents from its circuit board. By carefully measuring voltage changes over time at various locations on the circuit board, the engineer could probably learn to identify different kinds of components, such as diodes and transistors. In doing so, she could deduce the computations each one performs. But the stream of voltage variations entering or leaving the router would be very difficult to interpret. Measuring only the sheer number of signals would reveal little.
In brains, we have something similar. We can measure the activity of individual cells in the brain and deduce the rules that govern their electrical changes. In a much more limited way, we can measure large-scale brain activity. But we can't observe how messages are transmitted across several synapses in the brain or the branching, dynamic paths these messages may take.
In short, we don't know the brain's strategy for passing messages across the whole brain. Indeed, supposing the existence of "messages" is somewhat heretical. But returning to our time-traveling engineer, if she knew the general rules for message passing on a computer network, she might be able to identify the role played by a given diode or transistor in the router. The same should be true for brains: if we could work out the basic principles of message passing, we could understand the role of individual neural computations.
For decades, neuroscientists have been measuring diodes and transistors and ignoring the larger system of message passing. We should think more about the brain as a unified communication system in science—and in society. Going further, we can investigate the brain in reference to the general principles that make the internet the universe's most powerful, flexible, and robust communication system. This change in viewpoint can also help us all understand and utilize our own brains more effectively.
We know that brains must intercommunicate at all levels, from the biochemistry of synapses to whole-brain oscillations in electrical activity. Most importantly, it must be possible to send messages selectively in the brain without changing the structure of the cellular network of neurons. All kinds of tasks involve sending messages to one place sometimes and to another place at other times. This seems obvious when stated directly, but it is rarely acknowledged.
It's like what happens at an ice cream shop when we decide between chocolate and vanilla. It must be possible for a decision-making neuron in our brain to direct a signal to the neural output for saying "chocolate" or, alternatively, to the neural output for saying "vanilla." We might even say "chocolate—no, wait! Vanilla!" because we remember that the vanilla at the shop is especially tasty, and thereby draw upon memories stored elsewhere on the network to change the route of the message in real time. The trace of communication across the network can change almost instantaneously. But our brain accomplishes this without altering neuronal network connectivity.
Neuroscientists have extensively studied the decision-making computations occurring in neurons. These neurons appear to "decide" to fire or not fire by accumulating evidence from input signals over time. But it is not known how the computed decision is routed to the selected output neurons. This question has not really even been asked.
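The evidence-accumulation picture of a "decision" neuron can be sketched in a few lines: the unit sums incoming evidence over time and commits once the running total crosses a bound. The bound and the input values here are arbitrary illustrative numbers, not fitted parameters from any study.

```python
def accumulate_to_bound(evidence_stream, bound=3.0):
    """Return the time step at which accumulated evidence crosses
    the bound, or None if it never does."""
    total = 0.0
    for t, sample in enumerate(evidence_stream):
        total += sample
        if total >= bound:
            return t  # the unit "decides" to fire at this step
    return None

# Weak but steady evidence takes longer to trigger a decision
# than strong evidence does.
print(accumulate_to_bound([0.5] * 10))  # crosses at step 5
print(accumulate_to_bound([1.5] * 10))  # crosses at step 1
```

Notice what the sketch leaves out, which is exactly Graham's point: it says when the decision fires, but nothing about where the resulting signal is sent.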
Other parts of the body also intercommunicate, and it's worth considering whether the solutions adopted in other biological systems are useful comparisons. The immune system, for example, is predicated on the ability to pass information about the presence of pathogens to the appropriate internal security forces. Great armies of antibodies patrol every milliliter of blood, applying tiny labels to anything suspicious. As tagged microbes circulate through the body, the tags are eventually noticed and the offender pulled aside and killed. The message, as it were, has been received. If antibodies are the immune system's messages, passed by physical movement in miles of blood vessels, the brain's messages are something altogether different. In the brain, messages consist of electrical signals and their chemical intermediaries. Messages travel over a highly interconnected—but fixed—network of "wires." No individual component of the brain moves very far, at least in the short term. It is this kind of networked message passing that defines neural communication. Just like the immune system, the brain must have effective global rules and strategies for communication. But these rules are specialized for a system made of neurons and linked to functions of thinking and behavior.
In recent years, a small but growing community of researchers has investigated the message-passing rules operating on brain networks. A few researchers have proposed internet-like solutions to the challenge of passing signals in the brain in a flexible way, though the theories have only occasionally been described as routing theories. Routing here refers to the directing of signals from one part of the network to another part according to a set of global rules. We can start to see things from a new perspective—and see how the internet metaphor can aid us—by recasting neural computation as neural routing.
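Routing in the sense used here can be illustrated with a toy example: a fixed network plus one global rule (shortest-path search, in this sketch) that can direct a message from any node to any other on demand. The graph and the node names are invented for illustration; real proposals for neural routing are far richer than breadth-first search.

```python
from collections import deque

def route(graph, source, target):
    """Breadth-first search: return a shortest path from source to
    target over a fixed network, or None if the target is unreachable."""
    frontier = deque([[source]])
    visited = {source}
    while frontier:
        path = frontier.popleft()
        if path[-1] == target:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

# A made-up miniature "connectome": the wiring never changes.
toy_net = {
    "sensory": ["relay"],
    "relay": ["memory", "motor_a", "motor_b"],
    "memory": ["relay"],
    "motor_a": [],
    "motor_b": [],
}

# The same fixed wiring supports different routes at different
# moments ("chocolate—no, wait! Vanilla!") with no change to the
# connections themselves, only to which path is selected.
print(route(toy_net, "sensory", "motor_a"))  # ['sensory', 'relay', 'motor_a']
print(route(toy_net, "sensory", "motor_b"))  # ['sensory', 'relay', 'motor_b']
```

The internet works this way too: routers apply shared global protocols to a fixed physical mesh, which is the design parallel the book asks neuroscience to take seriously.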
Excerpted from An Internet in Your Head by Daniel Graham. Copyright (c) 2021 Columbia University Press. Used by arrangement with the Publisher. All rights reserved.