New theory of why our dreams are so weird

Why are our dreams so weird?

God knows they are! Scientists have never been quite sure why we dream, though. One common theory is that we dream to help consolidate new memories; the idea seems to check out, because in experiments, when you interrupt people's dream-sleep, they have a harder time learning and remembering new things.

Still, this doesn't quite explain why our dreams are often so odd. Why would we so often dream about things that seem to have nothing to do with our daily lives? Why would we have such hallucinogenically nutty dreams? How would that help us retain stuff?

But maybe the weirdness and irrelevance are the point. Maybe dreams need to be odd to be useful.

This is the gist of a new theory by the Tufts University neuroscientist Erik Hoel, which he developed by pondering the ways our brains might be like artificial-intelligence neural nets.

As Hoel notes, when computer scientists are training a neural net — basically, trying to get it to learn something — they run into the problem of "overfitting". That's when you're training your neural network on a bunch of example data, and it finds the perfect trendline for precisely that pile of data … but only that pile of data. It's useless at predicting trendlines in new batches of data. (I'm being a little handwavey and metaphoric here, but you get my point.)
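To make that concrete, here's a toy sketch of overfitting. Everything here is invented for illustration (it's not from Hoel's paper): we fit both a wiggly degree-9 polynomial and a plain straight line to the same ten noisy points, then check each against fresh points drawn from the same underlying trend.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ten noisy samples of a simple underlying trend (y = x plus noise).
x_train = np.linspace(0, 1, 10)
y_train = x_train + rng.normal(scale=0.1, size=10)

# Fresh samples from the same process, never seen during fitting.
x_test = np.linspace(0.05, 0.95, 10)
y_test = x_test + rng.normal(scale=0.1, size=10)

def mse(coeffs, x, y):
    """Mean squared error of a fitted polynomial on (x, y)."""
    return np.mean((np.polyval(coeffs, x) - y) ** 2)

# A degree-9 polynomial can thread through all ten training points...
overfit = np.polyfit(x_train, y_train, deg=9)
# ...while a straight line only captures the general trend.
simple = np.polyfit(x_train, y_train, deg=1)

print("degree-9 train/test MSE:", mse(overfit, x_train, y_train), mse(overfit, x_test, y_test))
print("degree-1 train/test MSE:", mse(simple, x_train, y_train), mse(simple, x_test, y_test))
```

On a typical run, the degree-9 fit nails the training points almost perfectly and does noticeably worse on the held-out ones, while the humble straight line generalizes just fine. That's overfitting in a nutshell.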

Anyway, AI scientists have long used a technique to help avoid overfitting — they feed a bit of noise into the neural net while they're training it. This keeps the neural net from perseverating too avidly on the data and helps it extract more general lessons.
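Here's what that trick looks like in miniature. Again, this is a hedged sketch on a made-up dataset, not anything from Hoel's work: a tiny model trained by gradient descent, with fresh Gaussian noise poured onto the inputs at every step.

```python
import numpy as np

rng = np.random.default_rng(1)

# Made-up dataset: 20 samples, 5 features; the target truly depends
# only on feature 0, so the other weights should end up near zero.
X = rng.normal(size=(20, 5))
y = X[:, 0] + rng.normal(scale=0.05, size=20)

def train(X, y, noise_scale=0.0, steps=2000, lr=0.01):
    """Fit linear weights by gradient descent on squared error.

    If noise_scale > 0, fresh Gaussian noise is added to the inputs
    at every step, which discourages the model from memorizing the
    quirks of this particular sample.
    """
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        X_step = X + rng.normal(scale=noise_scale, size=X.shape)
        grad = X_step.T @ (X_step @ w - y) / len(y)
        w -= lr * grad
    return w

print("no noise:  ", np.round(train(X, y), 2))
print("with noise:", np.round(train(X, y, noise_scale=0.5), 2))
```

Without noise, the model picks up small spurious weights on the irrelevant features; with noise, those weights get squashed toward zero. (For a linear model, input noise is mathematically equivalent to ridge regularization, which is why the trick works.)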

Dreams, Hoel argues, do the same thing for our brains. While the brain is asleep, chewing over the events of the day, it creates dreams to add noise, so it doesn't overfit the data.

As a piece about Hoel's work in Discover notes:

The most common way to tackle overfitting is to add some noise to the learning process, to make it harder for the neural network to focus on irrelevant detail. In practice, researchers add noise to images or feed the computer with corrupted data and even remove random nodes in the neural network, a process known as dropout. [snip]

Dreams perform the same function for the brain, says Hoel: "The purpose is to provide 'out-of-distribution' simulations specifically to prevent overfitting and improve generalization."
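For the curious, the "dropout" the Discover piece mentions is simple enough to sketch in a few lines. This is the textbook "inverted dropout" version, written from scratch rather than pulled from any particular library:

```python
import numpy as np

rng = np.random.default_rng(2)

def dropout(activations, p=0.5, training=True):
    """Randomly zero out each unit with probability p while training.

    Surviving units are scaled by 1/(1-p) ("inverted dropout") so the
    layer's expected output stays the same when dropout is turned off.
    """
    if not training:
        return activations
    mask = rng.random(activations.shape) >= p
    return activations * mask / (1.0 - p)

h = np.ones(8)                     # stand-in for a layer's activations
print(dropout(h))                  # roughly half the units silenced, rest scaled up
print(dropout(h, training=False))  # unchanged at inference time
```

Randomly silencing units forces the network not to lean too hard on any single one — which is roughly the spirit of Hoel's argument: inject noise so the system can't simply memorize.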

This might even explain why we like fiction, Hoel suggests:

Hoel says that fiction in general — books, plays, films etc — might perform a similar role to dreams. "They are, after all, explicitly false information," he points out.

Just why humans create and enjoy fiction has always been something of a puzzle. But Hoel has an answer: "The overfitted brain hypothesis suggests fictions, and perhaps the arts in general, may actually have an underlying cognitive utility in the form of improving generalization and preventing overfitting, since they act as artificial dreams."

I'm intrigued. I'm always a bit skeptical of overly direct comparisons between human brains and AI neural nets. Today's computer AI, even the most elite stuff, is unfathomably less complicated than human wetware. Still, I've been reading about dream science for years, and this is one of the more provocative ideas I've seen in a while.

If you want to dig deeper into Hoel's idea, he's got a paper on the "overfitted brain hypothesis" up on arXiv here.