A decade ago, UC Berkeley researchers decoded fMRI data from people's brains to reconstruct images of movie trailers they had previously watched. Back then, the reconstructions were noisy and grainy; if you didn't know what you were looking at, they were hard to make sense of. Still, the research offered a fascinating glimpse of what could someday become a form of digital telepathy.

Now, Japanese researchers at Osaka University have taken a similar approach, but with the Stable Diffusion generative AI model. The images reconstructed from the brain activity data are astonishingly clear. Far fucking out. From the research paper:
Reconstructing visual experiences from human brain activity offers a unique way to understand how the brain represents the world, and to interpret the connection between computer vision models and our visual system. While deep generative models have recently been employed for this task, reconstructing realistic images with high semantic fidelity is still a challenging problem. Here, we propose a new method based on a diffusion model (DM) to reconstruct images from human brain activity obtained via functional magnetic resonance imaging (fMRI). More specifically, we rely on a latent diffusion model (LDM) termed Stable Diffusion. This model reduces the computational cost of DMs, while preserving their high generative performance […]
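In rough terms, the idea is to learn a mapping from fMRI signals into Stable Diffusion's latent image space and its text-conditioning space, then let the off-the-shelf model generate the picture from those predicted latents. Here's a minimal sketch of that regression step in Python; the variable names, toy data shapes, and ridge penalty are illustrative assumptions, not the authors' actual code or dimensions:

```python
import numpy as np
from sklearn.linear_model import Ridge

# Toy stand-ins for the real data (shapes are illustrative, not the study's):
#   X: fMRI voxel responses, one row per viewed image
#   Z: Stable Diffusion image latents for those images (4x64x64, flattened)
#   C: text-conditioning embeddings for captions of those images (77x768, flattened)
rng = np.random.default_rng(0)
X = rng.standard_normal((300, 1000)).astype(np.float32)
Z = rng.standard_normal((300, 4 * 64 * 64)).astype(np.float32)
C = rng.standard_normal((300, 77 * 768)).astype(np.float32)

# Fit simple regularized linear maps from brain activity into the diffusion
# model's latent image space (z) and its text-conditioning space (c).
# The alpha value is a guess, not a tuned hyperparameter from the paper.
z_map = Ridge(alpha=100.0).fit(X, Z)
c_map = Ridge(alpha=100.0).fit(X, C)

# At test time, predict both kinds of latents from new brain scans...
X_new = rng.standard_normal((4, 1000)).astype(np.float32)
z_pred = z_map.predict(X_new).reshape(-1, 4, 64, 64)
c_pred = c_map.predict(X_new).reshape(-1, 77, 768)

# ...then hand z_pred and c_pred to a standard Stable Diffusion pipeline,
# which denoises from the predicted image latent under the predicted text
# conditioning and decodes the result into the final picture (omitted here).
```

The heavy lifting, in other words, is done by the pretrained diffusion model; the brain-reading part reduces to fitting fairly simple regressions into spaces the model already understands.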
(Thanks, UPSO!)