Imagine trying to crack a language where every "word" is a complex pattern of clicks, whistles, and burst pulses. That's what Google's DolphinGemma AI model is tackling, running on waterproofed Pixel phones in the waters of the Bahamas.
The system, announced on Google's blog, makes use of 38 years of underwater recordings from the Wild Dolphin Project (WDP), the longest-running study of its kind. These recordings capture everything from mother dolphins calling their calves with unique signature whistles to aggressive "squawks" during confrontations. The AI processes these vocalizations in real-time, searching for patterns that could unlock the dolphins' communication code.
The technology pairs with bone-conducting underwater headphones, allowing researchers to hear and potentially respond to dolphin calls while swimming alongside the pods.
Google plans to open-source DolphinGemma this summer, enabling researchers studying other cetacean species to build on the work.
How soon before Google Translate includes Dolphinese?
Previously:
• Horny dolphin blamed for incident that left 18 people injured
• New documentary about John C. Lilly, the scientist who used LSD to communicate with dolphins and invented the sensory deprivation tank (video)
• Dolphin uses iPad as way to communicate with humans