Thanks to a collaboration between Google and the Wild Dolphin Project (WDP), we might finally crack the code on dolphin language.
Since 1985, WDP has been in the cerulean seas of the Bahamas with underwater microphones, obsessively recording Atlantic spotted dolphins as they click, squeak, and whistle at each other. Google, in its unending, desperate quest to find a use for its AI models outside of poorly summarizing search results, has turned its AI guns on dolphins to see if it can decode dolphin-speak.
DolphinGemma is a large language model built to do what AI does best: predict patterns. But instead of completing your sentence in an email, it listens to dolphin yelps and tries to figure out what comes next, with the longer-term hope of decoding what all the whistling and squeaking actually means.
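If you want a feel for what “predicting patterns” means here, a few lines of Python get the idea across. Fair warning: this is a deliberately tiny sketch, nothing like Google’s actual model. The “sound tokens” are made up, and a bigram counter stands in for a transformer, but the job is the same: given what was just said, guess what comes next.

```python
# A toy illustration of next-token prediction. DolphinGemma does this
# with a transformer over audio tokens; this stand-in uses invented
# "sound tokens" and simple bigram counts, purely to show the idea.
from collections import Counter, defaultdict

# Hypothetical tokenized recording: each string is one discrete sound unit.
recording = ["click", "whistle", "click", "squeak", "click", "whistle",
             "click", "squeak", "click", "whistle"]

# Count how often each token follows each other token.
bigrams = defaultdict(Counter)
for prev, nxt in zip(recording, recording[1:]):
    bigrams[prev][nxt] += 1

def predict_next(token: str) -> str:
    """Return the most likely next sound given the current one."""
    followers = bigrams[token]
    return followers.most_common(1)[0][0] if followers else "<unknown>"

print(predict_next("click"))    # -> "whistle"
print(predict_next("whistle"))  # -> "click"
```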
DolphinGemma is trained on WDP’s enormous dolphin sound archive using SoundStream, a Google-built audio tokenizer that turns those chirps into discrete tokens the model can process. The result is a language model that may one day help us communicate with dolphins, and maybe help them communicate with us.
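Here’s roughly what “turning chirps into something the model can process” looks like in miniature. The real SoundStream uses a learned neural encoder with residual vector quantization; this NumPy sketch fakes the codebook with random vectors, just to show the basic waveform-in, tokens-out step.

```python
# A simplified sketch of audio tokenization: slice a waveform into frames
# and map each frame to the ID of its nearest codebook entry. The random
# codebook here is a stand-in; SoundStream learns its encoder and
# codebooks end to end.
import numpy as np

rng = np.random.default_rng(0)

sample_rate = 16_000
waveform = rng.standard_normal(sample_rate)       # 1 second of fake "dolphin" audio
frame_len = 320                                   # 20 ms frames at 16 kHz
codebook = rng.standard_normal((256, frame_len))  # 256 stand-in code vectors

# Chop the waveform into non-overlapping frames.
frames = waveform[: len(waveform) // frame_len * frame_len].reshape(-1, frame_len)

# Nearest-neighbor lookup: each frame becomes one integer token.
distances = ((frames[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
tokens = distances.argmin(axis=1)

print(tokens[:10])  # discrete tokens a language model can ingest
```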
Creating an actual, real-life universal translator a la Star Trek or, maybe more appropriately, a Babel fish like in The Hitchhiker’s Guide to the Galaxy, is a dream that has long been relegated to science fiction. The tech world has made some strides in recent years, most notably with live translation apps that can instantly convert and convey messages between languages, aiding many a lost traveler. But translating between species is a whole other matter, one that will truly put AI language models to the test.
This AI isn’t living in some Google server farm. It can’t, not when so much of the work is happening out on the open waters of the Bahamas. That’s why the DolphinGemma AI models run on Google Pixel phones. Corporate synergy at its finest.
Specifically, WDP’s custom device called CHAT (Cetacean Hearing Augmentation Telemetry) has been running on Pixel 6s, and now it’s getting an upgrade to the Pixel 9. The upgrade will let the team run deep learning models and template-matching algorithms at the same time, with both working in tandem to decode dolphin-speak.
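The template-matching half is the easier bit to picture: slide a known whistle snippet across incoming audio and flag anything that lines up closely enough. The sketch below uses normalized cross-correlation on synthetic signals; the audio and the 0.8 threshold are invented for illustration, not pulled from WDP’s actual pipeline.

```python
# A rough sketch of template matching on audio: slide a stored whistle
# template across a buffer and report offsets where the normalized
# cross-correlation clears a threshold.
import numpy as np

def detect(template: np.ndarray, audio: np.ndarray, threshold: float = 0.8):
    """Yield (sample offset, score) wherever the template matches the audio."""
    t = (template - template.mean()) / (template.std() + 1e-9)
    n = len(t)
    for start in range(len(audio) - n + 1):
        window = audio[start : start + n]
        w = (window - window.mean()) / (window.std() + 1e-9)
        score = float(np.dot(t, w)) / n  # Pearson correlation of the two snippets
        if score >= threshold:
            yield start, score

# Fake data: a synthetic whistle (a chirp) buried in noise at sample 4000.
rng = np.random.default_rng(1)
template = np.sin(np.linspace(0, 60, 800) ** 1.5)
audio = rng.standard_normal(8000) * 0.2
audio[4000:4800] += template

for offset, score in detect(template, audio):
    print(f"possible whistle at sample {offset} (score {score:.2f})")
```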
You should probably pump the brakes on any fantasy you’ve got running in your head of a researcher chatting into a phone, the phone spitting out dolphin whistles in reply, and so on until the two of them feel like they’ve known each other for years.
With a little more time and a few more AI models, though, who knows? Maybe one day there will be a dolphin with a YouTube channel or, God forbid, a podcast.