Researchers at Google DeepMind, Google's artificial intelligence laboratory, have developed an AI model designed to interpret dolphin vocalizations, supporting ongoing efforts to better understand how dolphins communicate.
The model, dubbed DolphinGemma, was trained on data provided by the Wild Dolphin Project (WDP), a nonprofit organization that studies the behavior and social patterns of Atlantic spotted dolphins. Built on Google's open Gemma series of models, DolphinGemma can generate sound sequences that mimic dolphin vocalizations and, according to Google, is efficient enough to run on mobile devices.
This summer, the Wild Dolphin Project plans to use Google's Pixel 9 smartphone to power a platform that can produce synthetic dolphin vocalizations and detect dolphin sounds in order to generate a matching "response." The organization previously used the Pixel 6 for this purpose, but Google notes that upgrading to the Pixel 9 will let researchers run AI models and template-matching algorithms simultaneously.
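The article doesn't describe WDP's detection pipeline in detail, but a template-matching detector of the kind mentioned here generally slides stored reference calls across incoming audio and reports the best correlation. Below is a minimal, hypothetical sketch in Python; the function name, labels, threshold, and toy template are illustrative assumptions, not WDP's or Google's actual code.

```python
import numpy as np

def best_template_match(signal: np.ndarray,
                        templates: dict[str, np.ndarray],
                        threshold: float = 0.6) -> str | None:
    """Return the label of the stored template that best matches the
    incoming audio window, or None if no score clears the threshold.
    (Hypothetical helper; names and threshold are assumptions.)"""
    best_label, best_score = None, threshold
    for label, template in templates.items():
        # Rough normalized correlation: z-score both signals globally,
        # slide the template across the window, and take the peak score.
        t = (template - template.mean()) / (template.std() + 1e-9)
        s = (signal - signal.mean()) / (signal.std() + 1e-9)
        score = (np.correlate(s, t, mode="valid") / len(t)).max()
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Toy usage: one stored 12 kHz whistle template, one second of noise.
sr = 48_000  # assumed sample rate in Hz
templates = {"whistle_A": np.sin(2 * np.pi * 12_000 * np.arange(sr // 10) / sr)}
window = np.random.randn(sr)  # placeholder for captured audio
print(best_template_match(window, templates))  # likely None for pure noise
```

On the Pixel platform described above, a detector like this could run alongside the generative model: when an incoming call clears the match threshold, the system would play back the synthetic vocalization associated with that label.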