What can a tiny fish teach us about the future of artificial intelligence? Quite a lot, it turns out: a virtual zebrafish built at Carnegie Mellon University is pointing the way toward AI that can explore on its own.
By: Marylee Williams
Professor Aran Nayebi, from Carnegie Mellon University, humorously compares his robot vacuum's limited intelligence to the playful autonomy of his cats, Zoe and Shira. Unlike the vacuum, which follows a pre-set path, his cats exhibit genuine curiosity and flexibility.
"Their brains are so much tinier than the Roomba, yet these animals have a kind of robust agency," Nayebi observes. This natural curiosity in animals inspired Nayebi and his team to build AI that can explore its environment without explicit instructions, much like his cats.
This work hints at a future where autonomous "AI agent scientists" could revolutionize data analysis, sifting through massive datasets without human bias, and uncovering hidden patterns.
Nayebi, part of a research team at Carnegie Mellon University's School of Computer Science, created a virtual zebrafish that mimics the behavior of a real one, even without prior training. This virtual fish replicates animal-like brain activity and exhibits autonomy in a simulated environment.
This autonomy is crucial for developing AI agents capable of open-ended exploration, or, in other words, for creating AI agent scientists.
"If we build AI scientists, we could take those moments of serendipity in scientific discovery, like how penicillin was discovered, and make them more likely," Nayebi explains. He also notes that AI agents can handle vast amounts of interconnected data more effectively than humans.
Furthermore, AI agents might outperform humans by avoiding the biases that often cloud human judgment. Humans tend to create narratives that can lead to misleading conclusions, while AI agents focus solely on the data.
The team chose the zebrafish because of prior research into its glial cells. Initially overlooked, these cells were found to play a crucial role in the larval zebrafish's ability to swim and explore.
When biologists disabled the zebrafish's tail, the fish entered a state of futility-induced passivity: a period of trying and failing to swim, followed by a period of inactivity. Interactions with glial cells then helped the fish try again.
Inspired by this, Nayebi and his team developed a computational method that allows an AI agent to explore and adapt to its environment without external rewards or labeled data. Their simulated larval zebrafish uses a model called Model-Memory-Mismatch Progress, or 3M-Progress, to understand its world.
The model's memory component has two parts: a current memory of real-time experiences and an "ethologically relevant prior memory" of how the world should work. A mismatch between these memories triggers an update to the model.
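The mismatch idea can be sketched in a few lines of code. This is a minimal toy illustration of a mismatch-style intrinsic signal, not the team's actual 3M-Progress implementation; the function name and the use of prediction errors here are assumptions for illustration only.

```python
import numpy as np

def mismatch_signal(prior_pred, current_pred, observation):
    """Toy sketch (hypothetical, not the authors' code): compare how well
    a fixed 'prior memory' of the world and a continually updated current
    model each predict what the agent actually observes."""
    prior_error = np.linalg.norm(observation - prior_pred)      # error of the fixed prior
    current_error = np.linalg.norm(observation - current_pred)  # error of the live model
    # A positive value means the current model explains the observation
    # better than the prior did; this gap can drive a model update.
    return prior_error - current_error
```

When the two memories disagree about the world, the signal is large, and in the paper's framing that disagreement is what triggers an update to the model.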
"Incorporating memory primitives, which are fixed priors about the world that the agent can remember and reference later, gives just enough flexibility to construct an intrinsic goal that not only captures zebrafish exploration behavior, but also predicts whole-brain activity at single-cell resolution from our agent’s artificial brain," says Reece Keller, a Ph.D. student involved in the research. "This is important because it emphasizes that animal intelligence is built on top of lots of biological priors.”
3M-Progress is an intrinsic-motivation algorithm, meaning it has its own built-in drive to explore, unlike reward-based AI agents. For example, a robot vacuum is a reward-based AI agent. The simulated zebrafish, however, is driven by a mismatch signal that pushes it toward curiosity-like exploration.
"We're training this virtual zebrafish with a 3M-Progress objective. This virtual zebrafish hasn’t been shown how real zebrafish move, and we’re not trying to force its ‘brain’ to match the data directly. Instead, we created a simulated environment, let it explore and evaluated its behavior afterward," Nayebi explains.
The researchers recreated the futility-induced passivity scenario and found that the virtual zebrafish exhibited similar behavior, even though the agent had never been trained on, or exposed to, that state.
Nayebi explains that the neural-glial connection is how biology implements the mismatch computation. The simulated zebrafish learns to recognize futility and suppress its actions, leading to cyclical behavior.
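The cyclical try-then-rest behavior described above can be illustrated with a short sketch. This is a hypothetical toy loop, not the paper's model; the failure threshold and rest duration are made-up parameters for illustration.

```python
def futility_cycle(attempts, failure_threshold=3, rest_steps=2):
    """Toy sketch (hypothetical): after enough consecutive failed attempts,
    the agent suppresses action for a while, then tries again, producing
    the alternating active/passive cycle described in the article."""
    log, fails, resting = [], 0, 0
    for success in attempts:
        if resting:                      # passive phase: action suppressed
            log.append("rest")
            resting -= 1
            continue
        log.append("try")                # active phase: attempt to act
        fails = 0 if success else fails + 1
        if fails >= failure_threshold:   # futility detected
            resting, fails = rest_steps, 0
    return log
```

Feeding the loop a run of failures produces bouts of trying separated by bouts of inactivity, the same cyclical pattern the biologists observed in the tail-disabled fish.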
Recreating this behavior in an AI agent helps researchers understand and replicate animal-like autonomy in AI. Nayebi notes that as researchers tackle more complex problems, the solutions become increasingly similar to how the brain works.
The CMU research team included Alyn Kirsch, Felix Pei, and Xaq Pitkow, along with Leo Kozachkov from Brown University.
Nayebi says the team's next step is to explore how this autonomy can be applied to different embodiments, not just zebrafish.