Artificial intelligence (AI) is evolving—literally.
Researchers have created software that borrows concepts from Darwinian evolution, including “survival of the fittest,” to build AI programs that improve generation after generation without human input. The program replicated decades of AI research in a matter of days, and its designers think that one day, it could discover new approaches to AI.
“While most people were taking baby steps, they took a giant leap into the unknown,” says Risto Miikkulainen, a computer scientist at the University of Texas, Austin, who was not involved in the work. “This is one of those papers that could launch a lot of future research.”
Building an AI algorithm takes time. Take neural networks, a common type of machine learning used for translating languages and driving cars. These networks loosely mimic the structure of the brain and learn from training data by altering the strength of connections between artificial neurons. Smaller subcircuits of neurons carry out specific tasks—for instance, spotting road signs—and researchers can spend months working out how to connect them so they work together seamlessly.
In recent years, scientists have sped up the process by automating some steps. But these programs still rely on stitching together ready-made circuits designed by humans. That means the output remains limited by engineers’ imaginations and their existing biases.
So Quoc Le, a computer scientist at Google, and colleagues developed a program called AutoML-Zero that could develop AI programs with effectively zero human input, using only basic mathematical concepts a high school student would know. “Our ultimate goal is to actually develop novel machine learning concepts that even researchers could not find,” he says.
The program discovers algorithms using a loose approximation of evolution. It starts by creating a population of 100 candidate algorithms by randomly combining mathematical operations. It then tests them on a simple task, such as an image recognition problem where it has to decide whether a picture shows a cat or a truck.
In each cycle, the program compares the algorithms’ performance against hand-designed algorithms. Copies of the top performers are “mutated” by randomly replacing, editing, or deleting some of their code to create slight variations of the best algorithms. These “children” get added to the population, while older programs get culled. The cycle repeats.
The system creates thousands of these populations at once, which lets it churn through tens of thousands of algorithms a second until it finds a good solution. The program also uses tricks to speed up the search, such as occasionally exchanging algorithms between populations to prevent evolutionary dead ends, and automatically removing duplicate algorithms.
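The evolutionary loop described above (random population, fitness evaluation, mutation of top performers, culling of older programs) can be sketched in a few dozen lines. The sketch below is a toy illustration under assumed simplifications, not AutoML-Zero’s actual code: candidate “algorithms” are short sequences of arithmetic steps, and the cat-versus-truck task is replaced by a one-dimensional sign-classification stand-in.

```python
import random

# Toy instruction set: each "algorithm" is a sequence of (operation, constant)
# steps applied to a scalar input. This stands in for AutoML-Zero's richer
# library of mathematical operations.
OPS = ["add", "mul", "max"]

def random_program(length=4):
    return [(random.choice(OPS), random.uniform(-1, 1)) for _ in range(length)]

def run(program, x):
    for op, c in program:
        if op == "add":
            x = x + c
        elif op == "mul":
            x = x * c
        else:
            x = max(x, c)
    return x

def fitness(program, data):
    # Fraction of examples where the sign of the output matches the binary
    # label (a stand-in for cat-vs-truck accuracy).
    correct = sum((run(program, x) > 0) == label for x, label in data)
    return correct / len(data)

def mutate(program):
    # Randomly replace one step to create a slight variation of the parent.
    child = list(program)
    i = random.randrange(len(child))
    child[i] = (random.choice(OPS), random.uniform(-1, 1))
    return child

def evolve(data, pop_size=100, generations=50):
    population = [random_program() for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(population, key=lambda p: fitness(p, data), reverse=True)
        parents = scored[: pop_size // 5]            # keep the top performers
        children = [mutate(random.choice(parents))   # "children" join the population...
                    for _ in range(pop_size - len(parents))]
        population = parents + children              # ...while older programs are culled
    return max(population, key=lambda p: fitness(p, data))

# Toy data: positive inputs labeled True, negative labeled False.
data = [(x / 10, x > 0) for x in range(-10, 11) if x != 0]
best = evolve(data)
print(fitness(best, data))
```

The real system runs many such populations in parallel and uses a far larger operation library, but the selection-mutation-culling cycle is the same basic mechanism.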
In a preprint paper published last month on arXiv, the researchers show the approach can rediscover a variety of classic machine learning techniques, including neural networks. The solutions are simple compared with today’s most advanced algorithms, admits Le, but he says the work is a proof of principle and he is optimistic it can be scaled up to create far more complex AIs.
Still, Joaquin Vanschoren, a computer scientist at the Eindhoven University of Technology, thinks it will be a while before the approach can compete with the state of the art. One thing that could improve the program, he says, is not asking it to start from scratch, but instead seeding it with some of the tricks and techniques humans have already discovered. “We can prime the pump with learned machine learning concepts.”
That’s something Le plans to work on. Focusing on smaller problems rather than entire algorithms also holds promise, he adds. His group published another paper on arXiv on 6 April that used a similar approach to redesign a popular ready-made component used in many neural networks.
But Le also believes that boosting the number of mathematical operations in the library and dedicating even more computing resources to the program could let it discover entirely new AI capabilities. “That’s a direction we’re really excited about,” he says. “To discover something really fundamental that would take a long time for humans to figure out.”