Meet the latest artificial intelligence from DeepMind, the Google-owned AI lab: a program that can learn like a human.
The AI program is a step closer to human-like learning, using previous knowledge to solve fresh problems
According to Google, DeepMind AI mirrors the learning brain in a simple way: it reuses what it has learned and applies it to solve new tasks.
The researchers have overcome one of the major stumbling blocks in artificial intelligence with a program that can learn one task after another using skills it acquires on the way.
Furthermore, the program has taken on a range of different tasks and performed almost as well as a human. Crucially, and uniquely, the AI does not forget how it solved past problems, and it uses that knowledge to tackle new ones.
However, the AI is not capable of the general intelligence that humans draw on when they are faced with new challenges; its use of past lessons is more limited.
Nonetheless, the work shows a way around a problem that had to be solved if researchers are ever to build so-called artificial general intelligence (AGI) machines that match human intelligence.
“If we’re going to have computer programs that are more intelligent and more useful, then they will have to have this ability to learn sequentially,” said James Kirkpatrick at DeepMind.
The ability to remember old skills and apply them to new tasks comes naturally to humans. A regular rollerblader might find ice skating a breeze, because one skill helps the other.
But recreating this ability in computers has proved a huge challenge for AI researchers. AI programs are typically one-trick ponies that excel at one task, and one task only.
Most AIs depend on programs called neural networks, which learn how to perform tasks, such as playing chess or poker, through countless rounds of trial and error.
But once a neural network is trained to play chess, it can only learn another game later by overwriting its chess-playing skills. It suffers from what AI researchers call “catastrophic forgetting”, the Guardian reported.
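Catastrophic forgetting is easy to reproduce in miniature. The toy sketch below (my own illustration, not DeepMind's code) trains a tiny linear model on one task, then on a second task with the same weights, and shows that performance on the first task collapses:

```python
import numpy as np

def train(w, X, y, lr=0.1, steps=200):
    # Plain gradient descent on squared error for a linear model y ~ X @ w.
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def mse(w, X, y):
    return float(np.mean((X @ w - y) ** 2))

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
w_a = rng.normal(size=5)           # "task A": recover these target weights
w_b = rng.normal(size=5)           # "task B": a different target
y_a, y_b = X @ w_a, X @ w_b

w = np.zeros(5)
w = train(w, X, y_a)               # learn task A
err_a_before = mse(w, X, y_a)      # near zero: task A is mastered
w = train(w, X, y_b)               # now learn task B with the same weights...
err_a_after = mse(w, X, y_a)       # ...and task A has been overwritten

print(err_a_before < 1e-3, err_a_after > 0.1)
```

Nothing in the second round of training protects the weights that mattered for task A, so they are simply repurposed for task B.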
Without the ability to build one skill on another, AIs will never learn like people, or be flexible enough to master fresh problems the way humans can.
“Humans and animals learn things one after the other and it’s a crucial factor which allows them to learn continually and to build upon their previous knowledge,” said Kirkpatrick.
In order to build the new AI, the researchers drew on studies from neuroscience which show that animals learn continually by preserving brain connections that are known to be important for skills learned in the past. The lessons learned in hiding from predators are crucial for survival, and mice would not last long if that know-how was erased by the skills needed to find food.
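The DeepMind paper calls this technique elastic weight consolidation: after a task is learned, each weight is assigned an importance score, and training on the next task is penalized for moving the important weights away from their old values. A minimal sketch of that penalty (the variable names and numbers here are my own illustration, not DeepMind's code):

```python
import numpy as np

def ewc_loss(task_b_loss, weights, old_weights, importance, lam=1.0):
    """Task-B loss plus a quadratic 'anchor' that pulls important weights
    back toward their task-A values:
        L = L_B + (lam / 2) * sum_i F_i * (w_i - w*_i)**2
    where F_i is the importance of weight i for task A."""
    penalty = 0.5 * lam * np.sum(importance * (weights - old_weights) ** 2)
    return task_b_loss + penalty

w_star = np.array([1.0, -2.0, 0.5])   # weights after learning task A
F = np.array([10.0, 0.1, 5.0])        # importance: 1st and 3rd weights matter for A
w = np.array([1.1, 3.0, 0.6])         # candidate weights while learning task B

# Moving the unimportant middle weight is cheap; moving the others is not.
penalty_only = ewc_loss(0.0, w, w_star, F)
print(penalty_only)
```

The effect is that learning task B is steered through weights that task A does not care about, which is how the program keeps old skills while acquiring new ones.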
The researchers put the AI through its paces by letting it play 10 classic Atari games, including Breakout, Space Invaders and Defender, in random order. They found that after several days of playing each game, the AI was as good as a human player at typically seven of the games. Without the new memory consolidation approach, the AI barely learned to play one of them.