Date: 16 November 2016 15:34
Children learn through play, so why shouldn't machines? For humans, the easiest way to learn about an object’s properties – whether it is hot or cold, light or heavy, sharp or blunt – is to pick it up and explore it with their hands.
Now AI engineers at Google’s DeepMind are training machines to learn the same way, by exploring the physical properties of virtual blocks in order to compare them.
Explaining the work, the group said: 'In the past few years deep learning has gotten really good at understanding the world through passive observation, but a lot of human understanding as pointed out above comes through interaction as well.'
Using simulations, they enabled machines to work out hidden properties of the virtual objects by manipulating them. If a child were presented with two blocks painted black, one made of wood and one made of lead, he or she could work out the blocks' basic properties through playing with them.
The shape and colour of the blocks are obvious at a glance, but the weight of the blocks is a ‘hidden’ property, which can be worked out only by picking them up and comparing them. Through a series of experiments in a virtual environment, the DeepMind team was able to train an AI to explore like a child, ‘playing’ with virtual objects to discover their properties.
In the first experiment, they used a ‘which is heavier’ test for their AI to compare hidden masses of blocks, by ‘poking’ all of them before making a decision on which had the greatest mass. They found the harder the task was, the longer the AI spent collecting data – spending more time poking blocks to explore their properties before making up its mind.
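The behaviour described above can be caricatured in a few lines of Python. This is a toy sketch, not DeepMind's actual setup (which trains deep reinforcement learning agents in a physics simulator): here an agent ‘pokes’ two blocks, accumulates noisy evidence about which moves less, and only answers once the evidence is decisive, so closer masses (a harder comparison) naturally demand more pokes. The function names and noise model are illustrative assumptions.

```python
import random

def poke(mass, noise=0.3):
    """Simulated poke: a noisy reading of how far a block moves.
    Lighter blocks move further, so the reading is ~1/mass plus noise."""
    return 1.0 / mass + random.gauss(0.0, noise)

def which_is_heavier(mass_a, mass_b, threshold=2.0, max_pokes=1000):
    """Poke both blocks until the accumulated evidence clears a
    confidence threshold, then answer.

    Returns ('a' or 'b', number of pokes used). A positive evidence
    total means block b moved less, i.e. b is the heavier one."""
    evidence, pokes = 0.0, 0
    while abs(evidence) < threshold and pokes < max_pokes:
        evidence += poke(mass_a) - poke(mass_b)
        pokes += 1
    return ('b' if evidence > 0 else 'a'), pokes
```

Run repeatedly, the agent answers an easy comparison (masses 1 vs 5) in a handful of pokes, while a hard one (masses 1 vs 1.2) takes noticeably longer on average, mirroring the paper's observation that harder tasks prompt more data collection.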
A second trial was set up in a block building simulation, where the AI was tasked with working out the building blocks which made up a rigid tower. In the trial, the machine learned by poking the virtual tower, knocking it over and exploring the way the blocks landed to work out their properties.
Some of the blocks in the tower were stuck together, so the only way for the AI to find out how many separate blocks made up the virtual structure was to poke it and see how the components came apart. Using a trial-and-error approach, the machines learned they had to knock the towers over to solve the puzzle.
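The counting task can be sketched in the same toy spirit (again, a stand-in for the actual simulated physics, with illustrative names): glued blocks form rigid pieces that land together when the tower is knocked over, so counting the separate piles reveals a property that vision alone cannot.

```python
import random

def knock_over(tower):
    """Simulate knocking the tower over. `tower` is a list of
    (block, glue_id) pairs; blocks sharing a glue_id are stuck
    together and land as one rigid piece at its own spot."""
    groups = {}
    for block, glue_id in tower:
        groups.setdefault(glue_id, []).append(block)
    # Each rigid piece comes to rest at a random, distinct position.
    positions = random.sample(range(100), len(groups))
    return list(zip(positions, groups.values()))

def count_pieces(tower):
    """From vision alone the standing tower is ambiguous; after
    knocking it over, the agent simply counts the separate piles."""
    return len(knock_over(tower))
```

For example, a five-block tower whose blocks are glued into three rigid pieces looks like five blocks while standing, but yields three piles once toppled.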
Writing in an as-yet-unpublished paper posted on the online arXiv server, the researchers explain: ‘By letting our agents conduct physical experiments in an interactive simulated environment, they learn to manipulate objects and observe the consequences to infer hidden object properties.’
They add: ‘We demonstrate the efficacy of our approach on two important physical understanding tasks – inferring mass and counting the number of objects under strong visual ambiguities.’ Commenting on the potential applications of the work, DeepMind's Misha Denil said: 'I think right now concrete applications are still a long way off, but in theory any application where machines need an understanding of the world that goes beyond passive perception could benefit from this work.
'This might include machines that can manipulate complex materials, or machines that can navigate precarious terrains for things such as disaster response.'
Developing machines that learn through play could prove to be a fruitful avenue for pushing AI forward. Earlier this week, researchers in Italy launched a new project to develop robots which learn by themselves, using a form of open-ended machine learning.
Called goal-based open-ended autonomous learning (GOAL), the project aims to build independent robots that set their own targets, which would be a breakthrough for artificial intelligence.