Date: 14 December 2015 10:49
A major breakthrough in computer science could lead to smarter and more adaptable computer programs. The advance could improve the recognition software used in smartphones and robots, IFL Science reported.
The new study, published this week in Science, presents a program that can learn a concept from a single example, showing for the first time that it’s possible for a machine to learn the way humans do. The software uses a statistical approach to break new concepts down into known components: like a child, the algorithm builds on what it already knows, adding complexity as it goes.
Computers are getting smarter and can do calculations faster and more accurately than any human, but the road to artificial intelligence remains a long one. In the last decade, there has been an increased focus on how machines learn: computers tend to require hundreds, if not thousands, of examples before they can generalize the way a human does. This approach, known as deep neural networks, is used by Facebook, for example, to recognize faces in pictures.
"When people learn novel concepts, they do not just see characters as static visual objects," Dr. Brendan Lake, lead author of the paper, said at a press conference. "Instead, they see richer structure, like a causal model or a sequence of pen strokes, that describes how to efficiently produce new examples of the concept."
The computer was taught to recognize and reproduce handwritten characters as best it could, based on the number and shape of their strokes. Its ability to “learn to learn” let the machine quickly grasp new symbols. The researchers applied the model to over 1,600 types of handwritten characters from the world’s alphabets, including Sanskrit, Greek, and Tibetan, as well as invented characters (some from the TV show "Futurama").
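To make the one-shot idea concrete, here is a minimal toy sketch (not the authors' actual model, which scores candidates with a generative probabilistic program): each character concept is known from a single stored example, summarized here as a hypothetical three-number stroke description, and a new drawing is simply assigned to the nearest stored exemplar.

```python
import math

def one_shot_classify(example, exemplars):
    """Assign `example` to the label of the closest stored exemplar.

    A toy stand-in for one-shot classification: each concept is known
    from exactly one example (a feature vector), and a new drawing is
    matched by Euclidean distance. The paper's model instead evaluates
    how probable each concept's generative program makes the drawing.
    """
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(exemplars, key=lambda label: dist(example, exemplars[label]))

# One stored example per character. The feature vectors are made up
# (say: stroke count, total ink length, average curvature).
exemplars = {
    "sanskrit_ka": (3.0, 4.2, 0.9),
    "greek_psi":   (3.0, 5.1, 0.2),
    "tibetan_nga": (2.0, 3.0, 0.7),
}
print(one_shot_classify((3.1, 4.0, 0.8), exemplars))  # prints "sanskrit_ka"
```

With richer, compositional descriptions of strokes in place of these flat vectors, the same single-example matching idea scales to the 1,600-plus characters the researchers tested.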
"We aim to develop an algorithm with the same capability and compare it with people," added Dr. Lake. "This led us to the Bayesian program learning approach introduced in the paper. The key idea is that concepts are represented as simple probabilistic programs: computer code that resembles the work of a programmer, but where the program produces a different output each time it runs."
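A minimal sketch of what such a probabilistic program might look like (the stroke model and all the numbers below are illustrative assumptions, not the paper's): the program's structure is fixed, but every run makes fresh random choices, so every run yields a different token of the same character concept.

```python
import random

def sample_character():
    """Sample a character token as a sequence of pen strokes.

    The program itself encodes the concept: first draw how many strokes
    the character has, then draw each stroke's start point, direction,
    and length. Because each step is a random choice, repeated runs
    produce different handwritten variants of the same character.
    """
    num_strokes = random.choices([1, 2, 3], weights=[0.3, 0.5, 0.2])[0]
    strokes = []
    for _ in range(num_strokes):
        strokes.append({
            "start":  (random.uniform(0, 1), random.uniform(0, 1)),
            "angle":  random.uniform(0, 360),      # direction in degrees
            "length": random.gauss(0.3, 0.05),     # noisy stroke length
        })
    return strokes

# Two runs of the same program: two different tokens of the concept.
print(sample_character())
print(sample_character())
```

Learning, in this view, means inferring which program best explains an observed character, after which the program can be rerun to produce new examples, exactly the reproduce-and-invent tasks described next.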
The team used a visual Turing test to establish how human-like the machine's output was. The authors asked both the computer and human participants either to reproduce characters after seeing a single example or to invent a new character. Human judges then had to decide whether each output was produced by a computer or by a person. Fewer than 25 percent of the judges performed better than random chance at distinguishing human from computer work.
“The algorithm currently works only for handwritten characters, but we believe the broader approach based on probabilistic program induction can lead to progress in speech recognition and object recognition, though it will take more time to get the representations right in these domains,” said Dr. Lake.
“Our work shows the power of studying human learning and the power of probabilistic programs for building smarter and more human-like learning algorithms.”