Artificial Intelligence could be the "worst thing to happen to humanity" if it is not properly managed, Professor Stephen Hawking has warned.
The world famous physicist and cosmologist said that as AI becomes more advanced it could bring dangers such as "powerful autonomous weapons, or new ways for the few to oppress the many".
He was speaking at the launch of The Leverhulme Centre for the Future of Intelligence (CFI) in Cambridge, which will explore the implications - good and bad - of the rapid development of AI.
It will look into applications ranging from increasingly "smart" smartphones to robot surgeons and Terminator-style military droids.
It is not the first time Professor Hawking has warned about the potential dangers, having previously said that AI could end mankind if it is misused.
That is why the new centre is "crucial to the future of our civilisation and of our species", he said.
"I believe there is no deep difference between what can be achieved by a biological brain and what can be achieved by a computer. It therefore follows that computers can, in theory, emulate human intelligence - and exceed it."
Professor Hawking said the potential benefits were great and that AI could "finally eradicate disease and poverty".
"In short, success in creating AI could be the biggest event in the history of our civilisation," he said. "But it could also be the last unless we learn how to avoid the risks."
Alongside the benefits, AI in the future "could develop a will of its own - a will that is in conflict with ours", he added.
"In short, the rise of powerful AI will be either the best, or the worst thing, ever to happen to humanity. We do not know which."
Fears of robots freeing themselves from their creators have inspired a host of films and literature, including 2001: A Space Odyssey.
And as AI becomes more advanced, with robots increasingly able to take on human tasks, it will directly threaten millions of jobs.
CFI director Stephen Cave said the centre's work is about ensuring intelligent artificial systems "have goals aligned with human values" and that computers do not evolve spontaneously in "new, unwelcome directions".
The new centre is a collaboration between the University of Cambridge, the University of Oxford, Imperial College London and the University of California, Berkeley, funded by a £10m grant from the Leverhulme Trust.