Scientists at Cambridge University are to study the possibility of technology destroying human civilisation, in the latest move to address the potential threat of ‘killer robots’.
The Centre for the Study of Existential Risk (CSER) is to launch next year.
The BBC reports that the scientists have warned that dismissing the possibility of a robot uprising “would be dangerous”.
Researchers have said that the idea that machines could become more intelligent than humans needs examination. The centre, which will study the dangers posed by biotechnology, artificial life, nanotechnology and climate change, has been co-founded by Cambridge professors Huw Price and Martin Rees alongside Skype co-founder Jaan Tallinn.
Fear of the future power of robots appears to be growing. HumanIPO recently reported on the launch of the Losing Humanity campaign, supported by Harvard Law School and its International Human Rights Clinic (IHRC), which opposes the use of autonomous robots as substitutes for human combatants in warfare.
“The seriousness of these risks is difficult to assess, but that in itself seems a cause for concern, given how much is at stake,” the researchers wrote on the centre’s site.
Price told AFP of the risk that humans could one day be at the mercy of “machines that are not malicious, but machines whose interests don’t include us”.
“It seems a reasonable prediction that some time in this or the next century intelligence will escape from the constraints of biology. What we’re trying to do is to push it forward in the respectable scientific community.”
The Losing Humanity campaign was launched in a similar vein, concerned about the lack of human judgement involved when robots replace human soldiers and determined to inform governments and officials of the full consequences of their use.
In a 50-page report titled Losing Humanity: The Case against Killer Robots, issued jointly with Human Rights Watch, the IHRC appeals to all states to prohibit the development of such weapons “before it is too late”.
“Giving machines the power to decide who lives and dies on the battlefield would take technology too far,” said Steve Goose, Arms Division director at Human Rights Watch.