Autonomous military robots that will fight future wars must be programmed to live by a strict warrior code or the world risks untold atrocities at their steely hands.
The stark warning – which includes discussion of a Terminator-style scenario in which robots turn on their human masters – is issued in a hefty report funded by and prepared for the US Navy’s high-tech and secretive Office of Naval Research.
The report, the first serious work of its kind on military robot ethics, envisages a fast-approaching era where robots are smart enough to make battlefield decisions that are at present the preserve of humans. Eventually, it notes, robots could come to display significant cognitive advantages over Homo sapiens soldiers.
“There is a common misconception that robots will do only what we have programmed them to do,” Patrick Lin, the chief compiler of the report, said. “Unfortunately, such a belief is sorely outdated, harking back to a time when . . . programs could be written and understood by a single person.” The reality, Dr Lin said, was that modern programs included millions of lines of code and were written by teams of programmers, none of whom knew the entire program. Accordingly, no individual could accurately predict how the various portions of a large program would interact without extensive testing in the field – an option that may be either unavailable or deliberately sidestepped by the designers of fighting robots.
The solution, he suggests, is to mix rules-based programming with a period of “learning” the rights and wrongs of warfare.
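To make that hybrid approach concrete, the sketch below is a minimal, hypothetical illustration – not anything specified in the report – of rules-based programming layered over a learned component: hand-written rules of engagement get the final say over whatever action a trained policy proposes. All of the names (Target, RulesOfEngagement, LearnedPolicy) are invented for the example.

```python
# Hypothetical sketch: a hybrid controller in which hand-written
# rules of engagement override a learned targeting policy.
# Every name here is illustrative, not drawn from the report.

from dataclasses import dataclass


@dataclass
class Target:
    is_combatant: bool
    near_civilians: bool
    confidence: float  # classifier confidence that this is a valid target


class RulesOfEngagement:
    """Hard-coded, auditable constraints checked before any action."""

    MIN_CONFIDENCE = 0.99

    def permits(self, action: str, target: Target) -> bool:
        if action != "engage":
            return True  # non-lethal actions are always permitted
        if not target.is_combatant:
            return False  # never engage non-combatants
        if target.near_civilians:
            return False  # avoid disproportionate collateral harm
        return target.confidence >= self.MIN_CONFIDENCE


class LearnedPolicy:
    """Stand-in for a trained model that proposes actions."""

    def propose(self, target: Target) -> str:
        # A real system would run model inference here; this stub
        # simply proposes engagement for high-confidence targets.
        return "engage" if target.confidence > 0.5 else "hold"


def decide(target: Target, policy: LearnedPolicy, roe: RulesOfEngagement) -> str:
    action = policy.propose(target)
    # The rules layer has the final say: a learned proposal that
    # violates any rule is downgraded to a safe default.
    return action if roe.permits(action, target) else "hold"


if __name__ == "__main__":
    risky = Target(is_combatant=True, near_civilians=True, confidence=0.97)
    print(decide(risky, LearnedPolicy(), RulesOfEngagement()))  # -> hold
```

The design point is that the learned component may adapt in unpredictable ways, but the rules layer stays fixed, inspectable and testable – which is where the “warrior code” would live.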
The report covers a rich variety of scenarios outlining the ethical, legal, social and political issues that will arise as robot technology improves. How do we protect our robot armies against terrorist hackers or software malfunction? Who is to blame if a robot goes berserk in a crowd of civilians – the robot, its programmer or the US president? Should the robots have a “suicide switch” and should they be programmed to preserve their lives?
The report, compiled by the Ethics and Emerging Technology department of California State Polytechnic University and obtained by The Times, strongly warns the US military against complacency and shortcuts as military robot designers engage in a “rush to market” and the pace of advances in artificial intelligence quickens.
Any sense of haste among designers may have been heightened by a US congressional mandate that by 2010 one third of all operational “deep-strike” aircraft must be unmanned, and that by 2015 one third of all ground combat vehicles must be unmanned.
“A rush to market increases the risk for inadequate design or programming. Worse, without a sustained and significant effort to build in ethical controls in autonomous systems . . . there is little hope that the early generations of such systems and robots will be adequate, making mistakes that may cost human lives,” the report noted.
A simple ethical code along the lines of the “Three Laws of Robotics” first set out in 1942 by Isaac Asimov, the science fiction writer, will not be sufficient to ensure the ethical behaviour of autonomous military machines, the report argues.
“We are going to need a code,” Dr Lin said. “These things are military, and they can’t be pacifists, so we have to think in terms of battlefield ethics. We are going to need a warrior code.”
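One way to see why Asimov’s laws fall short here: written down as prioritized checks, the First Law vetoes any action that harms a human, so every combat order fails before the other laws are even consulted. The toy encoding below is purely illustrative and not drawn from the report.

```python
# Toy illustration (not from the report): Asimov's Three Laws as
# prioritized checks. The First Law outranks the others, and it
# forbids harming humans - so a combat order is always vetoed,
# which is why a military robot cannot be Asimov-compliant.

def asimov_verdict(action: dict) -> str | None:
    """Return the highest-priority law the action violates, or None."""
    if action["harms_human"]:
        return "First Law"   # a robot may not injure a human being
    if not action["ordered_by_human"]:
        return "Second Law"  # a robot must obey human orders (simplistic reading)
    if action["self_destructive"]:
        return "Third Law"   # a robot must protect its own existence
    return None


# A lawful, human-ordered strike still harms a human, so it is
# blocked at the highest priority:
strike = {"harms_human": True, "ordered_by_human": True, "self_destructive": False}
print(asimov_verdict(strike))  # -> First Law
```

A warrior code of the kind Dr Lin describes would have to permit harm under battlefield ethics – precisely what Asimov’s hierarchy rules out.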