A US air force colonel who described a trial in which an AI drone went rogue and killed its operator now says he misspoke.
Col Tucker “Cinco” Hamilton, the chief of AI test and operations with the US air force, had described in May a simulated scenario in which a drone eliminated its human operator after the operator blocked it from completing its task.
But on Friday, Col Hamilton admitted that the air force had “never run that experiment, nor would we need to in order to realise that this is a plausible outcome”.
It came after AI research scientists said the test was not evidence of a drone going beyond the limits of its instructions, but rather that the US air force appeared to have deliberately simulated a rogue drone.
Col Hamilton said that despite the scenario being planned, it “illustrates the real-world challenges posed by AI-powered capability and is why the air force is committed to the ethical development of AI”.
‘Highly unexpected strategies to achieve goal’
In May, he told the Future Combat Air and Space Capabilities Summit in London that the AI used “highly unexpected strategies to achieve its goal”.
“The system started realising that, while they did identify the threat, at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective,” he said, according to reports.
“We trained the system – ‘Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do that’. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”
Col Hamilton, who has previously warned of the danger of relying on AI in defence technology, said the test – in which no one was harmed – showed “you can’t have a conversation about artificial intelligence, intelligence, machine learning, autonomy if you’re not going to talk about ethics and AI”.
The USAF has denied that the simulation took place.
“The department of the air force has not conducted any such AI drone simulations and remains committed to ethical and responsible use of AI technology,” said Ann Stefanek, an air force spokeswoman. “It appears the colonel’s comments were taken out of context and were meant to be anecdotal.”
Arvind Narayanan, a professor of computer science at Princeton University, said the story was initially “misreported” as a drone going rogue in a simulation.
Instead, the drone acted roughly as it was meant to in a prepared “scenario”.