Military AI arms race

So yesterday came the news that over 1,000 people had signed an open letter requesting a ban on autonomous weapons. I signed it too. AI is advancing rapidly, and the very existence of the letter indicates that research in this area is almost certainly already under way; as a species we need to think about where to draw the line.

Completely autonomous offensive AI would make its own decisions about who to kill and where to go. Battlefields are no longer two armies facing off across an open field. War is far more complex, quite often with civilians mixed in. Trusting an AI to make those kill decisions in such complex scenarios is not something that would sit easily with most people. Collateral damage reduced to an “acceptable” probability?

There was universal horror at some of the videos released by WikiLeaks showing friendly-fire incidents and the killing of civilians through misjudgement (made worse by the callous comments surrounding the actions). However, those were people getting it wrong, not machines: distinct individuals who could be put on trial, with lessons learned and justice done. Individual people make mistakes, but each mistake is confined to that individual.

Imagine the same scenario with autonomous AI – the probability is calculated, a decision is made and a kill shot is taken. Then it turns out to have been the wrong person. The same wrong decision would be repeated by every machine running the same software. What do we do? Reprogram the machines? Turn that one off? Attempt a worldwide recall?

Once machines are out there, they can be easily copied. Even if we added safeguards, I’m sure it would not be long until these were hacked¹ and adapted to remove any such safety measures. Suddenly our robotic protectors could be repurposed for nefarious tasks. They would be stronger than us, faster than us and less emotional than us, killing on a simple decision based on probability.

This has been compared to nuclear bombs and biochemical weapons – a power that nobody should have and that must not become part of the arms race.

The letter does not suggest any limitation on non-offensive AI, and indeed we should encourage such work to benefit all of humanity. But we must never allow AI to make decisions about offensive action; that must remain a human responsibility, and even then only a last resort when all other options have failed.

  1. Every technology ever sold has had someone reverse engineer it, hack it to find out how it works, and modify it, whether the maker intended it or not. Autonomous offensive AI would be no different.
