Autonomous weapons systems – commonly known as killer robots – may have killed human beings for the first time ever last year, according to a recent United Nations Security Council report on the Libyan civil war. History could well identify this as the starting point of the next major arms race, one that has the potential to be humanity's final one.
The United Nations Convention on Certain Conventional Weapons debated the question of banning autonomous weapons at its once-every-five-years review meeting in Geneva on December 13-17, 2021, but did not reach consensus on a ban. Established in 1983, the convention has been updated regularly to restrict some of the world's cruelest conventional weapons, including land mines, booby traps and incendiary weapons.
Autonomous weapons systems are robots with lethal weapons that can operate independently, selecting and attacking targets without a human weighing in on those decisions. Militaries around the world are investing heavily in autonomous weapons research and development. The United States alone budgeted US$18 billion for autonomous weapons between 2016 and 2020.
Meanwhile, human rights and humanitarian organizations are racing to establish regulations and prohibitions on the development of such weapons. Without such checks, foreign policy experts warn, disruptive autonomous weapons technologies will dangerously destabilize current nuclear strategies, both because they could radically change perceptions of strategic dominance, increasing the risk of preemptive attacks, and because they could be combined with chemical, biological, radiological and nuclear weapons themselves.
As a specialist in human rights with a focus on the weaponization of artificial intelligence, I find that autonomous weapons make the unsteady balances and fragmented safeguards of the nuclear world – for example, the US president's minimally constrained authority to launch a strike – more unsteady and more fragmented. Given the pace of research and development in autonomous weapons, the UN meeting may have been the last chance to head off an arms race.
Lethal errors and black boxes
I see four primary dangers with autonomous weapons. The first is the problem of mistaken identity. When selecting targets, will autonomous weapons be able to differentiate between hostile soldiers and 12-year-olds playing with toy guns? Between civilians fleeing a conflict site and rebels making a tactical retreat?
The problem here is not that machines will make such mistakes and humans will not. It's that the difference between human error and algorithmic error is like the difference between mailing a letter and tweeting. The scale, scope and speed of killer robot systems – ruled by one targeting algorithm, deployed across an entire continent – could make misidentifications by individual humans, like a recent US drone strike in Afghanistan, seem like mere rounding errors by comparison.
Autonomous weapons expert Paul Scharre uses the metaphor of the runaway gun to explain the difference. A runaway gun is a defective machine gun that continues to fire after the trigger is released. The gun keeps firing until its ammunition is depleted because, so to speak, it does not know it is making an error. Runaway guns are extremely dangerous, but fortunately they have human operators who can break the ammunition link or try to point the weapon in a safe direction. Autonomous weapons, by definition, have no such safeguard.
Crucially, weaponized AI does not even have to be defective to produce the runaway gun effect. As multiple studies on algorithmic errors across industries have shown, the very best algorithms – operating as designed – can generate internally correct outcomes that nonetheless spread terrible errors rapidly across populations.
For example, a neural net designed for use in Pittsburgh hospitals identified asthma as a risk reducer in pneumonia cases; image recognition software used by Google identified Black people as gorillas; and a machine-learning tool used by Amazon to rank job candidates systematically assigned negative scores to women.
The problem isn’t just that when AI systems make mistakes, they make mistakes in bulk. It’s that when they make mistakes, their creators often don’t know why they did it and, therefore, how to fix them. black box problem AI makes it almost impossible to imagine the ethically responsible development of autonomous weapons systems.
The proliferation problems
The next two dangers are the problems of low-end and high-end proliferation. Let's start with the low end. The militaries developing autonomous weapons now are proceeding on the assumption that they will be able to contain and control their use. But if the history of weapons technology has taught the world anything, it is this: weapons spread.
Market pressures could result in the creation and widespread sale of what can be thought of as the autonomous weapons equivalent of the Kalashnikov assault rifle: killer robots that are cheap, effective and almost impossible to contain as they circulate around the globe. "Kalashnikov" autonomous weapons could fall into the hands of people outside government control, including international and domestic terrorists.
High-end proliferation is just as bad, however. Nations could compete to develop increasingly devastating versions of autonomous weapons, including ones capable of mounting chemical, biological, radiological and nuclear arms. The moral dangers of escalating weapon lethality would be amplified by escalating weapon use.
High-end autonomous weapons are likely to lead to more frequent wars because they will diminish two of the primary forces that have historically prevented and shortened wars: concern for civilians abroad and concern for one's own soldiers. The weapons are likely to be equipped with expensive ethical governors designed to minimize collateral damage, using what UN Special Rapporteur Agnes Callamard has called the "myth of a surgical strike" to quell moral protests. Autonomous weapons will also reduce both the need for and the risk to one's own soldiers, dramatically altering the cost-benefit analysis that nations undergo when launching and maintaining wars.
Asymmetric wars – that is, wars waged on the soil of nations that lack competing technology – are likely to become more common. Think about the global instability caused by Soviet and US military interventions during the Cold War, from the first proxy wars to the blowback experienced around the world today. Multiply that by every country currently aiming for high-end autonomous weapons.
Undermining the laws of war
Finally, autonomous weapons will undermine humanity's final stopgap against war crimes and atrocities: the international laws of war. These laws, codified in treaties reaching as far back as the 1864 Geneva Convention, are the international thin blue line separating war with honor from massacre. They are premised on the idea that people can be held accountable for their actions even during wartime, that the right to kill other soldiers during combat does not confer the right to murder civilians. A prominent example of someone held to account is Slobodan Milosevic, former president of the Federal Republic of Yugoslavia, who was indicted on charges of crimes against humanity and war crimes by the UN's International Criminal Tribunal for the Former Yugoslavia.
But how can autonomous weapons be held accountable? Who is to blame for a robot that commits war crimes? Who would be put on trial? The weapon? The soldier? The soldier's commanders? The corporation that made the weapon? Nongovernmental organizations and experts in international law worry that autonomous weapons will lead to a serious accountability gap.
To hold a soldier criminally responsible for deploying an autonomous weapon that commits war crimes, prosecutors would need to prove both actus reus and mens rea, the Latin terms describing a guilty act and a guilty mind. This would be difficult as a matter of law, and possibly unjust as a matter of morality, given that autonomous weapons are inherently unpredictable. I believe the distance separating the soldier from the independent decisions made by autonomous weapons in rapidly evolving environments is simply too great.
The legal and moral challenge is not made easier by shifting the blame up the chain of command or back to the site of production. In a world without regulations that mandate meaningful human control of autonomous weapons, there will be war crimes but no war criminals to hold accountable. The structure of the laws of war, along with their deterrent value, would be significantly weakened.
A new global arms race
Imagine a world in which militaries, insurgent groups and international and domestic terrorists can deploy theoretically unlimited lethal force at theoretically zero risk, at times and places of their choosing, with no resulting legal accountability. It is a world where the sort of unavoidable algorithmic errors that plague even tech giants like Amazon and Google can now lead to the elimination of whole cities.
I think the world should not repeat the catastrophic mistakes of the nuclear arms race. It should not sleepwalk into dystopia.
James Dawes, Professor of English, Macalester College
This article is republished from The Conversation under a Creative Commons license. Read the original article.