Future Wars: Are killer robots on the battlefield tomorrow?

Humanity is on the brink of a new era of war.

Driven by rapid advances in artificial intelligence, weapons platforms capable of detecting, targeting and deciding to kill humans on their own – without an officer directing the attack or a soldier pulling the trigger – are rapidly changing the future of conflict.

Officially, they are called Lethal Autonomous Weapon Systems (LAWS), but their detractors call them killer robots. Many countries, including the United States, China, the United Kingdom, India, Iran, Israel, South Korea, Russia and Turkey, have invested heavily in developing such weapons in recent years.

A United Nations report suggests that Turkish-made Kargu-2 drones marked the dawn of this new era when they attacked militants in Libya in 2020 amid the country’s ongoing conflict.

Autonomous drones have also played a vital role in the war in Ukraine, where both Moscow and Kyiv have deployed these unmanned weapons to target enemy soldiers and infrastructure.

The great public debate

The emergence and development of such machines has sparked intense debate among experts, activists and diplomats around the world, who are weighing the potential benefits of these weapons against their potential risks. At the same time, the question of whether to stop them here and now grows ever more urgent, and it remains unanswered.

However, in an increasingly divided geopolitical landscape, can the international community reach any consensus on these machines? Can the moral, legal and technological threats posed by these weapons stop them before they dominate the battlefield? Is a blanket ban feasible, or is a set of regulations the more realistic option? Al Jazeera’s in-depth article tackled these questions, putting them to leading experts in the field and drawing some revealing answers.

A first, short answer is that a complete ban on autonomous weapon systems does not seem likely anytime soon. The world is divided between those who believe strict regulation is needed to effectively nullify the scope of their use, and those won over by the promise of certain victory these weapons appear to offer.

Yasmin Afina, a research associate at the London-based Chatham House think tank, told the House of Commons as recently as March how the US National Security Agency (NSA) at one point mistakenly identified an Al Jazeera journalist as an al-Qaeda operative. The incident, which led to the journalist’s name being placed on a US persons-of-interest list, was revealed in documents leaked in 2013 by NSA contractor Edward Snowden.

The surveillance system behind that incident is not itself lethal, but it can lead to death – in this particular case the journalist’s, and in other circumstances that of many others, Afina argued.

Mass deaths?

The possibility that LAWS could trigger an escalating series of events leading to mass deaths worries Toby Walsh, an artificial intelligence expert at the University of New South Wales in Sydney, Australia. “We know what happens when we pit complex electronic systems against each other in an uncertain and competitive environment. It’s called… the stock market,” he wrote in his submission to the House of Lords.

But that does not mean, he says, that researchers should stop developing the technology behind autonomous weapon systems, as it may deliver significant benefits in other fields.

For example, the same algorithms can be used in automotive safety systems to avoid collisions with pedestrians. “It would be morally wrong to deprive people of the possibility of reducing road accidents,” he continues.

One possibility would be to address them in the same way chemical weapons have been contained, namely through the Chemical Weapons Convention, which prohibits their development, production, stockpiling and use. “We can’t put Pandora back in the box, but these measures have generally been successful in limiting misuse around the world,” he concludes.

The benefits of Artificial Intelligence

In essence, autonomous weapon systems driven by artificial intelligence offer significant advantages from a military point of view. For example, they can carry out certain missions without soldiers, thus reducing the risk of casualties.

Proponents point out that these systems can also reduce human error and bias in decision-making, while their precision can, at least in theory, limit collateral losses.

For some other experts, however, the dangers of LAWS outweigh the benefits of using them, as potential technical malfunctions, violations of international law, and ethical concerns about machines making life-and-death decisions cannot be ignored.

And who is responsible?

At the center of it all is the burning question of who bears the blame when something goes wrong – that is, who will be held accountable if these systems fail with catastrophic results.

For example, if the robots commit a war crime, who will be called to account: the commanding officer overseeing the operation, or the superior who decided to deploy the machines in the first place? Or should the one who built them sit in the dock?

All this “represents a big gap in the policy debate on the issue”, add researchers Vincent Boulanin and Marta Bo of the Stockholm International Peace Research Institute (SIPRI).

For them, what is needed is “to specify exactly which weapon or which scenario may prove problematic”.

Simply put, two different sets of rules would be drawn up: some weapons would be banned outright, while others would be permitted as long as they met specific standards of use.

Human dignity

“The million-dollar question is deciding which systems fall into which basket.”

In any case, for Walsh – setting aside the question of whether regulations should be political or legal, and the lack of trust between states – the most fundamental problem of all is that critical battlefield decisions will be made by machines devoid of human empathy.

“It’s disrespectful of human dignity,” he concludes.
