The day is coming when AI will "control" drones capable of killing
One day in August 2020, dozens of military drones and tank-like robots gathered 40 miles south of Seattle. Their mission: to make sure no terrorists were hiding among several buildings.
So many machines, robots included, were mobilized that there were not enough human operators to keep an eye on everything. The robots were therefore programmed not only to find enemy fighters but also, if necessary, to kill them.
The exercise was led by the US Defense Advanced Research Projects Agency (DARPA), which researches and develops military technology under the umbrella of the US Department of Defense. None of the robots in the exercise carried lethal weapons; instead, they were fitted with radio transceivers that sent and received signals simulating engagements with the enemy.
These experimental operations were carried out several times over the summer of 2020. Their purpose was to explore how artificial intelligence (AI) can be used to automate military systems, with an eye to situations so complex and fast-moving that humans cannot make every necessary decision.
It is becoming clear that computers outperform humans at analyzing complex situations and responding quickly. The Pentagon's thinking about autonomous weapons has shifted accordingly, and the series of DARPA demonstrations reflects that change.
General John Murray of US Army Futures Command, which oversees the modernization of the US Army, raised the issue of the growing deployment of robots in military operations in an April speech at the US Military Academy. The time has come, he argued, for operational leaders, policymakers, and society to decide whether it should always be a human who makes the decision to kill an enemy in an automated system.
Murray then asked: "Do humans have the ability to determine where they should intervene?" Can a human make 100 decisions in an instant? "Is it even necessary to have a human in the loop in the first place?" he asked.
The debate over automation pervading the US military
Remarks by some senior US military officials suggest a growing willingness to give machines more authority over autonomous weapons.
At a US Air Force conference on AI in early May, Michael Kanaan of the Massachusetts Institute of Technology (MIT) observed that thinking about AI is evolving. Kanaan oversees the Air Force's AI collaboration with MIT and is an influential voice within the military on the use of AI.
Kanaan has long argued that AI should take on tasks such as identifying targets, while humans remain responsible for higher-level decisions. "That is the direction we should be moving in," he says.
At the same conference, Clinton Hinote, an Air Force lieutenant general in charge of strategy, said that whether human judgment can be removed from lethal autonomous weapons systems (LAWS) is "one of the most interesting debates ahead, and one that remains unanswered."

In a report submitted in May, the National Security Commission on Artificial Intelligence (NSCAI), a US government advisory body, concluded that the United States need not join the global movement to ban the development of autonomous weapons.
Timothy Chan, the DARPA program manager in charge of AI-powered weapons, explains that last summer's series of experimental exercises was designed to determine when a human operator controlling a drone should, and should not, make decisions for an autonomous system.