Urgent warnings about the danger of autonomous weapons systems have come from two very different segments of American life this week.
On July 18, the second-highest-ranking general in the U.S. military testified at a Senate hearing that
the use of robots during warfare could endanger human lives—echoing concerns
brought up by inventor Elon Musk the previous weekend.
Gen. Paul Selva spoke about automation at his
confirmation hearing before the Senate Armed Services Committee, saying that
the "ethical rules of war" should be kept in place even as artificial
intelligence (AI) and drone technology advances, "lest we unleash on humanity a set of robots that we don't know
how to control."
The Defense Department
currently mandates that a human must control all actions taken by a drone.
But at the hearing,
Sen. Gary Peters (D-Mich.) suggested that by enforcing that requirement, which
is set to expire this year, the U.S. could fall behind other countries,
including Russia.
Peters cited recent
reports of Russia's "ambition to employ AI-directed weapons equipped with
a neural network capable of identifying and engaging targets," and to sell
those weapons to other countries.
"Our adversaries often do not to consider
the same moral and ethical issues that we consider each and every day," Peters said.
"I don't think it's reasonable for us to
put robots in charge of whether or not we take a human life," Selva told the committee.
In an open letter in
2015, Tesla and SpaceX CEO Elon Musk joined with scientist Stephen Hawking
to warn against competing with other
countries to develop AI for military purposes.
"Starting a military AI arms race is a
bad idea, and should be prevented by a ban on offensive autonomous weapons
beyond meaningful human control," the letter said.
Musk has
previously called the development of robots that can
make their own decisions "summoning the demon." Days before Gen.
Selva's hearing, Musk spoke at the National Governors
Association about the potential for an uncontrollable contingent of robots in
the future.
The inventor
acknowledged the risks AI poses for American workers, but added that the
concerns go beyond employment. "AI
is a fundamental existential risk for human civilization, and I don't think
people fully appreciate that," Musk said.
He urged governors
throughout the U.S. to start thinking seriously now about how to regulate
robotics—before AI becomes an issue that's out of humans' control.
"Until people see robots going down the
street killing people, they don't know how to react because it seems so
ethereal. AI is a rare case where I think we need to be proactive in regulation
instead of reactive. Because I think by the time we are reactive in AI
regulation, it's too late," warned Musk.