New research warns AI needs to be better understood and managed
By LANCASTER UNIVERSITY
Artificial Intelligence (AI) and algorithms can be, and are currently being, used to exacerbate radicalization, deepen polarization, and spread racism and political instability, according to an academic from Lancaster University.
Joe Burton, a professor of International Security at Lancaster University, contends that AI and algorithms are more than mere tools deployed by national security agencies to thwart malicious online activity. They can also, he argues, fuel polarization, radicalism, and political violence, becoming a threat to national security in their own right.
Further to this, he says, securitization processes (presenting the technology as an existential threat) have been instrumental in how AI has been designed and used, and in the harmful outcomes it has generated.
AI in Securitization and Its Societal Impact
Professor Burton’s paper was recently published in Elsevier’s high-impact journal Technology in Society.
“AI is often framed as a tool to be used to counter violent extremism,” says Professor Burton. “Here is the other side of the debate.”
The paper examines how AI has been securitized throughout its history and in media and popular culture depictions, and explores modern examples of AI producing polarizing, radicalizing effects that have contributed to political violence.
AI in Warfare and Cyber Security
The article cites the classic film series The Terminator, which depicted a holocaust committed by a ‘sophisticated and malignant’ artificial intelligence, as doing more than anything to frame popular awareness of AI and the fear that machine consciousness could have devastating consequences for humanity: in this case, a nuclear war and a deliberate attempt to exterminate a species.
“This lack of trust in machines, the fears associated with them, and their association with biological, nuclear, and genetic threats to humankind has contributed to a desire on the part of governments and national security agencies to influence the development of the technology, to mitigate risk and (in some cases) to harness its positive potentiality,” writes Professor Burton.
Sophisticated drones, such as those being used in the war in Ukraine, are, says Professor Burton, now capable of full autonomy, including functions such as target identification and recognition.
And, while there has been a broad and influential campaign, including debate at the UN, to ban ‘killer robots’ and to keep humans in the loop in life-or-death decision-making, the acceleration of AI and its integration into armed drones has, he says, continued apace.
In cyber security (the security of computers and computer networks), AI is being used in a major way, with the most prevalent area being (dis)information and online psychological warfare.
The Putin government’s actions against US electoral processes in 2016 and the ensuing Cambridge Analytica scandal showed the potential for AI to be combined with big data (including social media) to create political effects centered on polarization, the encouragement of radical beliefs, and the manipulation of identity groups. It demonstrated the power and potential of AI to divide societies.
AI’s Societal Impact During the Pandemic
And during the pandemic, AI was seen as a positive force in tracking and tracing the virus, but it also raised concerns over privacy and human rights.
The article examines AI technology itself, arguing that problems exist in the design of AI, in the data it relies on, in how it is used, and in its outcomes and impacts.
The paper concludes with a strong message to researchers working in cyber security and International Relations.
“AI is certainly capable of transforming societies in positive ways but also presents risks which need to be better understood and managed,” writes Professor Burton, an expert in cyber conflict and emerging technologies who is part of the University’s Security and Protection Science initiative.
“Understanding the divisive effects of the technology at all stages of its development and use is clearly vital.
“Scholars working in cyber security and International Relations have an opportunity to build these factors into the emerging AI research agenda and avoid treating AI as a politically neutral technology.
“In other words, the security of AI systems, and how they are used in international, geopolitical struggles, should not override concerns about their social effects.”
Reference: “Algorithmic extremism? The securitization of artificial intelligence (AI) and its impact on radicalism, polarization and political violence” by Joe Burton, 14 September 2023, Technology in Society. DOI: 10.1016/j.techsoc.2023.102262