Artificial intelligence (AI) and algorithms can be, and are being, used to fuel radicalization, deepen polarization, and spread racism and political instability, according to an academic at Lancaster University. Joe Burton, a professor of international security, argues that AI and algorithms are not merely tools deployed by national security agencies to thwart malicious online activity. They can also drive polarization, radicalization and political violence, and in doing so become a threat to national security in their own right.

Furthermore, he argues, the process of securitization (presenting a technology as an existential threat) has played a significant role in how AI has been designed and used, and in the harmful outcomes it has produced.

Professor Burton's paper was recently published in Elsevier's influential journal Technology in Society.

"Artificial intelligence is often thought of as a tool to counter violent extremism, and this is the other side of the debate," he said.

The paper examines how AI has been securitized throughout its history, and in media and popular-culture depictions, and draws on modern examples of AI producing polarizing and radicalizing effects that have contributed to political violence.

The article cites the classic film series The Terminator, which depicted a holocaust carried out by a "sophisticated and malignant" artificial intelligence. More than any other film, the series raised public awareness of AI and of the fear that machine consciousness could have devastating consequences for humanity, in this case nuclear war and a deliberate attempt to exterminate a species.

Professor Burton writes: "This lack of trust in machines, the fears associated with them, and their association with biological, nuclear and genetic threats to humankind has contributed to a desire on the part of governments and national security agencies to influence the development of the technology, to mitigate risk and (in some cases) to harness its positive potential."

Professor Burton said advanced drones, such as those being used in the war in Ukraine, are now capable of full autonomy, including functions such as target identification and recognition. And while there has been a broad and influential campaign, including at the United Nations, to ban "killer robots" and keep humans in the loop on life-and-death decisions, the acceleration and integration of armed drones has continued.

In cybersecurity (the security of computers and computer networks), AI is being deployed at scale, with the most prevalent areas being information operations and online psychological warfare.

The Putin government's interference in the 2016 U.S. electoral process and the ensuing Cambridge Analytica scandal showed the potential for AI to be combined with big data, including social media, to create political effects centered on polarization, the encouragement of radical beliefs, and the manipulation of identity groups. It demonstrated the power and potential of AI to divide societies.

During the pandemic, AI was seen as a positive tool for tracking the virus, but it also raised concerns over privacy and human rights.

The article examines AI technology itself, arguing that problems exist in its design, in the data it relies on, in how it is used, and in its outcomes and impacts. It closes with a strong message for researchers working in cybersecurity and international relations.

Professor Burton, an expert in cyber conflict and emerging technologies and a member of the university's Security and Protection Science initiative, writes: "AI is certainly capable of transforming societies in positive ways, but it also presents risks which need to be better understood and managed. Understanding the divisive effects of the technology at all stages of its development and use is clearly vital. Scholars working in cybersecurity and international relations have an opportunity to build these factors into the emerging AI research agenda and to avoid treating AI as a politically neutral technology. In other words, the security of AI systems, and how they are used in international geopolitical struggle, should not override concerns about their social effects."

Source: SciTechDaily