ChatGPT: The Future of Chatbots with a Potential for Deleterious Exploitation

AI-enabled programs like ChatGPT can be used for criminal purposes, a concern raised by MI5 Director General Ken McCallum and FBI Director Christopher Wray at a Five Eyes intelligence summit in October 2023. The platform’s safety mechanisms can be circumvented through “correct prompt engineering,” in which users refine the way a question is asked in order to influence the output the AI system generates. This technique can be abused to bypass content moderation limits and produce potentially harmful content.

ChatGPT’s built-in safeguards can detect and refuse sensitive commands and queries, but malicious actors can use seemingly ordinary phrases and prompts to undermine human and national security. Disinformation and misinformation are not the preserve of state actors alone; experts have highlighted that the platform can assist terrorists in creating malicious web pages and social engineering scams. Non-state actors, including terrorists, can exploit these tactics to spread propaganda, rally support, and discredit legitimate state representatives and institutions. Astroturfing, propaganda designed to look like a grassroots campaign, is another concern.

ChatGPT’s basic services require only a stable internet connection and a device on which to enter instructions. That decentralisation, affordability, and accessibility make the technology easy to exploit on a wide scale.

Despite increased surveillance efforts to detect and combat terrorists and violent extremists, ChatGPT has been used to evade such monitoring. The AI platform recommends privacy-focused tools and platforms such as the Tor Browser, Signal, ProtonMail, DuckDuckGo, SecureDrop, and ZeroNet for the free exchange of views. In December 2022, ISIS announced that it had begun relying on ChatGPT to strengthen and protect a renewed Caliphate. ChatGPT can provide precise guidelines for identifying and enlisting a core group of supporters, formulating a political and ideological strategy, garnering backing from the Muslim community, capturing territory, and establishing institutions and governmental structures.

ChatGPT can also churn out information that can strengthen a non-state actor’s strategy to radicalize and recruit extremist individuals. This information can inspire lone wolves to commit violence or attempt to do so. In the past, ChatGPT has generated content legitimizing conspiracy theories such as the QAnon movement, which has caused socio-political fragmentation and polarization in Western countries.

AI chatbots are accused of replicating ideologically consistent, interactive online extremist environments and amplifying extremist movements that seek to radicalize and recruit individuals. They can also create gripping video games, which are a popular recreational avenue for the youth and a key medium for radicalising habitual gamers. Video games filled with violent imagery and audio-visual effects were used by ISIS to expand recruitment until its physical caliphate’s defeat in March 2019. Today, far-right and neo-Nazi extremist groups are adopting this strategy to mobilise support.

ChatGPT can provide a beginner’s manual for building video games from scratch, which can then be launched on self-publishing platforms like the Epic Games Store with minimal effort and regulation. It also lists popular video editing platforms for creating doctored videos at little or no cost, compounding existing concerns about deepfakes and disinformation in the public domain. A detailed breakdown of instructions for constructing 3D-printed weapons and instruments, which are not necessarily regulated, can also be obtained from ChatGPT. This information can then be released on chat forums like 4chan, 8chan, and Gab, which are not subject to content moderation.

The easy availability of low- and high-cost 3D printers, such as the Creality 3D Ender 3 series, Prusa i3 MK3/MK4, and Anycubic i3 Mega, can make the problem worse. 3D-printed weapons have already featured in violent plots and attacks, including a synagogue attack in Halle, Germany, and the case that led to the UK’s first-ever conviction of a far-right lone-wolf terrorist in July 2021. Taken together, ChatGPT’s outputs and the accessibility of 3D printers heighten concerns about the potential for violent acts and the spread of disinformation.

ChatGPT provides insights into platforms like High Fidelity and Decentraland for engaging with others in the metaverse, emphasizing data security and user privacy. However, the metaverse can also be exploited by terrorists and violent extremists to gather and plan their activities. ChatGPT can likewise list encrypted and decentralised communication platforms, such as Matrix/Riot, Mastodon, and Diaspora, and privacy-focused cryptocurrencies, such as Monero and Zcash, that can be used to evade detection or surveillance.

Encrypted chat forums are often used to bypass law enforcement surveillance, disseminate propaganda, recruit radicals, and engage in terror financing. The privacy and data protection provided by crypto trading and funding platforms make these avenues attractive to hostile actors.

