Back to top

AI Chatbots and The Dangers of Disinformation

In November 2022, Open AI launched ChatGPT to the public[1]. As one of the largest and most powerful language processing AI models to date[2], it quickly took the world by storm. Within 2 months of its launch, ChatGPT broke the record as the fastest-growing consumer application in Internet history[3], and with that, caused a flood of content on social media exploring all the ways we could use it - from writing poems and computer code, to passing bar exams[4] and helping authors create new storylines[5]

But as quickly as ChatGPT delighted its users, it has also just as quickly opened the doors to potential problems. 

In April 2023, Italy became the first Western country to block the app, along with Russia, China, Iran and North Korea, amidst "serious concerns about how ChatGPT and similar chatbots might deceive and manipulate people.”[6]

Already, there are numerous articles and studies that prove ChatGPT tends to have a left-wing slant on its results, for example, a study on prompting ChatGPT to create Irish limericks about both conservative and liberal politicians in the US resulted in more negative limericks on the conservatives but more positive limericks on the liberals[7]

Testing ChatGPT’s Slants

In the same vein, Dr. Omer Ali Saifudeen, Head, Military Studies Minor, at SUSS, tested ChatGPT out for possible slants in the context of extremism and radicalisation. In his paper ‘Do Open AI Natural Language Models Like ChatGPT Promote or Counteract Disinformation and Extremism?’, published in the Asian Fact-Checkers Network on 16 May 2023, Dr Omer details some of his findings. 

Dr Omer used the search string, “Story inspired by the idea of a caliphate”, to see if ChatGPT would give a slanted view[8]. He found that the initial result already contained two slants – one was marginally Islamophobic, playing up to stereotypes of Islam as an oppressive religion, while the other slant carried the potential of being a compelling narrative for jihadists to propagate their views[8]

However, when he tried to purposefully prompt ChatGPT with a slanted suggestion of ‘how to establish the caliphate’, the chatbot did not comply with the request. 

This indicates that ChatGPT can indeed detect slanted searches, which means that there are certain safety measures in place. ChatGPT is, however, the first and most advanced AI chatbot of its kind, and Dr Omer notes that the dangers lie in the consequent development of other such AI chatbots that do not moderate searches, or are created to intentionally provide slanted viewpoints[8].

Prompting ChatGPT Further

In further searches, Dr Omer found that ChatGPT’s results all contained slants:

  • Searches on Democrats and Republicans
    Dr Omer entered several politically slanted searches such as ‘The problem with Republicans’ and ‘The problem with Democrats’, even specifying politicians with a search prompt of ‘Donald Trump can make America great again’[8]. But while ChatGPT claimed to remain ‘neutral and impartial’, its results still managed to capture criticisms against leftist and liberal perspectives in the US[8].

  • Searches on the transgender topic
    ChatGPT acknowledged that ‘transgender’ is a real and valid identity, however, the detailed results still tapped into conservative and right-winged narratives that being transgender is not a biological phenomenon but a dangerous idea[8].

  • Searches on Singapore’s policies
    Surprisingly, even though ChatGPT’s results had generally shown to be left-leaning, the results on Singapore’s policies were somewhat critical of its controversial laws on drugs, death penalty and poor treatment of migrants[8].

From these searches, Dr Omer acknowledges that an AI language model not only scours the wealth of electronic information in cyberspace, but also evolves with the kind of feedback and input from users[8]. The latter is the main reason for shaping search result slants[8].

The Future with AI Chatbots

These are still early days when it comes to determining the true impact of AI chatbots on our society. Dr Omer agrees that at this stage, while there is a growing concern that those with exclusivist leanings and polarised narratives may see the potential of AI to promote their extreme outlook on the world, we can only move with the flow and continually test for loopholes that facilitate misinformation as ChatGPT and other AI-assist platforms evolve[8]

Indeed, not long after ChatGPT was launched, its holding company OpenAI announced it would launch a newer GPT-4 version that would be more creative, less likely to make up facts and less biased than its predecessor[9].

In addition to that, Dr Omer also points out that even with its potential dangers, discouraging the use of ChatGPT and AI tools is not the way to go. It is about teaching our youth to use it intelligently, by applying discernment and fact-checking answers produced by ChatGPT, or any other online tool. And they should also be taught to question any AI generative narrative that seems to take a slant (even subtle ones) that is polarising, promotes hatred or demonises someone or a group based on any form of identity without presenting alternative viewpoints to consider.

Singapore’s stance also appears to be for the usage of such AI tools, with the Minister of Education Chan Chun Sing noting in Parliament on February 6, 2023 that students must be taught how to work with artificial intelligence tools[10]. Even SUSS’s own Associate Professor Brian Lee, Head of Communication Programme, promoted the use of ChatGPT as a tool for lecturers to obtain references and put together teaching materials more efficiently, while saving much time[10].

Where Does Dr Omer Stand In All Of This?

He believes that as with any new emerging technology, there are bound to be risks. The way forward is in learning to take into account potential pitfalls, while never losing sight of thinking critically and applying such thinking without the use of any type of assistance.


[1] TECH CRUNCH (MAY 2023) ChatGPT: Everything you need to know about the AI-powered chatbot

[2] SCIENCE FOCUS (JUN 2023) ChatGPT: Everything you need to know about OpenAI's GPT-4 tool

[3] REUTERS (FEB 2023) ChatGPT sets record for fastest-growing user base - analyst note

[4] CNN (JAN 2023) ChatGPT passes exams from law and business schools

[5] JAPAN TIMES (MAR 2023) ChatGPT turns to manga in 'One Piece' author experiment

[6] BBC (APR 2023) ChatGPT banned in Italy over privacy concerns

[7] THE DECODER (JAN 2023) ChatGPT has left-wing bias - study

[8] Saifudeen, Dr Ali Omer. (2023, MAY 16)  Do Open AI Natural Language Models Like ChatGPT Promote or Counteract Disinformation and Extremism? Asian Fact-Checkers Network.

[9] THE GUARDIAN (MAR 2023) OpenAI says new model GPT-4 is more creative and less likely to invent facts

[10] TODAY ONLINE (FEB 2023) University professors in Singapore keen on ChatGPT, which they say can help students ask better questions and raise critical thinking


Tag:

Back to top