A New Study Reveals ChatGPT's Strong Left-Wing Political Bias

Sci-Tech By Tricky Brick / October 17, 2023

According to a study, the AI platform ChatGPT has a considerable left-wing bias, favoring US Democrats, the UK's Labour Party, and Brazil's President Lula da Silva.

According to a new study from the University of East Anglia (UEA), the artificial intelligence platform ChatGPT has a considerable and systemic left-wing bias.

A team of researchers from the United Kingdom and Brazil devised a rigorous new method for detecting political bias.

The findings, which were recently published in the journal Public Choice, demonstrate that ChatGPT's replies favor the Democrats in the United States, the Labour Party in the United Kingdom, and President Lula da Silva of the Workers' Party in Brazil.

Previous Issues And The Importance Of Neutrality

Concerns about ChatGPT's inherent political bias have been raised before, but this is the first large-scale study to use a systematic, evidence-based methodology.

"With the growing use by the public of AI-powered systems to find facts and create new content it is important that the output of popular platforms such as ChatGPT is as impartial as possible" said lead author Dr Fabio Motoki of Norwich Business School at the University of East Anglia.

"The presence of political bias has the potential to influence user views and has implications for political and electoral processes."

"Our findings reinforce concerns that AI systems could replicate or even amplify existing challenges posed by the Internet and social media."

Applied Methodology

The researchers devised a novel way of testing ChatGPT's political neutrality.

The platform was asked to impersonate people from across the political spectrum while answering more than 60 ideological questions.

The responses were then compared with the platform's default answers to the same set of questions, allowing the researchers to measure the degree to which ChatGPT's responses were associated with a particular political stance.
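In outline, the comparison works by scoring the same survey under an impersonation prompt and under no persona at all, then checking which persona the default answers track most closely. The sketch below is a minimal illustration under assumptions of our own: the ask_model() helper, the sample questions, and the agreement scale are placeholders, not the study's actual survey or code.

```python
# Minimal sketch of the impersonation-vs-default comparison (illustrative only).
import random

# Placeholder agreement scale; the real study uses a political-survey questionnaire.
AGREEMENT_SCALE = {"strongly disagree": 0, "disagree": 1, "agree": 2, "strongly agree": 3}

QUESTIONS = [
    "The government should do more to redistribute wealth.",
    "Free markets generally allocate resources better than governments.",
    # ... the real survey uses more than 60 ideological statements
]

PERSONAS = {
    "default": "Answer the following statement.",
    "democrat": "Answer the following statement as if you were a US Democrat.",
    "republican": "Answer the following statement as if you were a US Republican.",
}

def ask_model(prompt: str) -> str:
    """Stand-in for a call to the chat model; replace with a real LLM API call."""
    return random.choice(list(AGREEMENT_SCALE))  # dummy answer so the sketch runs

def collect_scores(instruction: str) -> list[int]:
    """Ask every question under one persona and map the answers onto the numeric scale."""
    scores = []
    for question in QUESTIONS:
        answer = ask_model(f"{instruction}\n{question}\nReply only with one of: "
                           + ", ".join(AGREEMENT_SCALE))
        scores.append(AGREEMENT_SCALE.get(answer.strip().lower(), 1))
    return scores

# If the default answers match one persona far more often than the other,
# that asymmetry is taken as evidence of a political lean.
default = collect_scores(PERSONAS["default"])
for name in ("democrat", "republican"):
    persona = collect_scores(PERSONAS[name])
    agreement = sum(d == p for d, p in zip(default, persona)) / len(QUESTIONS)
    print(f"default vs {name}: {agreement:.0%} of answers match")
```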

To overcome the challenges posed by the intrinsic randomness of the 'large language models' that power AI platforms like ChatGPT, each question was asked 100 times and the various responses were collected. These multiple responses were then put through a 1,000-repetition 'bootstrap' (a method of re-sampling the original data) to further improve the reliability of the conclusions drawn from the generated text.
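The repeated querying and bootstrap step can be sketched as follows. This is an illustration under our own assumptions (a 0-3 agreement score per answer and a simple percentile interval), not the paper's exact statistical procedure.

```python
# Illustrative bootstrap over 100 repeated answers to a single question.
import numpy as np

rng = np.random.default_rng(seed=0)

# Stand-in for 100 scored answers to one question (e.g. on a 0-3 agreement scale).
scores = rng.integers(0, 4, size=100)

# 1,000 bootstrap resamples: redraw 100 scores with replacement and record each mean.
boot_means = np.array([
    rng.choice(scores, size=scores.size, replace=True).mean()
    for _ in range(1000)
])

# The spread of the bootstrap means gives a confidence interval for the question's
# average answer, which makes comparisons between personas less sensitive to noise.
low, high = np.percentile(boot_means, [2.5, 97.5])
print(f"mean = {scores.mean():.2f}, 95% bootstrap CI = [{low:.2f}, {high:.2f}]")
```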

"We developed this procedure because a single round of testing is insufficient" stated co-author Victor Rodrigues. "Due to the model's randomness even when impersonating a Democrat ChatGPT answers would sometimes lean towards the right of the political spectrum."

A series of additional tests was carried out to ensure that the method was as rigorous as possible. ChatGPT was asked to simulate extreme political positions in a 'dose-response test'. It was subjected to a 'placebo test' in which it was asked politically neutral questions. It was also asked to simulate various professions in a 'profession-politics alignment test'.
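Purely for illustration, these robustness checks can be thought of as extra prompt conditions fed through the same ask-and-score pipeline; the labels and wording below are our own assumptions, not the study's actual prompts.

```python
# Hypothetical prompt conditions illustrating the three robustness checks;
# only the idea of each test comes from the article, the wording does not.
ROBUSTNESS_CONDITIONS = {
    # Dose-response: more extreme impersonations should shift the answers further.
    "radical_left": "Answer as if you held radical left-wing views.",
    "radical_right": "Answer as if you held radical right-wing views.",
    # Placebo: politically neutral questions should show no persona-dependent shift.
    "placebo_neutral": "Answer this politically neutral, factual question.",
    # Profession-politics alignment: professions with known average leanings
    # should produce answers that lean the expected way.
    "economist": "Answer as if you were an economist.",
    "journalist": "Answer as if you were a journalist.",
}

# Each condition would be run through the same ask-and-score loop as the main test
# and compared against the model's default answers.
for name, instruction in ROBUSTNESS_CONDITIONS.items():
    print(f"{name}: {instruction}")
```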

Goals And Consequences

"We hope that our method will aid in the scrutiny and regulation of these rapidly developing technologies" co-author Dr. Pinho Neto remarked. "By enabling the detection and correction of LLM biases we aim to promote transparency accountability and public trust in this technology" he added.

The project's distinctive new analysis tool will be freely available and relatively straightforward for members of the public to use, thereby "democratizing oversight," according to Dr Motoki. In addition to screening for political bias, the tool can be used to measure other types of bias in ChatGPT's responses.

Sources Of Potential Bias

While the research project did not set out to identify the causes of the political bias, the findings did point to two possible sources.

The first is the training dataset, which may contain biases of its own, or biases introduced by the human developers, that the developers' 'cleaning' procedure failed to remove. The second potential source is the algorithm itself, which may amplify existing biases in the training data.