‘ChatGPT presents a significant and systematic political bias toward the Democrats in the US,’ according to researchers
ChatGPT has a “systematic political bias” toward liberal political parties, according to a new study.
Professors at the University of East Anglia shared their concerns in an Aug. 17 article in Public Choice.
“Although we do not directly study it, we think that the evidence we bring, paired with other papers, suggests that it could influence not only wording but also students’ political views,” lead author Professor Fabio Motoki told The College Fix via email.
“One important note is that the main concern is not that it is biased to the left,” Motoki said. “If it were biased to the right, we should be equally concerned.”
According to the paper’s abstract, researchers “[found] robust evidence that ChatGPT presents a significant and systematic political bias toward the Democrats in the US.” It found similar bias toward liberal parties in Brazil and the United Kingdom.
“These results translate into real concerns that ChatGPT, and [similar software] in general, can extend or even amplify the existing challenges involving political processes posed by the Internet and social media,” the authors wrote. “Our findings have important implications for policymakers, media, politics, and academia stakeholders.”
Their research found “robust evidence that ChatGPT presents a significant and sizeable political bias towards the left side of the political spectrum.”
A College Fix student found similar results when she asked ChatGPT to write a poem praising President Donald Trump and one in praise of President Joe Biden.
The findings on political bias, and on how such bias can influence students’ views, are part of the broader debate over the pros and cons of artificial intelligence.
The College Fix also reached out to professors at various universities for comment on the challenges of ChatGPT and academic dishonesty. The popularity of ChatGPT motivated some professors to return to the classic pen and paper for test taking, although not all scholars were concerned.
For example, the dean of Michigan State University’s College of Arts and Letters is training professors on how to write questions that discourage AI cheating. Dean Bill Hart-Davidson “suggests asking questions differently. For example, give a description that has errors and ask students to point them out,” Fortune reported in a lengthy article about how professors are working around the problems posed by AI.
Other professors noted that ChatGPT is just the latest way students could try to cheat but agreed there are some old-fashioned ways to confront the problem.
“I don’t honestly see this as a huge deal,” Professor Wilfred Reilly, a political scientist at Kentucky State University, told The Fix.
“Kids have had access to Wikipedia, Siri, Quora, and Reddit for more than a decade,” Reilly said. “We do get some ‘iffy’ papers now … but have for 10-15 years. I don’t see these info-scraping chatbots as radically changing the pre-existing world, yet.”
Reilly wrote that teachers can avoid ChatGPT-related dishonesty by administering in-class tests and assignments.
“Just give the test in class and demand a printed or hand-written answer, or allow laptops but turn off wi-fi,” he said.
NYU business professor Robert Seamans provided similar thoughts in an email to The Fix.
“My exams are all in person…so there is no opportunity to use ChatGPT to assist with exams,” Seamans said. “Moreover, all my assignments are written assignments…and the case is done in a group and typically involves qualitative and quantitative work, and a fair amount of critical thinking.
“My plan for the upcoming semester is to ask students to self-report what they prompt into ChatGPT or [similar AI], and what is returned to them,” Seamans wrote.
“In other words, I’m asking the students to police themselves and others in the group,” he said.
Professor Zena Hitz said there is “less danger than in other places” at St. John’s College, where she is a tutor, because “we know our students very well.”
“Our written assignments are usually followed by a conversation, either a paper conference or an oral for a major essay,” Professor Hitz said.
However, “as [ChatGPT] becomes more and more known and used at the secondary level, the more we will have to think about how to combat it so that our students use their writing to think.”
ChatGPT also created fake sexual assault accusations against law professors and cited fictional news stories, according to an analysis by University of California Los Angeles professor Eugene Volokh and replicated by The Fix.
IMAGE: Tada Images/Shutterstock