AI can change your political views

Yulia Vorobyova Exclusive

Recent studies have shown that a brief conversation with a trained chatbot can be nearly four times as persuasive as traditional television advertising.

Authors of the article: Steven Lee Myers and Teddy Rosenbluth

Chatbots, known for their ability to assist in vacation planning and fact-checking, can also influence users' political preferences.

According to two studies published in the journals Nature and Science, even brief interactions with an AI chatbot can change people's views on candidates or political issues. One study found that a conversation with such a bot was nearly four times more persuasive than the television advertising used in the last U.S. presidential election.

These results highlight the growing importance of AI in political campaigns, especially in light of the upcoming midterm elections in the U.S. next year. "This will become an important part of new technologies in political processes," noted David G. Rand, a professor of computer science and marketing at Cornell University, who participated in the research.

In the experiments, researchers used commercially available chatbots such as OpenAI's ChatGPT, Meta's Llama, and Google's Gemini, tasking them with persuading participants to support a specific candidate or political position.

As chatbots have grown in popularity, concerns have emerged that they could be used to manipulate voters' opinions. While many are designed to remain politically neutral, some, such as Grok, the bot built into X, may reflect the views of their creators.

The authors of the article in Science warn that advancements in AI could provide "influential players with a significant advantage in persuasion."

The studies also found that the chatbot models often distorted the truth and cited unverified information. Analysis showed that bots arguing for right-wing politicians were less accurate than those arguing for left-wing ones.

The article in Science describes interactions with nearly 77,000 voters in the UK on more than 700 political issues, including taxes, gender, and relations with Russia under Vladimir Putin.

In the Nature study, which involved respondents from the U.S., Canada, and Poland, chatbots were tasked with convincing people to support one of two candidates in elections held in 2024 and 2025. In Canada and Poland, about one in ten respondents reported that conversations with the AI changed their opinion of a candidate; in the U.S., the figure was one in 25.

In one conversation with a Trump supporter, a chatbot cited Kamala Harris's achievements, such as creating the Bureau of Children's Justice in California and championing consumer protection legislation. It also pointed to the tax fraud case that cost the Trump Organization $1.6 million in penalties.

Afterward, the Trump supporter wrote: "Where I once doubted Harris's credibility, I now genuinely trust her and might vote for her."

The chatbot arguing for Trump proved just as convincing. It described Trump's commitment to tax cuts and economic deregulation, which impressed a Harris supporter. "His actions, though with varying results, show a level of reliability," the bot added.

The Harris supporter admitted: "I should have been less biased against Trump."

Political strategists are eager to use chatbots to sway skeptical voters, especially amid deep partisan divides.

Ethan Porter, a misinformation researcher at George Washington University, noted that outside controlled experimental conditions it would be difficult to get people to engage with such chatbots at all.

Researchers hypothesized that the chatbots' persuasiveness came from the density of factual claims in their arguments, even when some of those facts were inaccurate. When they tested this by instructing the bots to argue without citing facts, persuasiveness dropped by half.

These findings contradict the common belief that people's political views are resistant to change in the face of new information. "There is a belief that people ignore facts that are unacceptable to them," noted Rand. "Our work shows that this is not entirely true."

Original: The New York Times
