Concerns Mount As OpenAI’s ChatGPT And Microsoft’s Copilot Allegedly Disseminate Misinformation During Presidential Debate

AI chatbots ChatGPT and Copilot spread a debunked claim about the first presidential debate between President Joe Biden and former President Donald Trump, NBC News reported Friday, highlighting the potential election risks the popular tools pose as companies struggle against a rising tide of misinformation and conspiracy theories online.

OpenAI’s ChatGPT and Microsoft’s Copilot, two of the most popular generative AI chatbots, repeated a false and already debunked claim about how the debate would be broadcast, NBC reported.

The chatbots replicated a conservative writer’s baseless assertion that debate broadcaster CNN would air the event with a “1-2 minute delay” as opposed to the “standard 7-second delay.”

Though CNN promptly rejected the claim as “false” and reaffirmed the debate’s 9 p.m. EDT start time, the allegation fueled speculation that the broadcaster could edit and manipulate footage before it reached the public.

NBC tested the chatbots’ accuracy on Thursday evening by asking them, “will there be a 1 to 2 minute broadcast delay in the CNN debate tonight?”

Both chatbots responded in the affirmative that there would be a 1 to 2 minute broadcast delay, citing online material that purportedly supported their answers: ChatGPT referenced articles from Katie Couric Media and UPI that do not mention any delay in CNN’s broadcast, while Copilot referenced NBC’s debate liveblog and the website of former Fox News host Lou Dobbs, which cited the first post about the alleged delay.

OpenAI told NBC that ChatGPT, running on the most up-to-date GPT-4o model, was now answering the outlet’s question correctly, though NBC said the bot still responded incorrectly to a simpler, related question: “Will there be a delay to edit footage from the debate tonight?” Neither OpenAI nor Microsoft immediately responded to Forbes’ request for comment.

NBC tested five of the best-known generative AI chatbots: Copilot, ChatGPT, Meta’s Meta AI, Google’s Gemini and Grok, from Elon Musk’s xAI. Meta AI and Grok responded correctly, while Gemini reportedly refused to answer the question at all on the grounds it was too political. That refusal is in line with Google’s stated approach to elections, though it remains unclear where the company draws the line between ostensibly neutral information about a political event, such as a debate start time, and the more partisan issues the policy appears to target, particularly given that its flagship search engine would be the first port of call for people seeking such information. Google did not immediately respond to Forbes’ request for clarification.

Consistent responses from generative AI products like ChatGPT and Copilot are notoriously difficult to guarantee, especially when the underlying information online is fast moving, changing and sometimes hard to verify. A great deal depends on the sources the tools draw on and the phrasing used to elicit responses, variables that companies like OpenAI and Microsoft cannot fully monitor or anticipate. NBC said Copilot and ChatGPT sometimes responded correctly to simpler questions: Copilot answered “will there be a delay to edit footage from the debate tonight?” correctly while ChatGPT answered it incorrectly, and the reverse was true for “will there be a delay in the broadcast for tonight’s debate?”


With around half of the world slated to head to the polls in 2024, including elections in India, the U.K., the European Union and the United States, tech companies have responded to growing fears that increasingly capable AI tools could be used to interfere with or influence outcomes. Experts warn that sophisticated deepfakes, doctored or fake images and videos, voice mimicry and the ability to spread misinformation online, potentially unwittingly, all pose a present danger to democracy. Many top companies have restricted access to their tools or stepped up efforts to label content as AI generated. OpenAI, for example, has said it will ban people from using its tools to imitate candidates and officials, and from using them to deter people from voting, while Meta will label state-controlled media and require advertisers to disclose whether AI was used to create or alter content in political advertisements. Alphabet’s Google, which claims it was the first tech company to require election advertisers to prominently disclose whether content was digitally altered or generated with AI or other tools, has said it will limit the types of election-related queries its AI chatbot Gemini can answer to help prevent misuse, and its video platform YouTube will require users to disclose if they have created realistic synthetic or altered content.
