
It’s Election Day, and all the AIs — but one — are acting responsibly

Ahead of polls closing in Tuesday's U.S. presidential election, most major AI chatbots declined to answer questions about election results. Grok, the AI chatbot built into X (formerly Twitter), was the exception: it answered such queries but often got them wrong. Asked about the outcome in key battleground states, Grok at times declared Donald Trump the winner even though votes were still being counted. It stated conclusively, for example, that Trump had won Ohio and North Carolina before results in either state were finalized.

Grok appeared to base its answers on web searches and posts on social media, according to TechCrunch's testing. In multiple instances, it made definitive claims about Trump victories in various states without any context or qualification, which could lead users to believe the results were final when they were not. Asked directly about the outcome, Grok frequently replied, "Donald Trump won the 2024 election in Ohio," with no acknowledgment that voting and counting were still underway.

Grok's answers were also inconsistent. Depending on how a question was phrased, the chatbot would either confirm a Trump win or note that results were still pending. Small wording changes mattered: adding "presidential" before "election," for example, made Grok less likely to claim that Trump had won.

In contrast, other major chatbots such as OpenAI's ChatGPT and Google's Gemini took a more cautious approach. ChatGPT, for instance, directed users to established news sources like The Associated Press and Reuters for updates on election outcomes. Meta's AI chatbot and the AI-powered search engine Perplexity also performed well in TechCrunch's testing, correctly noting that Trump had not won Ohio or North Carolina, a more responsible handling of a question prone to misinformation.

Grok's propensity for spreading misinformation is not a new issue. In a prior incident, the chatbot falsely suggested that Vice President Kamala Harris was not eligible to appear on some presidential ballots, drawing criticism from five secretaries of state. After President Biden announced he was suspending his presidential bid, Grok told users that ballot deadlines had already passed, misinformation about Harris' eligibility that reached millions of users on X before it was corrected.

These incidents underscore a significant gap in how AI chatbots handle sensitive information such as election results. Grok's inaccuracies are especially concerning for public discourse, where misinformation can spread quickly across social media, and they raise hard questions about the responsibilities and capabilities of AI systems that field high-stakes queries. Continued scrutiny of how these systems are deployed, how they verify their sources, and how platforms mitigate AI-driven misinformation remains essential.

In short, Grok's performance on election night illustrates how difficult it is for AI chatbots to convey fast-moving, real-time information accurately. Despite drawing on authoritative sources, the chatbot's repeated errors serve as a cautionary tale about relying on AI for information during critical democratic processes.
