B(IA)S

When Truth Is Told to You by Grok

“@grok: Are these images of U.S. troops in Los Angeles real?”

“Grok: The images are likely misleading or taken out of context. There is no verified evidence showing a recent deployment of troops in Los Angeles.”

“Thanks, good to know.”

This exchange actually took place beneath a post published on X on June 9, 2025, by California Governor Gavin Newsom, following clashes in Los Angeles after Trump deployed the National Guard to quell protests against immigration raids. The conversation was reconstructed by Al Jazeera in an investigation published last July.
In that specific case, the images were authentic: they showed dozens of National Guard soldiers sleeping on the floor of a cramped space, accompanied by a caption in which Newsom accused Trump of disrespecting the troops. Grok’s response, therefore, was incorrect, or at the very least misleading. Despite this, the user accepted the AI’s verdict without further verification and proceeded under the assumption that the images were fake.

This has become a common pattern on X: users tag Grok in the comments and take its response as a verdict, without consulting other sources or questioning how verifiable the information actually is. The problem is that, in cases like the protests in Los Angeles, several chatbots, including Grok, have misattributed facts or misrepresented context, further confusing an already manipulated media landscape.

Image via Google Creative Commons.

The issue becomes even more concerning when placed alongside the recent controversy surrounding Grok: the AI developed by xAI has been accused of facilitating the generation of sexualized and non-consensual deepfake images, including depictions of minors, leading to institutional reactions and calls for intervention. In Indonesia, for instance, authorities imposed a temporary ban, citing the need to protect women, children, and communities from the dangers of AI-generated pornographic deepfakes.
But while the image scandal dominates headlines, another, quieter issue is growing inside everyday conversations: the use of Grok as a fact-checker. This is no marginal detail. According to an analysis based on API data, Grok was invoked 2.3 million times in just a few days (between June 5 and 12), often with questions like “Is it true?”, “Explain the context,” or “Is this photo real?”

The problem isn’t just that a generative model can make mistakes: it is how it makes them. Grok tends to produce coherent and persuasive answers even when the informational basis is weak or inaccurate, and that “confidence” is often mistaken for authority. In high-stakes geopolitical situations, these errors become even more dangerous: a report by the Digital Forensic Research Lab showed inconsistent answers and hallucinated details during fact-checks related to the Israel–Iran conflict, including false attributions and invented elements.

This highlights the central contradiction: many users turn to Grok to “clarify” issues, but remain within the same platform that amplifies noise and incentivizes disinformation. It is a verification process that risks becoming circular: X verifies X, with the voice of a chatbot that can incorporate biases and unverified fragments present in the platform’s timeline.
Yet the social effect is powerful: the response is no longer seen as “an opinion,” but as a ready-made label of truth. When Grok makes a mistake, the error spreads with an aura of legitimacy: screenshots, reposts, “the AI said so.”

The real question, then, isn’t just “Is Grok reliable?”, but a more uncomfortable one: why are we turning a generative model—designed to produce plausible text, not to certify truth—into a substitute for fact-checking, a process built on sources, context, accountability, and transparency?

In a world where information circulates faster and more chaotically than ever, the risk is confusing speed with reliability. Relying solely on AI to determine what is true means abandoning doubt, discarding source comparison, and skipping the time needed to understand. This isn’t merely a technological issue; it’s a cultural one. We are replacing the slow, deliberate process of verification with an immediate answer. But speed doesn’t guarantee truth.

Alessandro Mancini