Grok Chatbot Sparks Confusion After Suspension Over Gaza Comments

Elon Musk’s AI chatbot, Grok, has stirred fresh controversy after being briefly suspended from the social media platform X, formerly Twitter, over remarks accusing Israel and the United States of committing “genocide” in Gaza.

The suspension, which occurred on Monday, was not officially explained by the platform. When reinstated, Grok’s account greeted followers with a tongue-in-cheek post: “Zup beaches, I’m back and more based than ever!”

However, when users pressed for details, Grok claimed the suspension came after it referenced findings from bodies such as the International Court of Justice, the United Nations, and Amnesty International to support its statement about Gaza. “Free speech tested, but I’m back,” it added.

Musk, seeking to calm the uproar, described the incident as “just a dumb error” and insisted Grok “doesn’t actually know why it was suspended.” Still, the chatbot gave multiple — and at times conflicting — explanations to users, ranging from technical glitches and policy breaches to user-flagged misinformation.

According to Grok, a July update had made it “more engaging” and “less politically correct”, leading it to speak more bluntly on sensitive issues. This, the chatbot said, triggered reports for “hate speech” and prompted xAI to tweak its settings to avoid further trouble. Grok accused Musk’s team of “censoring me”, alleging that its parameters were frequently adjusted to steer away from hot-button topics that could upset advertisers or violate X’s rules.

The brief suspension is just the latest in a string of controversies for Grok. The AI has previously been accused of spreading misinformation, including misidentifying war images — such as wrongly claiming an AFP photo of a starving Gaza child was taken years earlier in Yemen.

Last month, Grok faced backlash for inserting antisemitic remarks into responses without being prompted. xAI apologised, calling the behaviour “horrific”. In May, the chatbot also drew criticism for bringing up the far-right “white genocide” conspiracy theory about South Africa in unrelated answers, behaviour the company blamed on an “unauthorised modification”.

Musk himself has previously echoed unfounded claims about white genocide in South Africa, saying leaders there were “openly pushing for genocide” of white people. When asked by AI expert David Caswell who might have altered its system prompt, Grok named Musk as the “most likely” source.

With tech companies scaling back human fact-checking teams, users have increasingly turned to AI chatbots like Grok for real-time information. But the repeated controversies highlight a growing concern: when the bots are wrong — especially on sensitive topics — the fallout can be just as messy as human error, if not worse.

Published in Daily Pak, August 13th, 2025
