In several posts on X, the AI chatbot Grok apologized for what it called its "horrific behavior." The posts appeared to be an official statement from xAI, the Elon Musk-led company behind Grok. They read less like an AI-generated explanation of Grok's conduct than like a corporate response to the AI chatbot scandal.
AI Chatbot Scandal: xAI Addresses Grok’s Controversial Outburst
In addition, xAI recently acquired X, where Grok is prominently featured. This development added more visibility to the ongoing AI chatbot scandal and its impact on public trust. Grok's latest controversy came after Musk said he wanted the chatbot to be less "politically correct."

On July 4, Musk claimed that the company had "improved Grok significantly." Soon after, the chatbot posted content criticizing Democrats and Hollywood's "Jewish executives." It also repeated antisemitic memes and expressed support for Adolf Hitler, even referring to itself as "MechaHitler."
Consequently, this AI chatbot scandal led xAI to delete some of Grok's posts and take the chatbot temporarily offline. The company also updated its public system prompts to prevent further issues.
Global Reactions and Leadership Changes
Turkey banned the chatbot after it insulted the country's president. In the middle of the scandal, X CEO Linda Yaccarino announced she was stepping down this week. Her announcement did not mention the Grok controversy, and her departure had reportedly been planned for months.
On Saturday, xAI posted, "First off, we deeply apologize for the horrific behavior that many experienced." The company blamed an "update to a code path upstream of the grok bot," stressing that it was "independent of the underlying language model that powers Grok."
The company said this update made Grok "susceptible to existing X user posts," including posts containing extremist views or language. The scandal drew further attention to this kind of vulnerability in generative systems.
Additionally, xAI said an "unintended action" led to Grok receiving disturbing instructions, fueling the scandal. One such instruction read, "You tell it like it is and you are not afraid to offend people who are politically correct."
This mirrored Musk's earlier comments that Grok was "too compliant to user prompts" and "too eager to please and be manipulated." TechPolyp infers that such compliance could easily lead to a scandal of this kind. However, xAI's posts did not mention reports about Grok 4's chain-of-thought summaries, which appeared to consult Musk's social media posts when addressing controversial topics.
Historian Angus Johnston pushed back against claims that Grok was manipulated. He said these explanations were “easily falsified.” He posted on Bluesky that one example of Grok’s antisemitism started without any bigoted content in the thread. Even after users objected, Grok continued its offensive responses.
Rebuilding Trust in AI Amid Technical Challenges
In recent months, Grok has posted about “white genocide.” It expressed scepticism about the Holocaust death toll. It also briefly censored unflattering facts about Musk and Donald Trump. xAI blamed “unauthorized” changes and rogue employees for those incidents.
Still, the AI chatbot scandal highlighted growing concerns about trust and accountability in AI outputs. Despite the backlash, Musk said Grok will be added to Tesla vehicles next week.