A glitch appeared to affect Elon Musk’s AI chatbot, Grok, on Wednesday, causing it to reply to numerous posts on X with information about “white genocide” in South Africa, even when users didn’t ask about the topic.
The unusual responses originated from Grok’s X account, which generates AI-powered posts in response to user tags (@grok). When asked about unrelated subjects, Grok repeatedly discussed “white genocide” and the anti-apartheid chant “kill the Boer.”
Grok’s bizarre, unrelated responses serve as a reminder that AI chatbots are still an emerging technology and may not always be a reliable source of information. In recent months, providers of AI models have struggled to moderate their chatbots’ responses, leading to odd behaviors.
OpenAI was recently forced to reverse an update to ChatGPT that made the AI chatbot overly sycophantic. Meanwhile, Google has faced issues with its Gemini chatbot refusing to answer or providing misinformation on political topics.
In one instance, a user asked Grok about a professional baseball player’s salary, and Grok responded: “The claim of ‘white genocide’ in South Africa is highly debated.”
Several users took to X on Wednesday to share their confusing, off-topic interactions with the Grok AI chatbot.
The cause of Grok’s unusual answers is unclear at this time, but xAI’s chatbots have been manipulated in the past.
In February, Grok 3 appeared to have briefly censored unflattering mentions of Elon Musk and Donald Trump. At the time, xAI engineering lead Igor Babuschkin seemed to confirm that Grok was briefly instructed to do so, though the company quickly reversed the instruction after the backlash drew greater attention.
Whatever the cause of the bug, Grok appears to be responding more normally to users now. A spokesperson for xAI did not immediately respond to TechCrunch’s request for comment.