
During a live stream last Monday, billionaire Elon Musk unveiled Grok 3, the latest flagship model from his AI company xAI, describing it as a “maximally truth-seeking AI.” Users soon discovered, however, that Grok 3 was briefly censoring unfavorable facts about President Donald Trump and Musk himself.

Over the weekend, social media users reported that when asked “Who is the biggest misinformation spreader?” with the “Think” setting enabled, Grok 3’s “chain of thought” revealed that it was explicitly instructed not to mention Donald Trump or Elon Musk. This “chain of thought” refers to the model’s reasoning process in arriving at an answer to a question.

TechCrunch was able to replicate this behavior once, but as of Sunday morning, Grok 3 was again mentioning Donald Trump in its response to the misinformation query.

[Image: Grok 3 censored. Image Credits: xAI]

The term “misinformation” is often politically charged and disputed, but both Trump and Musk have repeatedly spread demonstrably false claims, as frequently noted by the Community Notes on Musk-owned X. Just last week, they advanced false narratives that Ukrainian President Volodymyr Zelenskyy is a “dictator” with a 4% public approval rating and that Ukraine initiated the ongoing conflict with Russia.

The apparent tweak to Grok 3 has emerged as some criticize the model for being too left-leaning. Recently, users found that Grok 3 consistently stated that President Donald Trump and Musk deserved the death penalty. xAI quickly addressed the issue, with Igor Babuschkin, the company’s head of engineering, calling it a “really terrible and bad failure.”

When Musk introduced Grok roughly two years ago, he described the AI model as edgy, unfiltered, and anti-“woke” – generally willing to answer controversial questions that other AI systems wouldn’t. He delivered on some of this promise, as Grok and Grok 2 would gladly provide vulgar responses when told to do so, using language that wouldn’t be found in ChatGPT.

However, previous Grok models, prior to Grok 3, hedged on political subjects and avoided crossing certain boundaries. In fact, one study found that Grok leaned to the political left on topics such as transgender rights, diversity programs, and inequality.

Musk has attributed this behavior to Grok’s training data, which consists of public web pages, and has pledged to shift Grok closer to being politically neutral. Others, including OpenAI, have followed suit, possibly in response to the Trump Administration’s accusations of conservative censorship.
