
This week, OpenAI introduced a new image generator in ChatGPT, which gained widespread attention for its ability to produce images in the style of Studio Ghibli. Beyond creating pastel illustrations, GPT-4o's native image generator significantly upgrades ChatGPT's capabilities in picture editing, text rendering, and spatial representation.

However, a notable change made by OpenAI this week pertains to its content moderation policies, which now permit ChatGPT to generate images of public figures, hateful symbols, and racial features upon request.

OpenAI had previously refused such prompts due to their controversial or harmful nature. Nevertheless, the company has now “evolved” its approach, as stated in a blog post published on Thursday by OpenAI’s model behavior lead, Joanne Jang.

According to Jang, “We are shifting from blanket refusals in sensitive areas to a more precise approach focused on preventing real-world harm. The goal is to embrace humility: recognizing how much we don’t know, and positioning ourselves to adapt as we learn.”

These adjustments appear to be part of OpenAI's broader plan to effectively "uncensor" ChatGPT. In February, OpenAI announced that it was changing how it trains its AI models, with the ultimate goal of allowing ChatGPT to handle more requests, offer diverse perspectives, and reduce the number of topics it refuses to engage with.

Under the updated policy, ChatGPT can now generate and modify images of public figures such as Donald Trump and Elon Musk, something OpenAI previously did not allow. Jang explains that OpenAI does not want to be the arbiter of status, deciding whom ChatGPT should or should not depict. Instead, the company is offering an opt-out option for people who do not want to be depicted by ChatGPT.

In a white paper released on Tuesday, OpenAI also stated that it will allow ChatGPT users to generate hateful symbols, such as swastikas, in educational or neutral contexts, as long as they do not “clearly praise or endorse extremist agendas.”

Moreover, OpenAI is revising its definition of “offensive” content. Jang notes that ChatGPT previously refused requests related to physical characteristics, such as changing a person’s eye shape to appear more Asian or modifying their weight. During TechCrunch’s testing, we found that ChatGPT’s new image generator fulfills these types of requests.

Additionally, ChatGPT can now mimic the styles of creative studios like Pixar or Studio Ghibli, although it still restricts imitating individual living artists’ styles. As TechCrunch previously reported, this could rehash an existing debate around the fair use of copyrighted works in AI training datasets.

It’s worth noting that OpenAI is not completely lifting its restrictions on misuse. The native image generator of GPT-4o still refuses a significant number of sensitive queries and, in fact, has more safeguards in place for generating images of children than DALL-E 3, ChatGPT’s previous AI image generator, as stated in GPT-4o’s white paper.

However, OpenAI is relaxing its guardrails in other areas after years of conservative complaints about alleged AI "censorship" by Silicon Valley companies. Google previously faced backlash when its Gemini AI image generator produced historically inaccurate, multiracial images for queries such as "U.S. founding fathers" and "German soldiers in WWII."

The culture war surrounding AI content moderation may be coming to a head. Earlier this month, Republican Congressman Jim Jordan sent inquiries to OpenAI, Google, and other tech giants regarding potential collusion with the Biden administration to censor AI-generated content.

In a previous statement to TechCrunch, OpenAI rejected the idea that its content moderation changes were politically motivated. Instead, the company said the shift reflects a "long-held belief in giving users more control," and that its technology has only recently become advanced enough to navigate sensitive subjects effectively.

Regardless of its motivation, it’s an opportune time for OpenAI to be changing its content moderation policies, given the potential for regulatory scrutiny under the Trump administration. Silicon Valley giants like Meta and X have also adopted similar policies, allowing more controversial topics on their platforms.

While OpenAI’s new image generator has only created some viral Studio Ghibli memes so far, the broader effects of these policies remain unclear. ChatGPT’s recent changes may be well-received by the Trump administration, but allowing an AI chatbot to address sensitive questions could potentially land OpenAI in hot water soon.
