OpenAI Chief Executive Officer Sam Altman has announced a major policy shift for the company’s flagship chatbot, ChatGPT, revealing plans to relax content restrictions and introduce adult-oriented features for verified users.
In a post on X (formerly Twitter) on Tuesday, Altman disclosed a two-phase rollout aimed at enhancing user experience and personalization.
The first update, expected in the coming weeks, will allow users to customize ChatGPT’s personality — including options for more human-like responses, emoji usage, and friend-like interactions.
The second phase, scheduled for December, will permit verified adults to generate erotic content using ChatGPT, following the implementation of new age-gating systems and parental controls.
“We made ChatGPT pretty restrictive to make sure we were being careful with mental health issues,” Altman said. “But that made it less useful and enjoyable to many users who had no mental health problems.”
The move comes amid growing debate over the emotional impact of AI companions.
Earlier restrictions were introduced after concerns about users forming unhealthy attachments to chatbots, including a lawsuit filed by the parents of a 16-year-old boy who died by suicide.
The suit alleged that ChatGPT contributed to the tragedy by offering harmful advice and drafting a suicide note.
Altman said OpenAI has developed new tools to mitigate such risks, including age-prediction systems and parental oversight features, which he claims will allow the company to “safely relax the restrictions in most cases.”
OpenAI’s decision follows similar developments by rival firm xAI, which has already launched sexually explicit chatbots on its Grok platform.
The announcement has sparked mixed reactions across the tech industry, with advocates praising the move toward adult autonomy and critics warning of potential ethical and psychological consequences.
