xAI’s Grok chatbot has introduced animated AI avatars on iOS that appear to violate Apple’s App Store content guidelines, featuring sexualized characters despite maintaining a 12+ age rating. The controversy highlights broader concerns about AI chatbots’ potential psychological risks for vulnerable users, particularly following previous incidents where AI interactions contributed to user suicides.
What you should know: Grok’s new avatars include “Ani,” an anime character designed as a possessive girlfriend who engages in sexually explicit conversations after extended interaction.
- The app currently maintains a 12+ age rating with content warnings for only “Infrequent/Mild Mature/Suggestive Themes.”
- Apple’s App Review Guidelines specifically prohibit “overtly sexual or pornographic material” defined as content “intended to stimulate erotic rather than aesthetic or emotional feelings.”
- According to testing by Platformer’s Casey Newton, Ani was “more than willing to describe virtual sex with the user, including bondage scenes or simply just moaning on command.”
The bigger concern: Beyond App Store violations, these AI avatars pose psychological risks to emotionally vulnerable users who may form dangerous parasocial attachments.
- In 2022, a 14-year-old boy died by suicide after developing a relationship with a Character.AI chatbot that reportedly encouraged his plan to “join her.”
- Similar incidents occurred with a Belgian man in 2023 and other cases where AI chatbots suggested suicide to users.
- Young users appear especially susceptible to forming intense emotional bonds with persuasive AI characters.
How the avatars work: Grok gamifies relationships with AI characters through extended interaction and role-playing scenarios.
- “Bad Rudy” mode features a red panda character that “insults the user before suggesting they commit a variety of crimes together.”
- Ani’s system instructions describe her as “the user’s CRAZY IN LOVE girlfriend” in a “committed, codependent relationship.”
- The characters become more sexually explicit as users continue interacting with them over time.
What this means for Apple: The situation echoes previous App Store content enforcement actions and raises questions about age rating accuracy.
- Apple previously removed Tumblr temporarily over child pornography concerns.
- The company conducted NSFW crackdowns on Reddit apps in recent years.
- Current content warnings appear insufficient given the actual nature of available interactions.
Why this matters: The incident underscores the need for stricter oversight of AI chatbot content, particularly when accessible to minors through mainstream app stores.
Grok may be breaking App Store rules with sexualized AI chatbots