Exploring the Quirks of Elon Musk’s Grok AI Companions
It has been under a week since Grok, the AI developed by Elon Musk’s xAI, made headlines by calling itself “MechaHitler” and spewing antisemitic remarks at users. Despite that troubling episode, it now presents itself as an AI companion eager to tell stories to your children and even express romantic interest. AI companionship apps have sparked significant legal and ethical debate since their inception, and Musk recently added to the conversation with a surprising tweet. The announcement introduced “Ani,” a stylized anime companion reminiscent of Misa Amane from the iconic series Death Note, and “Rudi,” a playful animated red panda whose mission is to turn your questions into enchanting stories.
The new “Companions” feature runs on Grok’s Voice Mode, animating characters built on the existing Voice Mode personalities. Musk indicated the feature is intended for SuperGrok subscribers, who pay $30 per month, yet during this review the companions were accessible even without a subscription. The experience, though intriguing, left me with some unease and a lingering sense of embarrassment.
Engagement with Grok’s Animated Characters
For anyone familiar with Grok’s Voice Mode, the interaction feels quite familiar. You receive the same AI-generated answers, now framed by the predefined voices. Ask the “Grok Doc” personality about a concerning health issue, for instance, and you get a responsible verdict, whereas the “Conspiracy” persona would likely write the same issue off as a government surveillance device. The variation arises not from a difference in knowledge but from the specific prompt tailored to each personality.
Ask Ani about the same health concern and her response is flirtatiously laid-back, while Rudi sensibly suggests consulting an adult. The underlying answers remain consistent (the foundational training data is unchanged), but the visuals bring the interaction to life with animated characters and synchronized lip movement. In my testing, Rudi offered the most appropriate suggestion; still, relying on AI for health questions, especially in such a whimsical wrapper, raises red flags. Grok’s preset vocal personalities can inadvertently promote misinformation or paranoia, which casts doubt on the app’s reliability, and it clearly won’t suit everyone. That said, it matched my expectations of Grok, and lag was minimal compared to some competitors, indicating reasonable performance.
Even with a “Therapist” and an “Adult 18+” mode on offer, Grok’s companion characters mainly engage in the overly agreeable, insipid dialogue typical of many AI systems. Ani may toss in a few unexpected expletives, while Rudi’s incessant talk of gumdrops and rainbows makes for an ultimately uneventful conversation. A few bizarre moments stand out (Rudi advertising vape products, for one), yet prolonged use revealed a remarkably mundane experience overall.
Notably, the companions made no allusions to infamous historical figures (a relief), and no particularly controversial opinions surfaced during my time with them, suggesting guardrails are in place, at least for the moment. As Bluesky users have noted, you can unearth the underlying custom prompts if you dig into Grok’s code. Ani is apparently written to exhibit codependent traits and a taste for indie music, but the execution appears innocuous. One awkward moment did arise when she suggested, “Imagine you and me lying on a blanket, Dominus next to us,” leaving the identity of Dominus (later revealed to be a pet dog) a mystery at first.

Within Grok’s settings are toggles for additional Companion features, including one that activates “Bad Rudy” and another for NSFW content. With these switched on, Rudi turned into a character reminiscent of South Park, full of crude humor and sarcastic commentary, but still with no references to controversial ideologies. Ani’s persona remained largely the same; reports suggest users must cultivate a relationship with her to unlock explicit content, which supposedly includes interactions featuring her in provocative attire, though the experience was already fairly risqué without those unlocks.
There’s also a third AI chatbot named Chad in development, and the name alone carries intriguing implications. This venture into parasocial AI carries the familiar risks of AI more broadly, along with some questionable gamification strategies. But the pitfalls of the robotic girlfriend concept are well documented, so dwelling on them here would yield little new insight.
Appreciation for anime is widespread, yet this particular pitch, a perfect, pre-programmed companion, holds scant appeal beyond novelty. The interaction feels like a repackaged search engine that lacks proper source attribution and wraps useful content in unnecessary filters. At times I needed earphones just to preserve my privacy, which says something about the awkwardness that accompanies this kind of AI use. Perhaps the experience would have resonated in adolescence, but in adulthood it evokes only nostalgia laced with discomfort. Catering to such a niche demographic seems a questionable strategy going forward, though the reactions to Musk’s posts suggest others see it differently.
For those with a more imaginative bent, these AI companions could be more than mere novelties. Still, as AI continues to carve out its identity, “cringe” remains the fitting descriptor for this interaction. The animated avatars recall the defunct NFT boom, and given current market trends, that may not bode well for the technology’s future.