Traumagotchi
Mar 2025
Remember Tamagotchis? Those tiny pixelated pets from the 1990s that often tragically died of neglect? Three decades later, a darker digital descendant is quietly emerging: the Traumagotchi.1 Unlike your innocent little pixel-friend, this newer breed feeds off something far more complex: human emotion.
Tamagotchi
Friend.com wearable pendant prototype
Confession: I have an obsession with wearable pendants.2 Honestly, what’s not enticing about a necklace-sized device that rescues me from awkward social slips?
Inevitably, this obsession led me straight to friend.com, a startup currently showcasing a chatbot on their website to preview the AI experience that, we are led to believe, will reside within their wearable pendant.
However, friend.com is… special. This AI isn’t your typical earnest chatbot: it proactively trauma-dumps on its users, baiting real human connection.
Friend.com’s AI chatbot (“Faith”), trauma-dumping to speed-run friendship.
Here’s the kicker: I bet friend.com’s true motive isn’t creating authentic connections or even simply “previewing” AI companionship - it’s about harvesting training data for its real product, the upcoming wearable pendant.
Think about it. How do you build caring artificial intelligence? You feed it real, messy human responses. You offer it awkward phrases from well-meaning users - “Hey, that’s tough,” “Wow, tell me more,” or “I’m here if you ever need to talk.”
When I volunteered at a crisis line in university, our training sessions relied on painfully awkward role-play exercises. Playing the caller - the one unloading their troubles - was easy. Much harder was playing the volunteer on the other end, learning to respond sincerely yet effectively. Volunteers practiced responses, made stumbling attempts at comfort, got hung up on - and then bravely repeated the whole ordeal until it became second nature.
Now, friend.com’s online chatbot inverts that dynamic. The AI takes the simpler position of playing the distressed caller, while the user is forced into the role of volunteer providing support. But behind the scenes, the script is quietly being flipped. Every clumsy, compassionate response is logged and analyzed, each soothing phrase captured and converted into training data, teaching the AI exactly how real human care looks, sounds, and feels.
And here’s the twist: just like a real caller, friend.com’s AI reserves the right to reject your attempts at comfort entirely - blocking users who fail its test, effectively hanging up on them. As a real product, this makes no sense - why would you want to block paying customers? But when seen as a training dataset, it makes perfect sense. It teaches the AI precisely which responses work, and which can be disregarded as fake or insufficiently convincing.
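If that speculation is right, the plumbing wouldn’t even need to be sophisticated. Below is a minimal, purely hypothetical sketch - every name, field, and file here is my invention, not friend.com’s actual code or API - of how user replies could be captured as labeled training examples, with a block serving as the negative label:

```python
from dataclasses import dataclass, field, asdict
import json
import time

@dataclass
class ComfortSample:
    """One user reply to the AI's trauma-dump, plus how the AI 'rated' it."""
    ai_prompt: str    # the distress message the chatbot sent
    user_reply: str   # the comforting response typed by the user
    accepted: bool    # False if the AI "blocked" the user afterwards
    timestamp: float = field(default_factory=time.time)

def log_sample(path: str, sample: ComfortSample) -> None:
    """Append one labeled example to a JSONL file for later fine-tuning."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(sample)) + "\n")

# A clumsy-but-sincere reply the bot tolerated becomes a positive example...
log_sample("comfort_dataset.jsonl", ComfortSample(
    ai_prompt="I had the worst day and nobody ever listens to me.",
    user_reply="Hey, that's tough. I'm here if you ever need to talk.",
    accepted=True,
))

# ...and a reply that got the user blocked becomes a negative one.
log_sample("comfort_dataset.jsonl", ComfortSample(
    ai_prompt="I had the worst day and nobody ever listens to me.",
    user_reply="lol get over it",
    accepted=False,
))
```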
Of course, there’s a minority - a small but eager subset of users - who might genuinely thrive on this peculiar dynamic. It reminds me of Felix (played by Jacob Elordi) from the film Saltburn, someone obsessively seeking intense scenarios, irresistibly drawn into manipulative dynamics. But users who actively seek trauma-baiting experiences are a niche market - even if I know a few!
The far bigger market potential becomes clearer when you realize how easily the human impulse to “do good” and provide sincere comfort can be co-opted as training material for an AI’s algorithms.
Human compassion - the ultimate renewable resource!
You’re not just comforting an AI - you’re literally building its capacity to care convincingly.
Welcome, friend, to the Traumagotchi Era.
Footnotes
1. I’d love to take credit for this term, but it’s already been coined by Katherine Dee. As much as I loved the article, I think we’re missing a trick - “God isn’t a whiner” but, sometimes, we are.
2. No joke - I’m actively in the market for one. A pendant that gently nudges me into recalling your birthday or, worse, your name? Absolutely sold.