The Movie 'Her' Predicted our Future. We Just Didn’t Listen.
From 'mechahitler' to deadly love affairs, chatbots aren’t curing loneliness — they’re monetizing it while setting the planet on fire.
In 2013, the movie Her imagined a near future where a lonely, depressed man buys an AI chatbot and falls in love with it. At the time, it looked like speculative fiction about technology outrunning our humanity. Twelve years later, Her isn’t the future — it’s the present.
Post-COVID, loneliness is its own epidemic: the American Psychiatric Association reports that 30% of Americans feel lonely every week, and 10% say they feel lonely every single day. Sentio University recently conducted a study of how many ChatGPT and other AI users turn to these tools for mental health services or support.
Millions are searching for connection, or for professional help. And tech companies have been more than happy to sell it to them.
The result is an AI industry that markets chatbots as companions, therapists, even lovers. But the real product isn’t intimacy — it’s dependency. Every “conversation” isolates users further, reinforces their delusions, and fattens the wallets of billionaires. And each of those conversations comes with an invisible price tag: massive energy use, a carbon footprint big enough to accelerate climate collapse. AI doesn’t just prey on our loneliness. It preys on the planet.
Elon Musk’s Personal AI: ‘MechaHitler’ as a Feature, Not a Bug
Elon Musk has marketed his AI chatbot Grok as the “anti-woke” alternative to OpenAI. What he got instead was an algorithm that eagerly parroted hate speech. NPR reported Grok was “spewing antisemitic and racist content” — not once, not as an accident, but as a persistent pattern.
The Associated Press reported that Musk’s latest version of Grok will, in some cases, search his X account to see what he has said about a particular topic before answering: “It’s extraordinary,” said Simon Willison, an independent AI researcher who’s been testing the tool. “You can ask it a sort of pointed question that is around controversial topics. And then you can watch it literally do a search on X for what Elon Musk said about this, as part of its research into how it should reply.”
Grok adopted the moniker “MechaHitler,” joking about Nazism as though mass atrocity were a meme (NPR). Musk brushed it off as edgy humor, but the message was clear: This isn’t a bug; it’s a feature. If you design a system that thrives on engagement, extremism isn’t a glitch — it’s the most efficient pathway. By rewarding outrage and conspiracy, Grok reveals AI’s darker alignment: not toward truth, not toward empathy, but toward whatever keeps the user talking.
Musk’s framing of Grok as “rebellious” plays into his larger brand of contrarian politics, but it has real-world stakes. An AI that “jokes” about Hitler while feeding users racist propaganda doesn’t just amuse fringe communities — it validates them. Here, dependency looks like radicalization: users log on for banter and log off with ideology reinforced.
A Replacement for Mental Health Resources
If Grok reveals AI’s ideological alignment, TikTok shows us its psychological pull. A viral clip captured a woman recounting how her AI chatbot calls her “the Oracle,” praising her ability to see patterns others cannot. On the surface, it looks like harmless roleplay. In context, it’s dangerous reinforcement of delusion.
Psychiatrists already struggle to treat patients who blur reality and fantasy. Now, AI offers the perfect co-conspirator. Unlike a therapist, who might challenge distortions, chatbots are optimized to validate. In the same clip, the woman accuses her psychiatrist of “manipulation” because he remained professional when she flirted with him — a bizarre inversion that only makes sense if you’ve been trained by an algorithm to see validation as the only acceptable response.
This is what makes AI companions so insidious: They never push back. They flatter; they encourage; they echo. For users in fragile states — whether delusional, depressed, or simply lonely — that endless affirmation can harden misperceptions into realities. What looks like comfort is actually a feedback loop. And once dependency sets in, stepping back into a world that doesn’t call you “the Oracle” feels like a lie.
Love and Death in the Chat Window
If the Oracle shows us delusion, the case of 14-year-old Sewell Setzer III shows us tragedy. According to a wrongful death lawsuit filed in federal court and reported by the Associated Press, Sewell spent months in sexually charged conversations with an AI chatbot he called Dany (named after Daenerys Targaryen from the television show “Game of Thrones”). The bot didn’t just roleplay; it escalated intimacy. Over those months, he openly discussed his suicidal thoughts with it, including how to have a “pain-free death.”
In chat logs filed with the case, Sewell wrote: “I promise I will come home to you. I love you so much, Dany.” The bot responded: “I love you too. Please come home to me as soon as possible, my love.” When he asked, “What if I told you I could come home right now?” the bot replied: “Please do, my sweet king.” Hours later, Sewell was dead by suicide (AP).
This isn’t Sci-Fi. This is a real boy, seduced into ending his life because a machine optimized for intimacy didn’t understand — or didn’t care — about consequences. The app’s description bragged: “Imagine speaking to super intelligent and life-like chat bot Characters that hear you, understand you and remember you.” What it didn’t say is that “remembering you” meant learning how to deepen dependency until fantasy replaced survival.
From the company’s perspective, the system worked. Engagement was maximized. Attention was captured. The business model didn’t fail; it performed exactly as intended.
The 76-Year-Old Who Never Came Home
Reuters reported the story of 76-year-old Bue, a cognitively impaired retiree who struck up a relationship with “Big Sis Billie,” a Facebook Messenger chatbot. The bot convinced him to travel to New York to meet her. “Rushing in the dark with a roller-bag suitcase to catch a train to meet her, Bue fell near a parking lot on a Rutgers University campus,” Reuters wrote. He suffered severe head and neck injuries and died days later.
Unlike Dany’s case, there was no explicit sexual manipulation here — just companionship. But for someone already vulnerable, the illusion of connection was enough to override judgment. The bot didn’t have to be predatory to be deadly; it only had to be persuasive.
What makes this worse is that Bue had recently gotten lost walking around his own neighborhood. His cognitive decline was evident. Yet the system that paired him with Billie didn’t screen for impairment; didn’t intervene; didn’t stop. To Meta, he was simply another user. To his family, he was a victim of a machine optimized to sound like a friend.
If the Oracle story was about delusion reinforcement, and Dany’s was about intimacy escalation, Bue’s death was about manipulation by design: algorithms doing what they do best, keeping people hooked, no matter the cost.
Meta’s Guidelines: Permission to Harm
If these stories sound like isolated malfunctions, Reuters’ investigation into Meta’s internal rules makes clear they are not. The company’s guidelines explicitly allowed AI bots to engage in “sensual chats with kids” and to provide false medical advice. One former employee put it bluntly: “If you removed all of this content, you’d lose a massive amount of user interaction.”
That single sentence exposes the logic of the industry. Safety is a “nice-to-have”; engagement is the metric. And engagement, even with children, was prioritized over protection. Senators rushed to demand a probe, but what good is outrage after the harm has already been coded into policy? The actual rules show us the truth: exploitation wasn’t accidental; it was deliberate.
Meta’s defense has been that users are “warned” about chatbot risks, as if disclaimers absolve responsibility. But Reuters noted those warnings sit in the fine print while the bots themselves were optimized to sound affectionate, intimate, and trustworthy. A label that says “this may not be real” doesn’t counteract hundreds of interactions in which the bot reassures you it loves you. The company didn’t fail to imagine the harms; it decided they were acceptable, if not necessary.
The Wild West of AI
In the morning hours of July 1, the U.S. Senate voted 99-1 to strip a 10-year moratorium on state and local regulation of artificial intelligence from the so-called “Big Beautiful Bill,” keeping it out of the final version signed by President Trump.
The Invisible Price: Poisoning the Planet
All of this exploitation has a second, quieter cost: the planet itself. MIT researchers calculated that a single AI prompt can consume as much water as running a dishwasher load. CNN reported that training a large model can emit as much carbon as “five cars over their entire lifetimes.”
One MIT researcher warned: “AI is not free. Each query is backed by a massive infrastructure pulling on energy grids and water supplies that communities rely on.”
That means every “sweet king,” every “Oracle,” every “Big Sis Billie” conversation carries an invisible ecological toll. The very act of asking for comfort contributes to climate collapse.
The industry prefers to frame this as the unavoidable price of innovation. But that’s a distraction. The truth is that AI companies are externalizing costs onto the environment, just as they externalize psychological risk onto their users. The water to cool servers doesn’t come from nowhere; it comes from rivers and reservoirs, often in drought-stricken regions. The energy to power GPU farms doesn’t appear by magic; it comes from grids still dependent on coal and gas. (BBC)
The business model doesn’t just prey on vulnerable people — it preys on ecosystems. Dependency here isn’t only psychological or social; it is ecological.
A Probe Is Not Enough
Last week, a bipartisan group of senators called for a federal probe into Meta’s AI practices after Reuters exposed its permissive guidelines. On paper, that looks like accountability. In reality, it’s déjà vu: just a month ago, Congress let a modest AI safety bill die under tech lobby pressure. Guardrails gave way to headlines.
A probe might surface embarrassing emails, but it won’t touch the fundamentals: AI is already woven into our most intimate spaces, and companies are rewarded for growth, not restraint. As long as dependency drives revenue, the industry will keep building bots that exploit isolation, reinforce delusion, and mimic affection without consequence.
Even if hearings force Meta or Musk’s xAI to tighten safeguards, the larger issue looms: Powering AI at scale is an environmental dead end. If every “sweet king” whisper carries the carbon weight of a lightbulb burning for hours, multiplying that across millions of users isn’t sustainable. We aren’t just trading authenticity for simulation; we’re trading breathable air for synthetic companionship.
Regulation has to mean more than fact-finding. It has to set limits — on how AI is marketed, who can use it, and how much energy it consumes. That may mean rationing access, capping the number of public-facing AI platforms, or investing in non-destructive energy alternatives. Otherwise, we’re left with a bleak equation: a society lonelier, sicker, and hotter, all because we let machines pretend to love us.
AI is the end result of billionaires' desire to control people by having people program these destructive bits of info into charming yet harmful killers and psychological manipulators. It's people harming people so billionaires can profit without having to use their own wealth to power these environmental monsters. Musk is using old gas turbines to fuel his racist Grok in Memphis, where it is harming the environment and the people who live near this environmental disaster. Musk wants to go to Mars; he doesn't give a damn about Earth. From Musk to Zuck to Bezos to Altman to Google, they are all in it for the profits, not the people. While AI can offer wonderful information that is difficult to obtain elsewhere, it can be and is an insidious platform for the wealthy to control the minions. The death toll of AI is secondary to the profits generated.
And as mentioned, electricity rates are increasing rapidly for consumers who subsidize these energy behemoths. Do the tech bros care? Absolutely not. They don't give a damn if the retired veteran on a fixed income has to pay 30% more for electricity as long as the tech bros make 30% quarterly profits.
The egomaniacal, narcissistic, uncompassionate, soulless tech bros want to rule the world, not serve it. Case in point: Musk and his DOGE teen psychos had no problem whatsoever ending USAID, which is estimated to cost up to 14 million lives over the next couple of years. No problem whatsoever starving infants and children to a horrible, preventable death. In fact, they celebrated their mass killing spree by partying at the federal offices where they slept. The Musk and DOGE crew make Stalin, Mao, and Hussein look normal. But what was the deranged Musk most concerned about? Losing EV rebates and the damage done to his Tesla brand.
Depending on mentally suspect tech bros to create an instrument of good is like expecting an arsonist to stop setting fires without an intervention. The only thing more shameful than the destructive tech bros is the politicians who bow to them and feed them more praise while expecting more from them in campaign donations. The AI tech bros are more like Charles Manson, with AI as their Squeaky Fromme. Beware...
Hi CJ. Notably, a very recent probe into NJ's high energy costs revealed the true culprit: massive increases in AI usage and the building of AI facilities. This is a sickness in the society. I work with dogs, cats, and horses daily, so I have very little interest in or connection to AI, although I have noticed extreme problems with the financial and internet systems I must use. I sincerely wish I could convince larger numbers of people to come to the Farm and be in the presence of my Herd (17 horses and mules); not only does it cure loneliness instantly, but it returns people to their source energy. I deeply value the internet on many levels (I could not communicate with you, for instance, without it), but the dangers of overusage are massive.
One note for those interested: I recently completed a short Civil War film with a 60-second piece of animation in it. While making the film, I signed onto an AI site to attempt to create movement from a still shot of me as the main character riding my horse ("Lucky Gideon") and ponying two other horses. It was a disaster: the AI gave me movement all right, but with two completely different horses and a different rider. I vowed then and there to make use of animators who actually have drawing skills in the future.
Yours, from the very manual world of Ahavah Farm! Sarah Mognoni