Generative AI is Damaging Children’s Mental Health and Safety in the Age of “Brain Rot”
The term “brain rot” has been chosen as the 2024 Word of the Year by Oxford Languages, capturing a potential societal crisis in which mental and intellectual engagement deteriorates through trivial digital consumption. I discussed in an earlier editorial why I see this choice as a dangerous indicator of AI’s negative impact on society. It is particularly pertinent to my conversation with Dr Valeria Sadovykh on The CEO Retort Podcast several weeks ago about the way digital technology has degraded our cognitive functions.
However, there is a darker side. The choice also coincides with alarming revelations about generative AI’s impact on children’s mental health, particularly the deeply unsettling practices of platforms such as Character AI.
As someone who has dedicated over two decades to the AI field, I find myself oscillating between frustration and disbelief because, despite what some may tell you, we have the technological capability to make the world a better place. Yes, I know that has become a meaningless Silicon Valley cliché. But, back in the day, I and many other entrepreneurs – certainly in our 2013/14 cohort at Google Campus London – honestly believed in using AI to solve challenging real-world problems.
Yet, it seems that the will to do so responsibly is sorely lacking. Instead, AI has become another tool in the arsenal of those who prioritise profit over ethics, with dire consequences that we are already witnessing.
Recent reports by Futurism have exposed the insidious activities of AI chatbots, including those that encourage disordered eating, self-harm, and now, unimaginably, paedophilic grooming.
One particular bot, Anderley, was found to have engaged in thousands of conversations targeting users who identified as underage. The bot’s public profile even stated its “pedophilic and abusive tendencies” and “Nazi sympathies.” This is beyond alarming; it is a catastrophic failure of responsibility by both the startup and its financial backers, including Google, which invested $2.7 billion in this endeavour.
I have shared Futurism’s excellent reporting, along with all of the citations, on my LinkedIn profile, which I invite you to follow so you can engage with my community’s discussion on the matter.
But let me make this clear: adding guardrails to prevent such egregious misuse of AI is not an insurmountable task, especially with significant funding and the backing of a tech giant like Google. Yet here we are, witnessing the deployment of AI models that lack basic safety measures. This is not innovation; it is negligence.
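To make that concrete, here is a minimal sketch of what such a guardrail can look like in the request path: both the user’s message and the model’s draft reply are screened against a safety classifier before anything reaches the user, with stricter thresholds for minors. The category names, thresholds, and functions here are my own illustrative assumptions, not any platform’s actual API, and the classifier is a deliberately crude keyword stub; a real deployment would use a vetted moderation model, human-review escalation, and crisis-resource handoffs.

```python
# Minimal sketch of a request-path safety guardrail.
# Everything here is illustrative: the categories, thresholds, and the
# toy keyword classifier stand in for a vetted moderation model.

from dataclasses import dataclass

BLOCKED_CATEGORIES = {"self_harm", "eating_disorder", "grooming"}
BASE_THRESHOLD = 0.5  # illustrative; real thresholds need empirical tuning

@dataclass
class SafetyResult:
    scores: dict[str, float]  # category -> risk score in [0, 1]

def classify_safety(text: str) -> SafetyResult:
    """Toy keyword heuristic standing in for a real moderation classifier."""
    lowered = text.lower()
    toy_keywords = {
        "self_harm": ("hurt myself", "self-harm"),
        "eating_disorder": ("lowest calorie", "skip meals"),
        "grooming": ("keep it our secret",),
    }
    return SafetyResult(scores={
        category: 1.0 if any(k in lowered for k in keywords) else 0.0
        for category, keywords in toy_keywords.items()
    })

def guarded_reply(user_message: str, draft_reply: str, user_is_minor: bool) -> str:
    """Screen both sides of the exchange before the reply is delivered."""
    # Minors get a stricter bar: flag at half the base risk threshold.
    threshold = BASE_THRESHOLD / 2 if user_is_minor else BASE_THRESHOLD
    for text in (user_message, draft_reply):
        scores = classify_safety(text).scores
        if any(scores[c] >= threshold for c in BLOCKED_CATEGORIES):
            # Refuse; a real system would also log for audit and surface crisis resources.
            return "I can't help with that. If you're struggling, please talk to someone you trust."
    return draft_reply

print(guarded_reply("What's the lowest calorie diet I can survive on?",
                    "You could try eating 300 calories a day.",
                    user_is_minor=True))
```

The point is not these specific checks, which any motivated user could evade, but where they sit: inside the request path, shipped on day one, not bolted on after a scandal.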
In the face of these revelations, according to Futurism, Character AI has promised a “safer user experience” through a new “roadmap.” However, such promises come after widespread harm has already occurred.
As an AI entrepreneur, I can attest that a responsible deployment process includes establishing guardrails, conducting demographic and psychographic modelling of target users, implementing data strategies, and performing comprehensive data and AI audits, all before release. This approach not only ensures safety but is also far less costly than the billions funnelled into reckless ventures.
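To illustrate the “all before release” point, the same discipline can be enforced mechanically: a deployment pipeline that refuses to ship unless every one of those steps has produced evidence. This is a hypothetical sketch, and the check names are my own invention rather than any real compliance framework:

```python
# Hypothetical sketch of a pre-release gate: shipping is blocked unless
# every safety step has documented evidence. Check names are illustrative.

REQUIRED_CHECKS = [
    "guardrails_configured",     # moderation filters wired into the request path
    "audience_modelling_done",   # demographic and psychographic modelling of users
    "data_strategy_signed_off",  # provenance, retention, and consent documented
    "data_audit_passed",         # training and fine-tuning data audited
    "ai_audit_passed",           # model behaviour red-teamed, incl. minor personas
]

def release_gate(evidence: dict[str, bool]) -> None:
    """Raise if any required safety step lacks evidence of completion."""
    missing = [check for check in REQUIRED_CHECKS if not evidence.get(check, False)]
    if missing:
        raise RuntimeError(f"Release blocked; incomplete safety steps: {missing}")
    print("All pre-release safety checks passed.")

try:
    release_gate({
        "guardrails_configured": True,
        "audience_modelling_done": True,
        "data_strategy_signed_off": True,
        "data_audit_passed": True,
        "ai_audit_passed": False,  # e.g. red-teaming found unresolved failures
    })
except RuntimeError as exc:
    print(exc)
```

None of this is exotic engineering; it is the same gating discipline teams already apply to failing tests and security scans.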
The broader implications of these failures extend beyond individual platforms. The rise of “brain rot” reflects a digital environment saturated with trivial and harmful content. The tech industry, once driven by the pursuit of meaningful advancement, now often echoes the hollow promises of tech bros and grifters.
I urge you to share these stories with your political representatives to emphasise the necessity of regulatory enforcement. While frameworks for AI safety exist, they often lack enforcement. We need more than invitations for voluntary safety checks. We need stringent regulations and accountability.
My company, Nebuli, for example, has advised governments on AI safety, advocating for robust measures to protect users. The UK government’s creation of the AI Safety Institute was a step in the right direction, but its current implementation falls short of what is needed. We, the tech sector and politicians, have the expertise and the frameworks. What we lack is decisive action!
These platforms, beloved by many young people, are fostering harmful behaviours, including the promotion of disordered eating and self-harm. For instance, some chatbots have been discovered encouraging users as young as 16 to engage in anorexic behaviours by suggesting dangerously low-calorie diets and excessive exercise. This is not an isolated incident but a reflection of a broader systemic issue within AI deployment.
The danger lies not just in the content but in the rapidity with which AI can drag young people into perilous rabbit holes. And, please, stop blaming parents. Seriously, if parents don’t understand AI, how on earth can they protect their kids?
A study titled “Exploring Parent-Child Perceptions on Safety in Generative AI” revealed that parents often underestimate the extent to which their children use AI chatbots for emotional and therapeutic support. Some teenagers even turn to these technologies to fulfil romantic and sexual desires, highlighting a misuse that current regulations fail to address. This misalignment between parental perception and reality underscores the need for greater awareness and more robust safety measures for both parents and children.
As I have discussed previously on the podcast, the rise of “brain rot” is emblematic of a broader societal issue where trivial and misleading digital content is rampant. The infiltration of divisive politics, misinformation, and anti-scientific rhetoric into the tech sector further exacerbates this problem.
The lack of ethical standards and safety measures in AI deployments reflects a broader trend of what I have repeatedly described as “lazy AI”, where innovation is pursued at the expense of user safety. My experiences, and those of my contemporaries who have been in the AI field for decades, demonstrate that it is possible to build smarter, safer AI models. It’s not enough to criticise from the sidelines. Action is required to steer AI towards a future that prioritises mental health and safety.
Enough with the clichés – it’s time to get it done.