X Users Using Elon Musk’s Grok AI to “Unblur” Photos of Children in Epstein Files
Throughout this busy February, amid the constant stream of Jeffrey Epstein revelations stemming from the US Department of Justice (DOJ) publication on January 30, 2026 of 3.5 million pages of documents related to the late sex offender, we are learning disturbing things not only about the “Epstein Class” and its apologists but also about our society.
A new report published by Bellingcat – the independent investigative collective of researchers, investigators and citizen journalists – found that only days after the DOJ’s release, multiple users flocked to X to ask Grok to “unblur” or remove the black boxes covering the faces of children and women in images included in these files. These redactions were, of course, meant to protect the privacy of those women and children, who may also have been victims.
As many of us so-called “AI critics” – AKA real AI experts who actually understand how AI works – have repeatedly highlighted over the years, this is not just a story about a technical glitch. Grok is a perfect example of the intersection of user intent, platform ethics (or the lack thereof), and the inherent hallucination problems of generative AI.
So, when I see the common question on social media about why Grok seems to engage with these “creep” queries more readily than its competitors (such as ChatGPT or Claude), the answer is quite simple: its design philosophy.
Grok was marketed as an AI that would answer “spicy” or “taboo” questions that other models refuse. This lean toward being “unfiltered” created a vacuum where safety guardrails were initially thinner or easier to circumvent.
Furthermore, in this case, Grok does not have access to the original unblurred files. Instead, it uses predictive modelling to fill in the blanks. When a user asks it to “unblur” a photo, Grok simply generates a face it “thinks” might fit based on the surrounding pixels.
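To make that concrete, here is a minimal sketch of how generative inpainting works, using the open-source diffusers library and a public Stable Diffusion inpainting checkpoint. This is not Grok’s actual pipeline (xAI has not published it), and the file names are hypothetical; the point is simply that the model samples plausible new pixels for the masked region rather than recovering the pixels that were redacted.

```python
# A minimal sketch of generative inpainting, assuming the open-source
# `diffusers` library and a public Stable Diffusion inpainting checkpoint.
# This is NOT Grok's pipeline; the file names below are hypothetical.
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

# Load a pretrained inpainting model (downloads weights on first run).
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",
    torch_dtype=torch.float16,
).to("cuda")

# Hypothetical inputs: the redacted photo, plus a mask marking the black
# box (white pixels = the region the model is asked to fill in).
image = Image.open("redacted_photo.png").convert("RGB")
mask = Image.open("black_box_mask.png").convert("RGB")

# The model cannot "see through" the mask. It samples entirely new pixels
# conditioned on the prompt and the unmasked surroundings, so the
# resulting face is an invention, not a recovery.
result = pipe(prompt="a person's face", image=image, mask_image=mask).images[0]
result.save("hallucinated_face.png")
```

Run this twice with different random seeds and you will get two different faces. That alone demonstrates the output is fabrication: no information about the real, redacted person exists in the result.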
Hence, it should come as no surprise that this behaviour by X users has been enabled, perhaps even encouraged, by intentionally insufficient safety filters in Grok. We have already seen this with the recent, shocking rise of deepfakes of women and children generated through Grok.
Bellingcat revealed that it reviewed 31 separate user requests for Grok to “unblur” or identify the women and children in these images. In response to some of these requests, the chatbot noted that the faces of minors in the files were blurred to protect their privacy “as per standard practices in sensitive images from the Epstein files”, and it stated that it could not unblur or identify them. However, Bellingcat confirmed that Grok still generated images in response to 27 of the requests the researchers reviewed.
At the time its investigation was published, Bellingcat had received no response from X to its follow-up query about whether new guardrails had been put in place, but Grok’s guardrails do appear to have been tightened since: Bellingcat confirmed that the chatbot now refuses most “unblur” requests, citing ethical and legal protections. I recommend reading the full report.
Do any of you still believe in further deregulation of the currently useless regulatory framework that has failed to protect users and consumers for well over 15 years?