Last week, Australia’s eSafety Commissioner issued legally enforceable transparency notices to four of the largest gaming platforms in the world, demanding they explain how they are detecting and preventing the grooming and radicalisation of children on their services. The action is welcome. It is also, I think, an inflection point that exposes how lopsided our public conversation about AI and child safety has become.
In the same statement, Commissioner Julie Inman Grant made an observation that should have prompted more attention than it received:
“What we often see after these offenders make contact with children in online game environments, they then move children to private messaging services.”
This is not a footnote. It is a textbook description of displacement, and it tells us almost everything we need to know about why the next phase of platform safety has to look very different from the last one.
The displacement concern is real, but it is also well understood. Criminologists have been studying displacement since at least Reppetto's 1976 paper, which laid out six pathways through which thwarted offenders adapt: they move in space, in time, to different targets, to different methods, to different crime types, or are replaced by new offenders.
The fear that situational interventions simply push crime around the corner has been one of the most persistent critiques of preventative policing for half a century. The empirical evidence, however, has been kinder to prevention than the intuition suggests. Guerette and Bowers' systematic review in Criminology (2009) examined 102 evaluations of situational crime prevention, generating 574 distinct outcome observations. Displacement was detected in roughly 26% of observations.
The opposite phenomenon, what Clarke and Weisburd (1994) termed diffusion of benefits, in which prevention spills protective effects beyond the targeted environment, was detected in 27% of observations. Where displacement did occur, it was almost always partial: smaller in magnitude than the crime reduction in the targeted setting. Hesseling's earlier 1994 review reached substantively the same conclusion.
The orthodox criminological position, then, is not that displacement is inevitable. It is that displacement is one possible outcome among several, that it tends to be incomplete when it happens, and that designing out opportunity at the point of first contact remains the most effective intervention available, particularly for offenders whose offending is opportunity-driven rather than commission-driven.
This matters because the gaming-platform-to-encrypted-messenger pipeline that Inman Grant describes is precisely the kind of tactical and spatial displacement the literature anticipates. The policy question is not whether we should worry about it. It is what we do once we accept it as an empirical reality. And that is where our conversation about AI has gone strangely quiet.
The asymmetry in how we talk about AI
If you spent the last twelve months reading about AI and child safety, you would be forgiven for thinking that large language models exist almost exclusively as a threat. The discourse in regulatory circles, in the press, and on this platform is dominated by what offenders are doing with generative AI: synthetic CSAM, deepfaked intimate images, scaled grooming scripts, AI-personated minors used in extortion, “nudify” services. eSafety’s own recent commentary on generative AI and child safety is largely framed around weaponisation. These concerns are entirely warranted, but they are also only half of the picture.
Comparatively little attention is paid to the rapidly maturing body of academic work demonstrating that the same class of models is now extraordinarily good at detecting the very behaviours we are trying to disrupt:
- A 2025 systematic meta-analysis published in Scientific Reports assessed machine learning approaches to grooming detection and reported strong performance across multiple model classes, with deep learning approaches achieving F1 scores well above traditional baselines (a short sketch of what an F1 score measures follows this list).
- Hamm and McKeever's 2025 study in Frontiers in Pediatrics compared transformer-based models against traditional classifiers on grooming chat logs, finding that contemporary deep learning models materially improve sensitivity to the conversational tone predators use, including manipulation, secrecy, and isolation patterns, rather than relying on lexical signals alone.
- Nguyen et al. (2023) at Monash University's AiLECS Lab demonstrated that fine-tuned open-weights LLMs (Llama 2, in their case) achieve consistent, robust classification performance on predatory chat detection across heterogeneous datasets, addressing cross-dataset generalisation, one of the longest-standing weaknesses in this field.
- A 2025 survey in Applied Sciences covering generative AI methods for detecting child exploitation crimes documents a step-change in capability over the past 24 months, a window during which most of the public-facing discourse has remained focused on the offensive use of these same tools.
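For readers who do not live in classifier metrics: F1 is the harmonic mean of precision (how many flagged conversations are genuinely predatory) and recall (how many predatory conversations are caught). A minimal sketch, with illustrative numbers rather than figures from any of the studies above:

```python
# Minimal sketch of the F1 metric cited above.
# The counts are illustrative, not taken from any of the studies.

def f1_score(tp: int, fp: int, fn: int) -> float:
    """Harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# e.g. 90 true positives, 10 false positives, 20 missed conversations
print(round(f1_score(tp=90, fp=10, fn=20), 3))  # 0.857
```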
None of this is unfettered optimism. The literature is candid about the constraints: imbalanced datasets (predatory conversations are statistically rare), benchmark obsolescence (the PAN 2012 corpus is showing its age), the persistent risk of false positives at scale, and the speed at which offender language adapts. But the trajectory is unmistakable.
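The false-positive constraint deserves one worked example, because it is the one that bites at platform scale. The numbers below are assumptions chosen for illustration, not measurements from the literature:

```python
# Why "false positives at scale" is the binding constraint: a base-rate
# sketch. Prevalence, sensitivity, and false positive rate are all
# assumed values for illustration.

conversations_per_day = 10_000_000
prevalence = 1e-4            # 1 in 10,000 conversations is predatory (assumed)
sensitivity = 0.95           # true positive rate (assumed)
false_positive_rate = 0.01   # 1% of benign conversations flagged (assumed)

predatory = conversations_per_day * prevalence
true_positives = predatory * sensitivity
false_positives = (conversations_per_day - predatory) * false_positive_rate

precision = true_positives / (true_positives + false_positives)
print(f"flags/day: {true_positives + false_positives:,.0f}, "
      f"precision: {precision:.1%}")
# ~100,940 flags/day at ~0.9% precision: even a strong model needs
# layered review between detection and any enforcement action.
```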
We now have models that understand intent, context, age-asymmetric power dynamics, and the discourse markers of grooming (secrecy enforcement, isolation from caregivers, gradual desensitisation) at a level no keyword filter or rule-based system has ever approached.
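To make that concrete, here is the rough shape of conversation-level scoring with a fine-tuned transformer. The checkpoint name is a hypothetical placeholder, not a published model, and this is a sketch of the general pattern, not any cited system's implementation:

```python
# Sketch of conversation-level risk scoring with a fine-tuned transformer.
# "example-org/grooming-risk-classifier" is a hypothetical checkpoint name,
# standing in for whatever fine-tuned model a platform actually deploys.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="example-org/grooming-risk-classifier",  # hypothetical checkpoint
)

# Score the conversation as a whole, not message by message: the signals
# the literature describes (secrecy, isolation, desensitisation) are
# trajectories, not keywords.
conversation = " [SEP] ".join([
    "you're really mature for your age",
    "do your parents check your phone?",
    "this is just between us, ok?",
])
result = classifier(conversation, truncation=True)[0]
print(result)  # e.g. {'label': 'HIGH_RISK', 'score': 0.97}
```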
What this means for the eSafety notices
Transparency reporting is a useful instrument. It surfaces what platforms are doing, exposes gaps, and creates a record against which future regulatory action can be calibrated. But transparency alone does not detect a single grooming conversation. It does not interdict a single recruitment funnel into extremist spaces.
If we take displacement theory seriously, and the evidence base says we should, the implication is that prevention has to happen at the point where contact is initiated, not after offenders have successfully migrated children to encrypted environments where intervention becomes far more difficult, ethically and technically.
That point of first contact is, overwhelmingly, on the very platforms eSafety has just put on notice. The honest version of the policy ask, then, is not “tell us what you are doing.” It is:
Deploy AI-driven detection at the point of first contact, at scale, in real time, with age-calibrated risk scoring, before the conversation has migrated anywhere harder to reach.
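What age-calibrated risk scoring can mean in practice is easier to show than to describe. The sketch below is illustrative only: the thresholds, field names, and tiers are assumptions, not anyone's production policy.

```python
# Sketch of an age-calibrated escalation gate at first contact.
# All thresholds and field names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Contact:
    model_risk: float         # probability from the detection model
    recipient_age: int        # declared or estimated age of the recipient
    sender_age_estimate: int
    is_first_contact: bool

def escalation_tier(c: Contact) -> str:
    # Lower the bar for intervention when the recipient is young, the
    # age gap is large, and the message is unsolicited first contact.
    threshold = 0.90
    if c.recipient_age < 16:
        threshold -= 0.15
    if c.sender_age_estimate - c.recipient_age >= 8:
        threshold -= 0.10
    if c.is_first_contact:
        threshold -= 0.05
    if c.model_risk >= threshold:
        return "human_review"   # interdict before migration off-platform
    if c.model_risk >= threshold - 0.2:
        return "friction"       # e.g. delay, warning, restricted replies
    return "allow"

print(escalation_tier(Contact(0.78, 13, 34, True)))  # human_review
```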
That is a technical capability that exists today. I know this first-hand: it is what we have built at Tuteliq.
One integration. Three lines of code. And you go from zero coverage to full compliance in days. Our models analyse how conversations evolve: the trajectory, the power dynamics, the emotional tenor over time. We detect grooming, bullying, coercive control, fraud, romance scams, radicalisation, and sixteen other threat categories. Across 27 languages. Across text, voice, image, and video. All in under 400 milliseconds.
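I will not publish our actual interface here, but in generic terms an integration of that kind looks something like the sketch below. The endpoint, payload, and response fields are invented for illustration and are not Tuteliq's real API.

```python
# Illustrative only: a generic shape for this kind of integration.
# The endpoint, payload, and response fields are invented for this
# sketch and are not Tuteliq's actual interface.
import requests

resp = requests.post(
    "https://api.example-safety-vendor.com/v1/analyse",  # hypothetical
    json={"conversation_id": "abc123",
          "messages": ["hey, how old are you?"]},
    headers={"Authorization": "Bearer <API_KEY>"},
    timeout=2,
)
print(resp.json())  # e.g. {"risk": "elevated", "categories": ["grooming"]}
```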
So not only can it be done, it has been done, with robust empirical research to back it up. The gap eSafety is highlighting is not a research gap; it is a deployment gap. The tools exist, and the literature on their efficacy is no longer speculative. The barrier is institutional willingness on the part of platforms to integrate detection upstream of the harm rather than downstream of the headline.
Inman Grant is right that gaming platforms cannot be allowed to function as on-ramps to abuse and radicalisation. She is also right that the contact often migrates elsewhere. Both things can be true. The mistake would be to read the second observation as a counsel of despair when the criminological evidence reads it as a counsel of where to act.
It is past time we held two thoughts together with equal seriousness: that AI is being weaponised against children, and that AI is the most powerful child-safety capability we have ever built. The first half of that sentence is the conversation we have been having for two years. The second half is the conversation that will determine whether the next two years are any better than the last.
If this resonates and you are working on platform safety, child protection, or trust & safety policy, I would welcome the conversation.
References
Clarke, R. V., & Weisburd, D. (1994). Diffusion of crime control benefits: Observations on the reverse of displacement. Crime Prevention Studies, 2.
Guerette, R. T., & Bowers, K. J. (2009). Assessing the extent of crime displacement and diffusion of benefits. Criminology, 47(4).
Hamm, L., & McKeever, S. (2025). Comparing machine learning models with a focus on tone in grooming chat logs. Frontiers in Pediatrics.
Hesseling, R. (1994). Displacement: A review of the empirical literature. Crime Prevention Studies, 3.
Nguyen, T. T., et al. (2023). Fine-tuning Llama 2 large language models for detecting online sexual predatory chats and abusive texts. AiLECS Lab, Monash University.
Reppetto, T. A. (1976). Crime prevention and the displacement phenomenon. Crime & Delinquency, 22(2).
Effectiveness of machine learning methods in detecting grooming: a systematic meta-analytic review (2025). Scientific Reports.
A Survey of Generative AI for Detecting Pedophilia Crimes (2025). Applied Sciences, 15(13).