On September 23, 2025, I delivered a keynote at the 19th International Conference “Keeping Children and Young People Safe Online” in Warsaw, Poland. My talk, Protecting Without Excluding, came at a moment when age restrictions dominate policy debates. From parliaments to press headlines, calls are growing for stricter limits on when young people can access social media or online platforms. The impulse is understandable: protecting children’s wellbeing is a shared priority. But as I suggested, relying on age alone risks creating more problems than it solves.
Australia offers a telling example. Beginning in December, anyone under 16 will be barred from holding age-restricted social media accounts, with platforms like YouTube included for the first time. To enforce this, the government has tested age-assurance technologies, and the results raise difficult questions. Teenagers as young as fifteen were misclassified as being in their twenties, Indigenous and Southeast Asian children were more likely to be flagged incorrectly, and women and Black users were often judged older than they are. Instead of cleanly separating adults from children, these tools carry both false-positive and false-negative risks: they exclude some legitimate users while letting others slip through. Age inference, for now at least, is likely to be plagued by the stereotypes built into our AI systems. And age verification, by its very nature, excludes anyone who lacks identification to verify. I do believe that age-based tools have a place, but they are imperfect. My fear is that others see them as the whole solution. They might be part of a solution, but they are surely not everything.
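To make those two failure modes concrete, here is a minimal sketch of how one might audit an age-estimation model per demographic group. Everything in it is an assumption for illustration: the `Sample` structure, the group labels, and the tiny dataset are invented, not drawn from the Australian trial.

```python
# Hypothetical audit of an age-estimation model for the two error types
# discussed above. All names and numbers are illustrative only.
from dataclasses import dataclass

ADULT_THRESHOLD = 16  # the Australian cut-off discussed above


@dataclass
class Sample:
    true_age: int
    predicted_age: int
    group: str  # demographic label, used only for the audit


def audit(samples: list[Sample]) -> dict[str, dict[str, float]]:
    """Per-group rates of the two failure modes:
    - minor_let_through: a minor the model classifies as an adult
    - adult_locked_out: an adult the model classifies as a minor
    """
    stats: dict[str, dict[str, int]] = {}
    for s in samples:
        g = stats.setdefault(s.group, {"minors": 0, "adults": 0,
                                       "let_through": 0, "locked_out": 0})
        if s.true_age < ADULT_THRESHOLD:
            g["minors"] += 1
            if s.predicted_age >= ADULT_THRESHOLD:
                g["let_through"] += 1
        else:
            g["adults"] += 1
            if s.predicted_age < ADULT_THRESHOLD:
                g["locked_out"] += 1
    return {
        group: {
            "minor_let_through_rate": g["let_through"] / max(g["minors"], 1),
            "adult_locked_out_rate": g["locked_out"] / max(g["adults"], 1),
        }
        for group, g in stats.items()
    }


if __name__ == "__main__":
    # Tiny invented dataset: e.g. a 15-year-old read as mid-twenties.
    data = [
        Sample(15, 24, "group_a"),  # minor let through
        Sample(15, 13, "group_a"),  # correctly gated
        Sample(19, 14, "group_b"),  # adult locked out
        Sample(22, 25, "group_b"),  # correctly admitted
    ]
    for group, rates in audit(data).items():
        print(group, rates)
```

The point of such an audit is that a single headline accuracy figure hides exactly the disparities the Australian trials surfaced; the errors only become visible when broken out by group and by direction.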
Europe is also moving quickly in this space. Under Article 28 of the Digital Services Act (work in which I was involved), the European Commission has issued new guidelines and launched a prototype age-verification app that aims to balance safety with privacy. The ambition is clear: to give young people stronger protections without creating new risks around data. I'm fully behind this stance, but I'm still left with two questions:
- Can technical solutions ever keep pace with the realities of digital life?
- Does focusing so heavily on gates distract from deeper design problems?
I believe the answer to question 1 is “no” and to question 2 is “yes”. Consider a recent study of short-form video platforms, which found that potentially unsafe, algorithmically driven short-form content (which finds its way into age-gated spaces) often relies on dark visual cues or anxious emotional tones. That is useful design information: such signals can be built into recommendation systems themselves to detect and demote risky content, a change that could go much further than age-gating alone. It also highlights an uncomfortable truth: keeping children off platforms, or segmenting them by age, does not by default address the ways in which design and recommendation systems shape what they actually see and feel online.
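As a sketch of what that design change might look like, consider a re-ranking step that demotes candidates whose features resemble the risky pattern the study describes. This is a toy illustration under stated assumptions: the feature names (`darkness`, `anxious_tone`), the weights, and the penalty are all invented, and a real system would learn such scores rather than hand-code them.

```python
# Hypothetical recommender tweak: down-rank short-form candidates whose
# visual and emotional features resemble the risky pattern (dark visuals,
# anxious tone), instead of only gating who can watch them.
from dataclasses import dataclass


@dataclass
class Candidate:
    video_id: str
    relevance: float     # upstream relevance score, 0..1
    darkness: float      # 0 (bright) .. 1 (very dark visuals)
    anxious_tone: float  # 0 (calm) .. 1 (highly anxious tone)


def risk_score(c: Candidate) -> float:
    # Simple weighted combination; weights are illustrative assumptions.
    return 0.5 * c.darkness + 0.5 * c.anxious_tone


def rerank(candidates: list[Candidate],
           risk_penalty: float = 0.6) -> list[Candidate]:
    """Sort by relevance discounted in proportion to estimated risk."""
    def adjusted(c: Candidate) -> float:
        return c.relevance * (1.0 - risk_penalty * risk_score(c))
    return sorted(candidates, key=adjusted, reverse=True)


if __name__ == "__main__":
    feed = [
        Candidate("upbeat_clip", relevance=0.80, darkness=0.1, anxious_tone=0.1),
        Candidate("doom_clip", relevance=0.95, darkness=0.9, anxious_tone=0.8),
    ]
    for c in rerank(feed):
        print(c.video_id)
    # The higher-relevance but riskier clip drops below the calmer one.
```

The design choice here is the important part: the intervention sits inside the recommendation pipeline and applies to everyone, so it protects children who slip past an age gate rather than depending on the gate working perfectly.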
In my keynote, I argued that age restrictions and verification tools have a place, but they are not the holy grail. Over-reliance on age verification and bans risks excluding young people from opportunities to learn, connect, and participate, while creating a false sense of safety for parents and policymakers. A more effective approach is to design for digital competence and wellbeing from the start: building environments that are safe by default, offering supportive structures that guide use, and equipping children with the skills they need to navigate digital spaces confidently.
Protecting children online will always require a mix of measures. But if we want solutions that are sustainable, equitable, and truly rights-based, we must move beyond exclusion and focus on designing digital spaces where children can both be safe and thrive.
For more about the event, visit the official conference website. Contact me for a copy of the slide deck.
