What’s the rationale behind Australia’s ban on under-16s using social media platforms, and how effective is it likely to be? An interview with Professor Michael Dezuanni.
Back in the day, there used to be a lot of talk about ‘digital childhood’. I wasn’t the first or the only person to question such ideas: the notion of a particular generation or stage of life being defined or determined by technology is self-evidently problematic.
Yet in the past few years, increasing numbers of commentators, campaigners and policy-makers have become enthused about the possibility of returning to non-digital childhoods. The frequently expressed idea (and it’s by no means a new one) is that media and technology are stealing, or even destroying, childhood itself.
And so we seem to be moving into a new age of bans and restrictions. These take various forms, as I’ve discussed many times before on this blog. Guidance is issued about the need to reduce children’s ‘screen time’ (an unhelpful concept, to be sure). There are bans on particular devices, like mobile phones in schools; bans on content that is deemed to be dangerous or inappropriate, as in the UK’s Online Safety Act; and bans on children accessing specific online services, like social media platforms.
The focus in this post is on the latter, and specifically on the Australian government’s move to ban children under the age of sixteen from holding social media accounts. This isn’t an isolated move: similar proposals are currently being developed in European countries like France, Denmark, Greece and Spain, although not yet in the UK. However, the Australian legislation – which will be implemented next month (December 2025) – is probably the first nationwide ban to be enacted: it sets the age bar comparatively high, and it is likely to extend to YouTube, which is currently young people’s major source of news.
I sent a few questions about this to a leading Australian academic in the field, Professor Michael Dezuanni. Michael works at Queensland University of Technology (QUT) in Brisbane, where he is the Program Leader on ‘Creating Better Digital Futures’ at the Digital Media Research Centre. He has published several very interesting books at the intersection between education and digital media, including one on Minecraft and play, and most recently a coauthored study of online ‘reading cultures’. He’s also currently the chair of the Australian Media Literacy Alliance. As a true professional, he’s provided some helpful links if you would like to read further.
. . . . .
Michael, I’m hoping you might begin by explaining the political process that led to the legislation. Where did the push for legislation come from? Why was government legislation seen to be necessary? What other alternatives (like imposing codes of guidance on the platform companies) were considered, and why were they dropped? Why was the age set at 16, rather than (say) 13, which is the official age for children to have an account with some of these platforms?
There has been a general sense in Australia in recent years that ‘something should be done’ about the assumed negative influence of digital technologies and social media on young people. This is partly an extension of long-standing concerns about cyberbullying, online predators and children having access to adult content. I think it is fair to say that many adults feel a sense of frustration about the lack of effective regulation of social media companies. In addition, as is the case in many parts of the world, there have been some tragic but isolated cases of younger teens committing suicide, which have been connected to their online activity.
A few of these cases became central to the so-called ‘36 months campaign’ founded in 2024 by Australian radio personality Michael ‘Wippa’ Wipfli and Rob Galluzzo, the CEO of a media production company. Wipfli and Galluzzo lobbied the federal government, including the Prime Minister directly, to argue that younger teenagers should be ‘free from’ social media influence for an additional 36 months, with access from age 16 rather than 13. I am not aware of any specific research promoted by Wipfli and Galluzzo, or the government for that matter, apart from broad reference to Jonathan Haidt’s ‘The Anxious Generation’, which they claim justifies a delay in access to social media to solve the issues they identify.
The 36 months campaigners are quite proud of their strategy of using high profile tragic cases to further their cause. Our Labor Prime Minister, Anthony Albanese, was very receptive to the group’s lobbying, and his subsequent 2024 announcement about the ban is popular, with some surveys suggesting that it is supported by 77% of Australian adults. Notably, Albanese announced the ban in the year leading up to the 2025 Federal election and, disappointingly, there was little sense that any alternative policy was considered, such as more stringent or targeted digital platform regulation. In addition, there is no evidence at all that the ban will actually stop tragedies from occurring, that young people will access less adult content, or even that they will spend less time online. We have recently conducted over 80 interviews with 12- to 15-year-olds. While we haven’t published on this yet, the interviews suggest that teens’ online behaviour is unlikely to change following the ban and that they will simply migrate to other platforms or messaging services to connect with friends, content and experiences. Of course, these alternative spaces are likely to be less visible to adults and less regulated.
. . . . .
I wonder if you could say something about the evidence that was drawn upon here. I’m sure you’d agree that evidence of harm from social media is contested. So what kinds of evidence (especially research evidence) has the government cited in support of the ban? Which experts are being seen as legitimate authorities?
Jonathan Haidt’s book ‘The Anxious Generation’ has had undue influence on politicians and policy makers in Australia. At the height of debate about the legislation in October 2024, the Labor State Governments in New South Wales and South Australia hosted what became a de facto national social media summit, during which U.S. imports – Haidt, along with high profile psychologist Jean Twenge, and Meta whistleblower Frances Haugen – were featured as keynote speakers. Meanwhile, as reported by my QUT colleague Axel Bruns, Australian researchers, youth advocates, and representatives from non-profit organisations who work on the front lines with young people, were relegated to breakout sessions and excluded from the conference live-stream.
The Summit organisers didn’t seem to find it necessary to invite someone of Bruns’ standing to the main stage, even though he is an Australian Laureate professor of international repute. The organisers also deemed it unnecessary to feature anyone from Australia’s world leading (and handsomely government funded) national Centre of Excellence for the Digital Child. It is quite clear that the Summit’s purpose was not to enable a national conversation, or to debate the research evidence, but rather to rally behind Federal Labor’s legislation.
South Australian Premier, Peter Malinauskas, opened the Adelaide event by declaring that ‘the science is settled’, referring to Twenge’s and Haidt’s work. Of course, Haidt’s argument that there is a direct correlation between the rise of the smartphone and what he identifies as a crisis in youth mental health has been well and truly contested, as has Twenge’s work. There is considerable opposition to the ban amongst many Australian researchers, including across interdisciplinary lines. Many social psychologists are just as concerned about the bluntness of the legislation as are education and media and communications researchers.
During October 2024, over 100 Australian and prominent international researchers signed an open letter, addressed to the Australian Prime Minister and state Premiers, raising concerns about the legislation, arguing that the ban is unlikely to address the problems it purports to solve. I have been part of a group of researchers at QUT and within the ARC Centre of Excellence for the Digital Child who have consistently argued against the ban. Part of this effort has been the development of an explainer document that aims to communicate our objections, targeted at journalists, policy makers and parents.
. . . . .
Obviously, lots of young people under the age of 16 are highly active on social media, and many of them are likely to be worried about the prospect of an instant cut-off. What is likely to happen to the content they have created on these platforms, and the social networks they have developed there? Will they just lose everything?
The truth is, no one really knows at this point what will happen on December 10 when the legislation comes into effect. It is plausible that 15-year-old teens who have been posting on a platform like TikTok since age 13 could lose all their content. If that happens, it will be highly unfair to young people who undertake a range of legitimate, and often very labour intensive, activities on social media, from the frivolous to the profound. There is also a considerable risk that many teens who have established networks of support, communities, and friendship groups online may be cut off. That would be particularly concerning for young people who may be isolated from support in their local communities. As advocates have pointed out, many LGBTQI+ young people find vital communities of support online. Several legal experts have also recently argued that the ban may lead to a high court challenge on the basis that it could infringe the right to freedom of political communication, drawing attention to young climate activists like 16-year-old Ivy Sheng, who has been politically active on social media from a young age.
It is incredibly frustrating that just weeks out from the legislation coming into effect we have little sense of how the ban will actually play out, which platforms it will include, and how the social media companies will comply. For quite some time, the eSafety Commissioner, Julie Inman-Grant, who is required to implement the legislation, identified the main social media companies as the likely targets, including Facebook, Instagram, TikTok, Snapchat and X. YouTube was initially excluded on the basis that it is primarily a video sharing platform. Several months ago, Inman-Grant suggested YouTube could be included; and very recently she indicated that she had asked an extended list of 16 companies to complete a self-assessment to see how their services align to the legislative requirements. That list includes Roblox, Discord, Lego Play, Steam, Twitch and other popular services. She also indicated that in future more companies would be asked to self-assess and potentially be externally assessed if they are suspected of non-compliance. The ‘ban’ could, in effect, ultimately extend to much of children’s digital culture. Meanwhile, there are odd inconsistencies in the legislation. For instance, under 16s will be able to view YouTube and TikTok content, so long as they don’t have their own account.
. . . . .
Then there are questions about implementation. In particular, how is the government proposing to verify the age of account holders or users who access these platforms? Will all adults have to be subject to age verification? How robust are the systems, especially in respect of likely cyber-attacks?
How the ban will be implemented, particularly through age verification, is one of the more controversial aspects. The federal government conducted a trial which invited companies to test approaches to age verification using a range of techniques, but principally using facial recognition. Unsurprisingly, the trial flopped in terms of the development or identification of any system that can accurately and consistently distinguish age, particularly for the very age groups to be impacted by the ban. It turns out that while it is easy enough for facial recognition to distinguish between a fifty-year-old and a fourteen-year-old, it cannot consistently distinguish between fourteen- and seventeen-year-olds. Other technology is likely to be used as part of the process, such as age inference technology based on online user behaviour. That is, the tech companies may try to estimate age based on behaviour such as time of use (outside school hours), the kinds of content a user is accessing, the age of online friends or connections, and so on. The backup to these technological processes remains proof of age through official documents, such as passports or birth certificates.
The eSafety Commission has stated that any data collected by the companies to determine age can only be held for a short period following proof of age but concerns obviously remain about the security of any data collected. The Office of the Australian Information Commissioner has recently released advice about its expectations related to the ban, which sets out to protect people’s privacy, with a specific focus on children’s privacy. In addition, the eSafety Commissioner has said that individuals will not be punished for being on social media, and that it is up to the companies to demonstrate that they are taking action to remove under-age accounts or face fines of up to AU$50 million. It is quite possible that following December 10, some (or many) over 16s will have their access disrupted in cases where the platform ‘suspects’ users are under 16, until their age can be verified. At present, it seems unlikely that all adults will be required to provide proof of age, but the level of disruption to service and how widespread this will be remains unclear.
. . . . .
Finally, what do you think will be the implications for media literacy education? You’ll recall in my Media Education Manifesto, I was offering some ways of extending existing approaches to encompass social media. Is all that superfluous now? Will it be possible to teach about social media in a situation where young people don’t have access to those media?
The ban makes media literacy education more important than ever because children and young people are not going to stop using social and digital media. The ban draws attention to the definitions we apply to digital media products and services and the reality that it is becoming more difficult to make distinctions between media forms as gaming platforms, social media, and messaging services merge. The recent emergence of Generative AI chatbot platforms like Character.ai further complicates things. We need children and young people, parents and educators to become more knowledgeable about the complexities of the digital media environment, including the variety of business models that have emerged; how internet fame or ‘micro-celebrity’ relates to fandoms and parasocial friendships; how online communities police representational practices; and how information and learning opportunities are made available, to name just a few.
I agree entirely with the argument you present in the Media Education Manifesto that we can learn a great deal from previous media literacy approaches and apply them to social media. An approach that applies conceptual thinking remains highly relevant and can be adapted to address the more social and relational elements of how social media platforms invite participation. Perhaps one benefit of the social media ban is that it may draw renewed attention to how we get the balance right between regulation and education, a tension with which media education has had a long relationship.
For the past four years, I have been leading work on how to think about regulation, education and children’s digital platforms at the ARC Centre of Excellence for the Digital Child, through a project called The Children’s Internet. The project takes its inspiration from children’s television. Of course, it is impossible to regulate the internet in the same way that it was possible to regulate television. Nonetheless, my colleagues and I think that it is important not to cede children’s and young people’s digital media experiences to the big technology companies. We can do much more to improve the digital media environment, and media literacy is central to this. We outline 17 principles for a better Children’s Internet under the areas of quality (which we appreciate is a contested idea); slow, child-centred design of digital products and services made for children; effective and targeted child-centred regulation; and media literacy. I’ve been selling this approach to anyone who will listen as an alternative to the social media ban. In more optimistic moments, I hope that when the ban fails to live up to expectations, politicians and policy makers, and the technology companies themselves, will be receptive to more nuanced alternatives.
Thanks so much to Michael for these detailed and thoughtful responses. The Australian ban is undoubtedly going to be one to watch…
