
Technologies to enforce the Australian government’s social media ban for under 16s are “private, robust and effective”. That’s according to the preliminary findings of a federal government-commissioned trial that has nearly finished testing them.
The findings, released today, may give the government greater confidence to forge ahead with the ban, despite a raft of expert criticism. They might also allay some Australians' concerns about the privacy and security implications of the ban, which is due to begin in December.
For example, a report based on a survey of nearly 4,000 people and released by the government earlier this week found nine out of ten people support the idea of a ban. But it also found a large number of people were “very concerned” about how the ban would be implemented. Nearly 80% of respondents had privacy and security concerns, while roughly half had concerns about age assurance accuracy and government oversight.
The trial’s preliminary findings paint a rosy picture of the potential for available technologies to check people’s ages. However, they contain very little detail about specific technologies, and appear to be at odds with what we know about age-assurance technology from other sources.
From facial recognition to hand movement recognition
The social media ban for under 16s was legislated in December 2024. A last-minute amendment to the law requires technology companies to provide “alternative age assurance methods” for account holders to confirm their age, rather than relying only on government-issued ID.
The Australian government commissioned an independent trial to evaluate the “effectiveness, maturity, and readiness for use” of these alternative methods.
The trial is being led by the Age Check Certification Scheme – a company based in the United Kingdom that specialises in testing and certifying identity verification systems. It includes 53 vendors that offer a range of age assurance technologies to guess people’s ages, using techniques such as facial recognition and hand-movement recognition.
According to the preliminary findings of the trial, “age assurance can be done in Australia”.
The trial’s project director, Tony Allen, said “there are no significant technological barriers” to assuring people’s ages online. He added the solutions are “technically feasible, can be integrated flexibly into existing services and can support the safety and rights of children online”.
However, these claims are hard to square with other evidence.
High error rates
Yesterday the ABC reported the trial found face-scanning technologies “repeatedly misidentified” children as young as 15 as being in their 20s and 30s. These tools could only guess children’s ages “within an 18-month range in 85% of cases”. This means a 14-year-old child might gain access to a social media account, while a 17-year-old might be blocked.
This is in line with results of global trials of face-scanning technologies conducted for more than a decade.
An ongoing series of studies of age estimation technology by the United States’ National Institute of Standards and Technology shows the algorithms “fail significantly when attempting to differentiate minors” of various ages.
The tests also show that error rates are higher for young women than for young men, and higher for people with darker skin tones.
These studies show that even the best age-estimation software currently available – Yoti – has an average error of 1.0 years. Other software options mistake someone’s age by 3.1 years on average.
This means, at best, a 16-year-old might be estimated to be 15 or 17 years old; at worst, they could be estimated to be 13 or 19. These error rates mean a significant number of children under 16 could access social media accounts despite a ban being in place, while some people over 16 could be blocked.
Yoti also explains that businesses needing to check an exact age (such as 18) can set a higher age threshold (such as 25), so fewer people under 18 get through the check.
This approach would be similar to that taken in Australia’s retail liquor sector, where sales staff verify ID for anyone who appears to be under the age of 25. However, many young people lack the government-issued ID required for an additional age check.
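The threshold-buffering logic above can be sketched numerically. As a rough illustration only — assuming, hypothetically, that age-estimation error is normally distributed, and treating the 3.1-year average error reported above as its standard deviation — a short simulation shows how raising the decision threshold cuts the share of under-age users who slip through:

```python
import random

random.seed(1)

# Hypothetical assumption: treat the reported 3.1-year average error
# as the standard deviation of Gaussian estimation noise.
ERR_SD = 3.1

def passes_check(true_age: float, threshold: float) -> bool:
    """Simulate one age check: the estimator returns the true age plus
    Gaussian noise; the check passes if the estimate meets the threshold."""
    estimate = true_age + random.gauss(0, ERR_SD)
    return estimate >= threshold

def pass_rate(true_age: float, threshold: float, trials: int = 100_000) -> float:
    """Fraction of simulated checks an under-age user would pass."""
    return sum(passes_check(true_age, threshold) for _ in range(trials)) / trials

# Share of 16-year-olds (under a legal age of 18) who get through
# when the threshold sits exactly at 18 versus buffered up to 25.
print(f"16-year-old vs threshold 18: {pass_rate(16, 18):.1%}")
print(f"16-year-old vs threshold 25: {pass_rate(16, 25):.1%}")
```

Under these assumed numbers, a substantial minority of 16-year-olds would pass a threshold set at 18, while almost none would pass one set at 25 — the same reasoning behind liquor retailers checking ID for anyone who looks under 25.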
It’s also worth remembering that in August 2023, the Australian government acknowledged that the age assurance technology market was “immature” and could not yet meet key requirements, such as working reliably without circumvention and balancing privacy and security.
Outstanding questions
We don’t yet know exactly what methods platforms will use to verify account holders’ ages. While face-scanning technologies are often discussed, they could use other methods to confirm age. The government trial also tested voice and hand movements to guess young people’s ages. But those methods also have accuracy issues.
And it’s not yet clear what recourse people will have if their age is misidentified. Will parents be able to complain if children under 16 gain access to accounts, despite restrictions? Will older Australians who are incorrectly blocked be able to appeal? And if so, to whom?
There are other outstanding questions. What’s stopping someone who’s under 16 from getting someone who is over 16 to set up an account on their behalf? To mitigate this risk, the government might require all social media users to verify their age at regular intervals.
It’s also unclear what level of age estimation error the government may be willing to accept in implementing a social media ban. The legislation says technology companies must demonstrate they have taken “reasonable steps” to prevent under 16s from holding social media accounts. What is considered “reasonable” is yet to be clearly defined.
Australians will have to wait until later this year for the full results of the government’s trial to be released, and to know how technology companies will respond. With less than six months until the ban comes into effect, social media users still don’t have all the answers they need.
By Lisa M. Given, Professor of Information Sciences & Director, Social Change Enabling Impact Platform, RMIT University