Children have been lying about their ages online for decades, but alternative social-media networks are exploring ways that artificial intelligence (AI) tools can catch them out by automatically verifying their ages without sensitive identity documents.
AI-based age verification has become a core method of user authentication for Yubo, a French social networking app designed to facilitate online “friendships”. The company entered the market knowing full well that its target audience of teens was both vulnerable to older users and likely to lie about their own ages.
Use of AI-based age and identity tools from Yoti helped social-networking firm Yubo block more than 600,000 false profiles last year, chief operating officer Marc Antoine-Durand told Information Age.
Yubo users are required to upload photos of themselves, which are checked against Google archives and analysed using an AI algorithm that has been trained on large numbers of age-verified photographs – and can deliver an estimated age within two seconds.
Grouping 13-to-17 year old users into distinct communities – a process known as ‘age-gating’ – “can be challenging because it creates parallel communities on the same app,” Antoine-Durand said, “but we think it’s very important for safety”.
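The age-gating logic described above can be sketched in a few lines. This is a minimal illustration, not Yubo’s actual implementation: the thresholds follow the article (under-13s excluded, 13-to-17s in a teen community), and the `estimated_age` input stands in for whatever an age-estimation model such as Yoti’s would return.

```python
def assign_community(estimated_age: float) -> str:
    """Route a user into a community based on an AI-estimated age.

    Hypothetical sketch: an age-estimation model (e.g. one trained on
    age-verified photographs) supplies `estimated_age`; this function
    only applies the gating thresholds described in the article.
    """
    if estimated_age < 13:
        # Below the minimum age for the platform - profile is blocked.
        return "blocked"
    elif estimated_age < 18:
        # 13-to-17 year olds are kept in a parallel teen community.
        return "teen"
    # Adults never mix with the teen community.
    return "adult"
```

A real system would also have to handle the model’s uncertainty – for example, asking borderline users for additional verification rather than gating on a single point estimate.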
Cognisant of the challenges faced by free-for-all sites like Tinder, Yubo is one of a growing number of social networking ventures that are tapping both human and technological tools to clamp down on online imposters.
Its more than 40 million users include 1.1 million in Australia, for whom the company said it provides a “safe social space where young people can hang out” and have “real-time, one-to-few conversations”.
Yubo has implemented “multiple technical and human solutions” to validate age and identity, Antoine-Durand said, and leans heavily on monitoring and interventionist capabilities – such as a feature that detects when a user is about to share private information like a phone number or address, and presents a pop-up warning to remind them to be careful.
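A detection feature of this kind can be approximated with simple pattern matching. The sketch below is a hypothetical illustration – Yubo’s actual system is not described in detail – showing how a message might be scanned for phone-number or street-address patterns before it is sent, triggering a warning if one is found. The patterns and warning text are assumptions for the example.

```python
import re
from typing import Optional

# Illustrative patterns only - a production system would use far more
# robust detection (and likely machine learning, not just regexes).
PII_PATTERNS = {
    "phone number": re.compile(r"\b(?:\+?\d[\s-]?){8,12}\d\b"),
    "street address": re.compile(
        r"\b\d+\s+\w+\s+(?:street|st|road|rd|avenue|ave)\b",
        re.IGNORECASE,
    ),
}

def pii_warning(message: str) -> Optional[str]:
    """Return a pop-up warning if the message appears to contain PII,
    or None if nothing sensitive is detected."""
    for label, pattern in PII_PATTERNS.items():
        if pattern.search(message):
            return f"You're about to share a {label} - be careful who can see it."
    return None
```

The point of intervening at send time, rather than moderating after the fact, is that the warning reaches the user before any private information leaves the app.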
Such automated controls will remain the company’s favoured method for age-gating and ongoing usage.
“We don’t think that people should – systematically – have to upload an ID to use social media,” Antoine-Durand said.
“We understand that people can be reluctant to have to upload their ID to verify their account. We think transparency and education are most important to encourage responsible and safe behaviours online.”
Automating your identity
Government bodies have long wrestled with ways to produce digital identities that are comparable to real-world documents and as easy to use.
As a form of automated gatekeeper that works on readily available biometric data – a photograph – AI-based age verification is far less intrusive than alternative methods that typically mirror 100-point checks by requiring the uploading of driver’s licences, passports, and other digital ID.
Such services are not only burdensome, but also create security risks by concentrating sensitive personally identifiable information (PII) in online repositories – as the discovery of 54,000 NSW driver’s licences online last September made clear.
Late last year, the formal consultation on proposed digital identity legislation surfaced a broad range of issues that were summarised in a Digital Identity Legislation Synthesis Report that will guide further formulation of identity laws throughout 2021.
Broader use of age verification – which was endorsed last year by a parliamentary committee as a way of limiting underage access to online pornography and gambling – caters better to young people, who often don’t have full ID documents.
As automated face scanning takes off, oversight has become more important, and age-checking providers are converging around codes of practice such as the UK-based PAS 1296:2018 guidelines.
Such controls are crucial given the potential role of age verification in providing access to regulated content and services, with the Age Check Certification Scheme (ACCS) imposing technical standards to ensure that age-checking providers maintain controls including an “auditable trust framework” and use of algorithms “that are effective at preventing access to age-restricted content, goods or services”.
Such standards and requirements should, Yoti argued in its submission to the government’s recent consultation on proposed digital identity legislation, be encapsulated within the Trusted Digital Identity Framework (TDIF) so that age-verification safeguards can be readily amended to mirror changing technology and social values.
Yet debate over the proper structure for age and identity verification continues, with security researcher Ben Frengley arguing that the TDIF is insecure and fails to meet its “intended security goals”, which include protecting the documents provided to prove age and identity.