Social media giants have agreed to comply with Australia's ban on users under 16 years old, but experts warn that the law is "problematic" and may not be effective at keeping minors off the platforms.
The new regulations, which come into effect on December 10, require platforms such as Meta, Snap, and TikTok to deactivate and remove more than a million underage accounts. The companies have raised concerns about the accuracy of age-detection methods, noting that face scans can misjudge ages near the cutoff and mistake 16- and 17-year-olds for younger users.
"Age checks won't be perfect," says one expert, highlighting the limitations of current technology in detecting minors online. "There's no single solution that will work for all use cases, and even if we had, it wouldn't guarantee effectiveness across all deployments."
The law aims to reduce social media pressures on children and prevent them from accessing harmful content. However, critics argue that the ban may push kids to darker corners of the internet, where they are more likely to encounter online predators and experience harm.
YouTube spokesperson Rachel Lord has criticized the legislation, calling it "extremely difficult to enforce" and arguing that it fails to deliver on its promise of making kids safer online. The platform has been among the law's loudest critics, warning of unintended consequences for users under 16.
Despite these concerns, social media companies are expected to comply with the regulations, which carry fines of up to $32.5 million for non-compliance. As the law takes effect, platforms will need ways to detect underage users and keep them off their services.
Experts warn that the effectiveness of age checks will be spotty at best. Some companies may use sophisticated methods such as audio analysis or machine-learning-based age estimation, while others may rely on simpler approaches that are more prone to false positives.
Ultimately, the law aims to reduce online harms by keeping harmful content out of children's reach. But critics maintain that the ban is a "simplistic" answer to complex problems, and that its unintended consequences could fall hardest on the very users it is meant to protect.