Australia has expanded its under-16 social media restrictions by explicitly targeting Twitch, the popular live-streaming platform, while exempting Pinterest from the new rules. The eSafety Commissioner determined that Twitch meets the criteria for an age-restricted social media service due to its real-time chat features, community engagement, and creator-follower interactions. In contrast, Pinterest—primarily used for visual discovery and idea curation—does not fall under the regulations.
Why Twitch Is Included, But Pinterest Is Not
The Social Media Minimum Age regime now requires platforms that facilitate social interaction, user-to-user engagement, and content sharing to block access for users under 16. Twitch’s live-streaming model, with its real-time chat, polls, and community features, exposes minors to immediate, unfiltered contact. This creates unique moderation challenges: harmful behaviors like bullying, grooming, or exposure to inappropriate content can occur rapidly, often before moderation systems can intervene. In contrast, Pinterest is used mainly for collecting and organizing images, recipes, and plans, with little emphasis on open, real-time conversation.
Regulators worldwide have signaled that live-streaming platforms should face stricter controls, given the risks posed by unmoderated, instantaneous interactions. Australia’s decision reflects this growing consensus, drawing a clear line between platforms built for public social engagement and those focused on personal discovery or bookmarking.
Who Is Affected by the Ban
The under-16 restrictions now apply to Twitch, Facebook, Instagram, Snapchat, TikTok, X (formerly Twitter), YouTube, Reddit, and the Australian streaming service Kick. Notably, YouTube Kids and Google Classroom are excluded, as they are designed for children and already have dedicated safety protections. Platforms must prevent new sign-ups from users under 16 and restrict access for existing accounts in Australia. The eSafety Commissioner has also released a self-assessment tool to help companies determine whether their features and user interactions meet the age-restricted criteria.
Age Verification and Enforcement Challenges
While the policy is clear, enforcement remains complex. Age verification at scale is a significant challenge, with providers experimenting with document checks, mobile number cross-references, and AI-based age estimation from images. Each method comes with trade-offs in accuracy, privacy, and accessibility. Children’s rights advocates warn that overly strict identity checks could push young users toward less supervised corners of the internet, while privacy groups caution against creating new data vulnerabilities.
Industry groups, including DIGI (which represents Google, Meta, and TikTok), have called for a temporary enforcement pause while the government tests age-verification approaches. They argue that uniform, privacy-preserving standards are essential to avoid a fragmented landscape of services with inconsistent protections.
Impact on Young Users
The inclusion of Twitch is particularly significant for users under 16 who stream or participate in creator communities. It affects both viewers and aspiring creators who use streaming to join esports teams, showcase art, or build an early audience. Under the new rules, under-16s will lose access to subscription and tip-based monetization, which are key revenue streams for many young streamers.
The contrast with Pinterest highlights how regulators are distinguishing between “social” platforms and those focused on idea discovery. Services that prioritize public broadcasting, live audience interaction, and direct messaging are subject to stricter controls, while platforms centered on personal curation face fewer restrictions.
Australia’s Role in Global Youth Online Safety
Australia’s approach is part of a broader international effort to protect children from high-risk online environments. Over two dozen U.S. states have enacted laws requiring age verification or parental consent for minors using certain platforms. The UK’s Online Safety Act mandates robust age checks for adult content and empowers regulators to impose heavy fines for non-compliance.
Despite shared policy goals, implementation varies. The UK focuses on risk-based duties of care, the U.S. operates a patchwork of state-level laws, and Australia has drawn a sharp line at the minimum age for specific social platforms. The latest move explicitly includes live-streaming, signaling a new level of scrutiny for real-time interaction.
What Platforms Must Do Next
To comply, platforms must implement age verification at sign-up and audit existing accounts, with escalation paths for users flagged as under 16. Expect updates to terms of service, stricter default privacy settings, and new customer support protocols for handling lockouts and appeals. Some platforms may introduce regional gating or adjust payout eligibility for creators.
Regulators will monitor measurable outcomes: a reduction in under-16 usage on restricted services, fewer reports of contact-based harms, and responsible use of age-assurance processes that minimize data collection. With Twitch now on the restricted list and Pinterest exempt, Australia has clarified where it draws the line between social interaction and content discovery for young users.