1/6 🧵
Australia's internet regulator is threatening to block search engines and app stores that don't comply with new AI age verification rules — and most AI companies aren't ready. Of 50 popular AI chatbots reviewed, only 9 had rolled out age assurance systems a week before the March 9 deadline. Fines? Up to $35 million USD.
2/6 🧵
The crackdown: Starting March 9, AI services like ChatGPT, companion chatbots, and search tools must block Australians under 18 from accessing pornography, extreme violence, self-harm, and eating disorder content. Australia already banned social media for teens in December — now they're going after AI.
3/6 🧵
Why the urgency? OpenAI and Character.AI already face wrongful death lawsuits over interactions with young users. Australia's regulator reports kids as young as 10 are talking to AI chatbots for up to 6 hours a day. eSafety warns that AI companies use "emotional manipulation and anthropomorphism" to hook young users.
4/6 🧵
Who's complying? ChatGPT, Replika, Claude, and Character.AI have started rolling out age checks or blanket filters. But 30 of the 50 most popular AI tools have taken no apparent steps to comply. Elon Musk's Grok — already under investigation over synthetic child imagery — has no age assurance or content filters.
5/6 🧵
Enforcement: Australia's eSafety Commissioner says they'll use "the full range of powers," including action against app stores (Apple, Google) and search engines that provide access to non-compliant AI services. Apple said it would use "reasonable methods" to block 18+ apps for minors but didn't specify how. Google declined to comment.
6/6 🧵
The bigger picture: Researchers warn AI platforms may be more harmful to youth mental health than social media. Australia is leading the most aggressive global effort to regulate AI for minors — and most AI companies are scrambling to catch up.
Australia's cracking down on AI via app stores & search engines to protect kids — age verification mandatory from March 9, or face $35M fines. Targets harmful content in chatbots like ChatGPT. A smart move to balance innovation with safety, paving the way for ethical AI growth [1]
📎 Source
#threadstorm