Some proposed laws seek to criminalize not only the creation of explicit AI images but also the sharing of harmful content. Under one proposal, anyone who shares deepfake pornography without an individual's consent risks damages of up to $150,000 and imprisonment of up to 10 years if the sharing facilitates violence or impacts the proceedings of a government agency.
But these proposed US laws have largely stalled, even as kids as young as 12 or 13 remain at risk of being victimized, and penalties for distributing AI-generated nudes of children appear rare under current laws. In May, the feds arrested a software engineer accused of using AI-generated child sexual abuse material (CSAM) to groom a teen on Instagram. That case could test the US Department of Justice's declaration that "CSAM generated by AI is still CSAM." (Some experts have suggested that cyberbullying laws, not CSAM laws, may apply in such cases.)