

AI-generated child images on social media attract disturbing attention

21 May 2024 | Hi-network.com

AI-generated images of young girls, some appearing as young as five, are spreading on TikTok and Instagram and drawing inappropriate comments from a troubling audience made up mostly of older men, a Forbes investigation has uncovered. The images depict children in provocative outfits and, while not illegal, are highly sexualised, prompting child safety experts to warn that they could pave the way to more severe exploitation.

It is little wonder the trend is raising alarm, as platforms popular with minors, such as TikTok and Instagram, struggle to address the issue. One popular account, "Woman With Chopsticks," had 80,000 followers, and its viral posts were viewed nearly half a million times across both platforms. A recent Stanford study also revealed that the AI tool Stable Diffusion 1.5 was developed using child sexual abuse material (CSAM) involving real children, collected from various online sources.

Under federal law, tech companies must report suspected CSAM and exploitation to the National Center for Missing and Exploited Children (NCMEC), which then informs law enforcement. However, they are not required to remove the kind of images discussed here. Nonetheless, NCMEC believes that social media companies should take these images down, even though they sit in a legal grey area.

TikTok and Instagram assert that they have strict policies against AI-generated content involving minors in order to protect young people. TikTok bans AI-generated content depicting anyone under 18, while Meta removes material that sexualises or exploits children, whether real or AI-generated. Both platforms removed the accounts and posts identified by Forbes. Yet despite these policies, the ease of creating and sharing AI-generated images is likely to remain a significant challenge for safeguarding children online.

Why does it matter?

The Forbes story reveals that such content, which powerful recommendation algorithms have made increasingly easy to find, worsens online child exploitation by acting as a gateway to the exchange of more severe material and by facilitating networking among offenders. One TikTok slideshow of young girls in pyjamas, posted on 13 January and found by the investigation, showed users moving the conversation to private messages. The Canadian Centre for Child Protection stressed that companies need to look beyond automated moderation to address how these images are shared and followed.

Hot tags: Artificial intelligence | Online child safety | Children's rights
