‘Sexual AI images using kids’ photos proliferate’


Pia Lee-Brago - The Philippine Star

February 6, 2026 | 12:00am


MANILA, Philippines — New evidence revealed a proliferation of sexualized images of youngsters generated by artificial intelligence (AI), the United Nations children’s agency warned.

At least 1.2 million youngsters have disclosed having had their images manipulated into sexually explicit deepfakes in the past year, according to a study across 11 countries conducted by the UN Children’s Fund (UNICEF), international police agency Interpol and the ECPAT global network working to end the sexual exploitation of children worldwide.

The study found that in some countries this represents one in 25 children, the equivalent of one child in a typical classroom, UNICEF said.

“The harm from deepfake abuse is real and urgent,” UNICEF said in a statement. “Children cannot wait for the law to catch up.”

Deepfakes – images, videos or audio generated or manipulated with AI and designed to look real – are increasingly being used to produce sexualized content involving children, including through so-called “nudification,” where AI tools are used to strip or alter clothing in photos to create fabricated nude or sexualized images.

“When a child’s image or identity is used, that child is directly victimized. Even without an identifiable victim, AI-generated child sexual abuse material normalizes the sexual exploitation of children, fuels demand for abusive content and presents significant challenges for law enforcement in identifying and protecting children that need help,” the UN agency said. “Deepfake abuse is abuse and there is nothing fake about the harm it causes.”

The UN agency said it welcomed the efforts of AI developers who are implementing “safety-by-design” approaches and robust guardrails to prevent misuse of their systems.

However, the agency said too many AI models are still being developed without adequate safeguards.

The risks can be compounded when generative AI tools are embedded directly into social media platforms where manipulated images spread rapidly.

UNICEF called for immediate action from governments to expand definitions of child sexual abuse material to include AI-generated content and criminalize its creation, procurement, possession and distribution.

Digital companies were also called on to prevent the circulation of AI-generated child sexual abuse material, not merely remove it, and to strengthen content moderation by investing in detection technologies.
