The United Nations Children’s Fund is warning that deepfake technology — the use of Artificial Intelligence to manipulate images, video, and audio online — is rapidly harming children.
According to a study conducted by UNICEF and other concerned international groups and agencies, at least 1.2 million children in 11 countries disclosed that their images were turned into sexually explicit deepfakes in recent years.
In several countries surveyed, up to two-thirds of young people said they fear AI could be used to create fake sexual images of them.
One of the most common methods is known as “nudification,” where AI tools digitally remove or alter clothing to produce fabricated nude images, UNICEF highlighted.
The countries included in the study span Asia (Armenia, Pakistan), Africa (Morocco, Tunisia), Europe (Montenegro, North Macedonia, Serbia), and the Americas (Dominican Republic, Mexico, Brazil, Colombia).
In one prominent case, the AI chatbot Grok drew scrutiny in several countries after users found they could easily generate sexualized images simply by typing text prompts.
To address the threat, UNICEF is urging governments to update laws to explicitly criminalize the creation and distribution of AI-generated child sexual abuse material.
It is also calling on AI developers to build stronger safeguards into their systems and on digital platforms to prevent deepfake abuse from circulating in the first place.
UNICEF emphasized that while the images may be artificial, the harm is not: “When a child’s image or identity is used, that child is directly victimised. Even without an identifiable victim, AI-generated child sexual abuse material normalises the sexual exploitation of children, fuels demand for abusive content and presents significant challenges for law enforcement in identifying and protecting children that need help.”
It can be recalled that Australia has already banned social media for teenagers under 16, while Spain will be imposing the same rule next week.