Deepfakes are images, videos, or audio files created with artificial intelligence (AI) technologies that look or sound real, and they are increasingly used to create sexualized content involving minors. UNICEF notes that the issue is so serious that there may be at least one child in every classroom who has fallen victim to such crimes.
“Children understand the risks associated with the use of AI. In some of the countries surveyed, about 66% of children expressed concern that fake sexual images or videos could be created featuring them,” the researchers reported.
The creation of deepfakes with sexual content is considered a form of violence against children, UNICEF specialists emphasized.
“The Fund supports initiatives by AI developers that implement safety measures by default and reliable protection mechanisms to prevent the misuse of their technologies,” the organization noted.
However, many AI models are developed without adequate safety measures. The problem is exacerbated when generative AI tools are integrated into social networks, where AI-generated images can spread rapidly.
“AI developers must ensure safety at the design stage. Digital platforms must prevent the dissemination of AI-generated material depicting sexual violence against children, rather than simply removing it after the abuse has already occurred,” UNICEF emphasizes.