YouTube is taking new steps to protect artists and YouTubers from audio and video deepfakes created with artificial intelligence. Google’s video platform plans to launch two tools to help detect fake content that mimics the voice and physical appearance of content creators and personalities.
According to YouTube, one of these tools will be dedicated exclusively to detecting synthetic imitations of singers' voices. It has already been developed and will enter testing at the beginning of next year. The company has not explained how it works, but it has said that it is integrated into Content ID, its copyright management system.
Once activated, the tool will automatically detect content on YouTube that mimics the singing of partners who publish their music on the site. If the material was created without authorization or for defamatory purposes, artists can request its removal.
The other feature YouTube is working on, also connected to Content ID, aims to stop deepfakes that use the faces of actors, YouTubers, athletes, musicians, and other personalities. The tool will automatically detect such deepfakes so that appropriate action can be taken. It is still in development, and while YouTube has not said when it will launch, it is not expected to arrive before 2025.
YouTube Expands Its Strategy Against Deepfakes and AI Simulations
This is not the first time YouTube has announced changes or new tools to combat deepfakes and AI-created simulations. In November 2023, the company began accepting requests to remove AI-generated content that “simulates identifiable individuals.” It also introduced a tool for record labels and artist management agencies to request the removal of AI-created songs.
In July, YouTube updated its privacy guidelines to allow users to report realistic AI-generated videos that include deepfakes. These measures are part of an expanding strategy to tackle the rapid evolution of generative AI: fake content has surged in recent years as it has become easier to produce.
In addition to providing greater protection against deepfakes, YouTube has reaffirmed its commitment to preventing AI companies from training their models on videos uploaded by content creators. This issue has been controversial for some time: OpenAI, for example, faced criticism for allegedly using YouTube videos to train its video-generation model, Sora. Similar controversies have recently involved NVIDIA, Anthropic, and even Apple.
YouTube has already stated that scraping videos to train AI models violates its terms of service. The company now promises to enhance its systems to better detect these activities and to block those who engage in them.