YouTube has decided to strengthen its policies against disinformation and misleading content produced with artificial intelligence. The platform will require creators to warn viewers when they upload videos that include "synthetic" content; that is, material created or manipulated with tools such as generative AI. The requirement will apply both to conventional long-form videos and to Shorts.
The measure does not yet have a specific implementation date. On the official YouTube blog, the company explained that the new altered-content labels will begin rolling out in the coming months and will become more prominent throughout 2024. The idea is to work with creators so they understand the new requirements and guidelines.
Once the feature becomes available, creators will have to indicate during the upload process whether a video has been manipulated with artificial intelligence. YouTube will then show two types of warning on the post: one in the description box, and another directly above the player. In both cases, the label will read: "Altered or synthetic content. The sound or images have been altered or digitally generated."
It is worth clarifying, however, that the label above the player will be reserved for synthetic content touching on sensitive topics prone to misinformation: public health crises, ongoing armed conflicts, events involving public officials, or electoral processes, among others.
"We will require creators to disclose when they have created altered or synthetic content that is realistic, including using artificial intelligence tools. When creators upload content, we will have new options for them to select to indicate that it contains realistic, but altered or synthetic material. For example, it could be an AI-generated video that realistically depicts an event that never happened, or content that shows someone saying or doing something they didn't actually do," the Google-owned platform stated.
Punishments for not following YouTube’s rules on AI-altered content
YouTube's initiative is a welcome one, adding a new layer of control over the spread of misleading or outright false content. Still, creators whose main goal is to misinform are unlikely to voluntarily label their videos as altered. That is why Google needs an effective system to enforce the new guidelines.
YouTube indicates that creators who choose not to flag their manipulated videos as such will face various penalties, ranging from removal of the content in question to suspension from the platform's Partner Program, among others. It does not specify whether sanctions kick in after a certain number of violations; it only says they will apply when the behavior occurs "consistently."
It is also important to note that the new warnings for altered or synthetic content do not replace the Community Guidelines. This means YouTube will still be able to remove inappropriate AI-manipulated content, even if it carries the warning in question.
Finally, YouTube will not limit itself to flagging videos manipulated with third-party software. Google is currently one of the leading companies in artificial intelligence and is gradually integrating it into its products. Warnings will therefore also appear when videos incorporate elements created with Google's own tools, such as Dream Screen, a utility for generating AI images and videos to use as backgrounds in Shorts.