In a bid to enhance transparency and combat misinformation, TikTok announced its adoption of “Content Credentials,” a technology designed to label AI-generated images and videos shared on its platform. Developed by Adobe and endorsed by various companies including OpenAI, the watermark records how a piece of content was created and edited.

The move comes amid growing concerns from researchers about the potential misuse of AI-generated content, particularly in influencing events such as the upcoming US elections. TikTok, alongside 19 other tech firms, had previously committed to combating such threats in an accord signed earlier this year.

For the system to function effectively, both the tools that generate AI content and the platforms hosting it must adopt the industry-standard labelling. Under this initiative, an image created with a tool such as OpenAI’s DALL-E carries Content Credentials metadata, and when it is uploaded to TikTok the platform reads that metadata and automatically flags the post as AI-generated.
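In practice, the label is driven by provenance metadata rather than a visible stamp: the generating tool embeds a signed Content Credentials (C2PA) manifest in the file, and the receiving platform inspects that manifest on upload. The Python sketch below illustrates the idea only; the dictionary layout is a simplified, hypothetical stand-in for what a real C2PA reader (such as the open-source c2patool) would extract, and should_label_as_ai is an illustrative helper, not TikTok’s actual pipeline.

```python
# Sketch: deciding whether to apply an "AI-generated" label from a
# Content Credentials (C2PA) manifest. The manifest dict below is a
# simplified, hypothetical representation of the metadata a C2PA reader
# would pull from an uploaded file.

# IPTC digital source type commonly used in C2PA manifests to mark
# media produced by a generative model.
TRAINED_ALGORITHMIC_MEDIA = (
    "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"
)


def should_label_as_ai(manifest: dict) -> bool:
    """Return True if the manifest declares the asset was AI-generated."""
    for assertion in manifest.get("assertions", []):
        if assertion.get("label") != "c2pa.actions":
            continue
        for action in assertion.get("data", {}).get("actions", []):
            if action.get("digitalSourceType") == TRAINED_ALGORITHMIC_MEDIA:
                return True
    return False


# Example: a (simplified) manifest as an image generator might embed it.
example_manifest = {
    "claim_generator": "Example AI Image Generator",
    "assertions": [
        {
            "label": "c2pa.actions",
            "data": {
                "actions": [
                    {
                        "action": "c2pa.created",
                        "digitalSourceType": TRAINED_ALGORITHMIC_MEDIA,
                    }
                ]
            },
        }
    ],
}

print(should_label_as_ai(example_manifest))  # True -> apply the AI-generated label
```

A real integration would also verify the manifest’s cryptographic signature before trusting it; if the metadata is missing or stripped, the platform falls back on its other detection and disclosure policies.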

With 170 million users in the US alone, TikTok’s decision to implement Content Credentials underscores its commitment to transparency and responsible content moderation. Notably, TikTok has faced regulatory challenges, including recent legal disputes over ownership and free speech concerns.

Adam Presser, TikTok’s head of operations and trust and safety, emphasised the platform’s existing policies against unlabeled realistic AI content, affirming that such content will be promptly removed in line with community guidelines.

By taking proactive measures to label and regulate AI-generated content, TikTok aims to foster a safer and more transparent online environment for its vast user base, while also addressing broader societal concerns regarding the responsible use of AI technology.