Meta, the parent company of Facebook and Instagram, will require political advertisers to disclose the use of AI or digital manipulation in their ads. Starting in January, advertisements related to politics, elections, or social issues must declare any digitally altered image or video.

The policy covers alterations such as changing spoken words in a video, manipulating images or footage of real events, and creating realistic depictions of non-existent people. Users will be notified when ads are marked as digitally altered.

Meta’s worldwide policy will be enforced by a combination of human and AI fact-checkers, and advertisers who fail to disclose alterations may face penalties. The move extends beyond Meta’s existing policies on deepfakes and aims to address broader concerns about digitally manipulated content in political advertising. The new rules underscore the company’s commitment to transparency at a time when deepfakes and manipulated media have been sources of misinformation and potential influence on public opinion.

Other tech giants, such as Google, have implemented similar policies, reflecting industry-wide recognition of the need to combat misleading information in political discourse and elections. With general elections scheduled in major democracies in the coming years, the impact of manipulated content on social media platforms remains a critical issue.