Meta updates policies on political ads; advertisers to disclose use of AI in images and videos


By | November 29, 2023, 4:25 pm
Meta also rolled out Reels on Facebook in 2022, allowing users to create and watch video clips from creators. Users could also view public Reels from Instagram if the creator chose to recommend them on Facebook. (Image source: Unsplash)

Meta shared more details about its policies on political ads, including a mandate that advertisers disclose when they use artificial intelligence to alter images and videos in certain political ads, as per CNBC.

Nick Clegg, Meta’s president of global affairs, discussed the new ad policies in a blog post, labelling them as “broadly consistent” with how the social networking giant has typically handled advertising rules during previous election cycles.

What’s different for the upcoming election season, however, is the increasing use of AI technologies by advertisers to create computer-generated visuals and text. Expanding on a previous announcement by Meta in early November, Clegg said that starting next year, Meta will require advertisers to disclose whether they have used AI or related digital editing techniques “to create or alter a political or social issue ad in certain cases.”

“This applies if the ad contains a photorealistic image or video, or realistic sounding audio, that was digitally created or altered to depict a real person as saying or doing something they did not say or do,” Clegg wrote. “It also applies if an ad depicts a realistic-looking person that does not exist or a realistic-looking event that did not happen, alters footage of a real event, or depicts a realistic event that allegedly occurred, but that is not a true image, video, or audio recording of the event.”

Meta was criticized during the 2016 U.S. presidential election for failing to account for and reduce the spread of misinformation on its family of apps, including Facebook and Instagram.

The rise of AI as a way to supercharge the creation of misleading ads presents a new issue for the social networking giant, which laid off large swaths of its trust-and-safety team as part of its cost-cutting efforts this year.
