Meta to start labelling AI-generated content across all its platforms

This step will help curb the potential malicious use of AI and protect users from misinformation and disinformation on social media.

February 9, 2024, 7:09 pm
Over the last eight years, Meta says it has rolled out industry-leading transparency tools for ads about elections or politics, developed comprehensive policies to prevent election interference and voter fraud, and built the largest third-party fact-checking programme of any social media platform to help combat the spread of misinformation. (Representative Image: Dima Solomin via Unsplash)

Meta has implemented measures to detect and label content generated by other companies’ artificial intelligence services. Images, audio and video carrying industry-standard AI markers will be detected and labelled as ‘Imagined by AI’ across all Meta platforms, including Facebook, Instagram and Threads.

Meta will apply the labels to any content carrying the markers that is posted to Facebook, Instagram and Threads, in an effort to signal to users that the images, which in many cases resemble real photos, are actually digital creations, the company’s president of global affairs, Nick Clegg, wrote in a blog post.

Further, the company is working with the Partnership on AI (PAI), a nonprofit organisation committed to the ethical use of AI, to determine common criteria for identifying AI-generated content. AI-generated content carries visible markers and invisible watermarks that differentiate it from non-synthetic content. PAI has approved the invisible markers Meta uses.
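In principle, marker-based detection of this kind amounts to looking for provenance metadata embedded in a file. The sketch below illustrates the idea in Python; the tag strings are placeholders loosely modelled on public provenance standards (IPTC's `DigitalSourceType` vocabulary and C2PA manifests), not Meta's or PAI's actual markers, and a real detector would parse the metadata structures rather than scan raw bytes.

```python
# Illustrative sketch only: scan a file's raw bytes for provenance tags of the
# kind industry watermarking schemes embed. Tag names are placeholders loosely
# based on IPTC/C2PA vocabulary, not Meta's or PAI's actual markers.
AI_MARKER_TAGS = [
    b"DigitalSourceType=trainedAlgorithmicMedia",  # IPTC-style provenance hint
    b"c2pa",                                       # C2PA manifest signature
]

def looks_ai_generated(data: bytes) -> bool:
    """Return True if any known provenance marker appears in the raw bytes."""
    return any(tag in data for tag in AI_MARKER_TAGS)

# Example: a fake image payload carrying an IPTC-style provenance tag
sample = b"\x89PNG...DigitalSourceType=trainedAlgorithmicMedia..."
print(looks_ai_generated(sample))  # True
```

A platform-scale system would apply a check like this at upload time and attach the user-facing label whenever a marker is found, rather than relying on users to disclose synthetic content themselves.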

This step is crucial for Meta users in India, where the risk of AI misuse is high since Indian law does not recognise AI as a legal entity. Recent incidents, such as the circulation of deepfake videos of Bollywood celebrities and sports personalities including Nora Fatehi, Rashmika Mandanna and Alia Bhatt, have highlighted the malicious use of AI on social media.

Following this, the Indian IT Ministry sent legal notices to social media firms stating that online impersonation is illegal under the Information Technology Act, 2000. The IT Rules, 2021 also prohibit impersonating another individual, and social media platforms are obliged to take down synthetic images when alerted.

This step by Meta is pivotal in regulating the potential misuse of AI and helping users discern authentic from synthetic content, preventing the spread of misinformation and disinformation.

