As India gears up for the General Elections, Meta is ramping up efforts to ensure transparency and accountability across its platforms, with a special focus on countering risks from the misuse of GenAI.
In a blog post titled ‘How Meta Is Preparing For Indian General Elections 2024’, Meta said, “As for all major elections, we’ll activate an India-specific Elections Operations Center, bringing together experts from across the company from our intelligence, data science, engineering, research, operations, content policy and legal teams to identify potential threats and put specific mitigations in place across our apps and technologies in real time.”
The internet giant will remove misinformation from Facebook, Instagram and Threads, such as content that could suppress voting or contribute to imminent violence or physical harm.
“During the Indian elections, based on guidance from local partners, this will include false claims about someone from one religion physically harming or harassing another person or group from a different religion. For content that doesn’t violate these particular policies, we work with independent fact-checking organizations. We are continuing to expand our network of independent fact-checkers in the country,” said Meta.
Meta’s fact-checking partners are also being onboarded to the company’s new research tool, Meta Content Library, which has a search capability to support them in their work. Indian fact-checking partners are the first in Meta’s global network of fact-checkers to have access to the Meta Content Library.
On AI-generated content, Meta said its Community Standards and Community Guidelines govern the types of content and behaviour acceptable on Facebook and Instagram, and that these rules apply to all content on its platforms, including content generated by AI.
“When we find content that violates our Community Standards or Community Guidelines, we remove it whether it was created by AI or a person,” the company said.
Fact-checking experts reviewing content, whether generated by AI or otherwise, can rate a piece of content as ‘Altered’, a label that covers ‘faked, manipulated or transformed audio, video, or photos’.
“Once a piece of content is rated as ‘altered’, or we detect it as near identical, it appears lower in Feed on Facebook. We also dramatically reduce the content’s distribution. On Instagram, altered content gets filtered out of Explore and is featured less prominently in feed and stories. This significantly reduces the number of people who see it,” Meta said in the blog.
Meta is also building tools to label AI-generated images from Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock that users post to Facebook, Instagram and Threads.
Starting this year, the internet giant also requires advertisers globally to disclose when they use AI or digital methods to create or alter a political or social issue ad in certain cases.
Consumer education initiatives to combat the spread of misinformation
Among other initiatives, Meta will launch the ‘Celebrate Each Vote’ campaign, partnering with national and regional creators to encourage voter awareness and tackle voter apathy in local languages across the country. Beginning in late March, the campaign will target all voters, especially those voting for the first time, and will also debunk election-related misinformation.