Meta to stop payments to news organisations that fact-check misinformation on WhatsApp

Most of the world is heading into election season this year, including India and the US. According to reports, the payment cuts could also affect news organisations’ ability to verify non-election content.

By Storyboard18 | February 16, 2024, 10:36 am
Over the last eight years, Meta says it has rolled out industry-leading transparency tools for ads about elections or politics, developed comprehensive policies to prevent election interference and voter fraud, and built the largest third-party fact-checking programme of any social media platform to help combat the spread of misinformation. (Representative Image: Dima Solomin via Unsplash)

As per reports, Meta is cutting funding to news organisations that fact-check potential misinformation on WhatsApp. This tightening of budgets by the tech giant means that fewer fact-checkers will be monitoring political discussions ahead of the elections.

Meta has also recently been rolling out new features that allow larger groups of people to chat with one another on the messaging platform, which could increase the spread of fake news.

Separately, Meta has implemented measures to detect and label content generated by other companies’ artificial intelligence services. Images, along with audio and video content, will be identified through detectors that adhere to industry-standard markers and labelled as ‘Imagined with AI’ across all Meta platforms, including Facebook, Instagram and Threads.

Meta will apply the labels to any content carrying the markers that is posted to its Facebook, Instagram and Threads services, in an effort to signal to users that the images, which in many cases resemble real photos, are actually digital creations, the company’s president of global affairs, Nick Clegg, wrote in a blog post.

Further, the company is working with the Partnership on AI (PAI), a nonprofit committed to the ethical use of AI, to establish common criteria for identifying AI-generated content. Such content carries visible markers and invisible watermarks that differentiate it from non-synthetic content, and the invisible markers Meta uses are in line with PAI’s best practices.

This step is crucial for Meta’s users in India, where the risk of AI misuse is high since Indian law does not recognise AI as a legal entity. Recent incidents, such as the circulation of deepfake videos of Bollywood celebrities and sports personalities including Nora Fatehi, Rashmika Mandanna and Alia Bhatt, have highlighted the malicious use of AI on social media.
