Facebook processed 16.2 million content pieces in India in November: Meta


Social media giant Meta said that over 16.2 million pieces of content were “actioned” on Facebook across 13 violation categories in India during the month of November. Its photo-sharing platform, Instagram, took action against more than 3.2 million pieces across 12 categories during the same period, according to data shared in its compliance report.

Under IT rules that came into force earlier this year, large digital platforms (those with over 5 million users) are required to publish monthly compliance reports detailing the complaints received and the action taken on them.

The report also includes details of content removed or disabled through proactive monitoring using automated tools. Facebook “actioned” more than 18.8 million pieces of content across 13 categories in October, while Instagram took action against more than 3 million pieces across 12 categories during the same period.

In its latest report, Meta said that between November 1 and November 30, Facebook received 519 user reports through its Indian grievance mechanism.

“Out of these incoming reports, we provided users with tools to resolve their issues in 461 cases,” the report said.

These include pre-established channels to report content for specific violations, self-remediation flows where users can download their data, avenues to address account-hacked issues, etc. Between November 1 and November 30, Instagram received 424 reports through the Indian grievance mechanism.

Facebook’s parent company recently changed its name to Meta. Apps under Meta include Facebook, WhatsApp, Instagram, Messenger and Oculus.

According to the latest report, the more than 16.2 million pieces of content actioned by Facebook during November included spam (11 million), violent and graphic content (2 million), content related to adult nudity and sexual activity (1.5 million), and hate speech (100,100).

Other categories under which content was actioned include bullying and harassment (102,700), suicide and self-injury (370,500), dangerous organizations and individuals: terrorist propaganda (71,700), and dangerous organizations and individuals: organized hate (12,400).

Another 163,200 pieces of content were actioned in the child endangerment – nudity and physical abuse category, 700,300 pieces in the child endangerment – sexual exploitation category, and 190,500 pieces in the violence and incitement category. “Actioned” refers to the number of pieces of content (such as posts, photos, videos or comments) against which action was taken for violating the standards.

Taking action may include removing a piece of content from Facebook or Instagram or covering up photos or videos that may disturb some viewers, with a warning.

The proactive rate, which indicates the percentage of all actioned content or accounts that Facebook found and flagged using technology before users reported them, ranged between 60.5 and 99.9 percent in most of these cases.

The proactive rate for removal of content related to bullying and harassment was 40.7 percent, because this content is contextual and highly personal in nature. In many cases, people need to report this behavior to Facebook before it can identify or remove such content.

For Instagram, over 3.2 million pieces of content were actioned across 12 categories during November 2021. These included content related to suicide and self-injury (815,800), violent and graphic content (333,400), adult nudity and sexual activity (466,200), and bullying and harassment (285,900).

Other categories under which content was actioned include hate speech (24,900), dangerous organizations and individuals: terrorist propaganda (8,400), dangerous organizations and individuals: organized hate (1,400), child endangerment – nudity and physical abuse (41,100), and violence and incitement (27,500).

In November, 1.2 million pieces of content were proactively actioned in the child endangerment – sexual exploitation category.

