On August 27, Sabir Malik, a migrant worker in the Indian state of Haryana, was lured from his home and beaten to death by a mob of at least 10 Hindu men. They suspected that Malik, a Muslim, had eaten beef. Lab tests run by local police would later find that he hadn’t. But it didn’t matter: The attack was led by “cow vigilantes,” the name for Hindu nationalist militias and mobs that take it upon themselves to violently enforce Hindu supremacy on India’s minority communities, particularly Muslims.
A new report from the Center for the Study of Organized Hate (CSOH) shared exclusively with WIRED found that Instagram, which is owned by Meta, is becoming a key avenue for cow vigilantes to share their violent exploits with a wider audience, and even raise money.
“It's clear that Meta is complicit in the proliferation or the flourishing of cow vigilantism in India,” says Raqib Hameed Naik, founder and executive director of CSOH. These practices, Naik says, are likely in violation of Meta’s own policies around hateful and violent content.
Between February and August 2024, CSOH identified and analyzed 1,023 Instagram accounts run by users involved in cow vigilantism. Researchers found that 30 percent of the accounts shared content showing physical violence against Muslims involved in the cattle business. Some videos flagged by CSOH show high-speed car chases down India’s highways, where cow vigilantes tail trucks carrying cows and try to pull them over. Others are more graphic, showing vigilantes beating men who they claim are engaged in cow slaughter or the cattle trade. One video, which garnered 5,200 likes, shows three frightened Muslim men in the trunk of a car. Another shows a cow vigilante beating an older Muslim man with a wooden bat. That video received more than 1,200 likes.
The 121 Instagram Reels analyzed by CSOH that show physical violence against people transporting cattle garnered over 8.3 million views, and most were not labeled with the Meta filter that warns users of graphic content. CSOH also found that 53 accounts that had posted violent content were eligible for Instagram’s “Send Gift” function, which allows approved creators to earn money directly from their followers’ donations. Other accounts would post bank details in their Reels or comments sections. “That means anyone on Instagram who likes their work can send them money to continue doing that violent extremist activity,” says Naik.
To test Meta’s systems, CSOH used Instagram’s on-platform reporting tools to flag 167 posts that depicted violence. None of the posts had been removed as of October.
Under Meta’s policies, the company does not allow “content that glorifies, supports, or represents events that Meta designates as violating violent events,” including “hate events” and “hate crimes.” Meta spokesperson Erin Logan told WIRED that Meta has “strict policies against violent or graphic content on our platforms, and we enforce these rules impartially. We will review this report once we are provided it and will remove any violating content and disable accounts of repeated offenders.” Logan declined to answer questions about whether Meta considers cow vigilantes to be part of “violent or hateful groups.” Last year, the company removed profiles associated with Monu Manesar, a cow vigilante who was arrested and accused of instigating violence in Haryana.
Cow protection is not new in India, where Hinduism holds cows sacred. But the country is also home to substantial minority communities, including Christians, Muslims, Buddhists, Sikhs, and Adivasis, or indigenous people, who have no religious prohibition against eating beef. Dalits, the group at the bottom of the Hindu caste system, also sometimes consume beef. Due to their marginalized status, Muslims and Dalits in particular have long relied economically on the cattle industry.
Since Indian Prime Minister Narendra Modi and his Hindu-nationalist Bharatiya Janata Party swept into power in 2014, several states have passed stricter cow-protection laws. A Congressional Research Service report released last week noted that cow vigilantism was one of several types of “religiously motivated repression and violence” used by Hindus and supported by the country’s Hindu nationalist government against minority communities. According to an April report from Armed Conflict Location and Event Data, cow vigilantism was the motivator for 22 percent of all communal violence by Hindus targeting Muslims between 2019 and 2024.
“Vigilantes organize their targeting to disburse punishment to minorities through extrajudicial means,” says Angana Chatterji, chair of the Political Conflict, Gender and People’s Rights Initiative at UC Berkeley. “Hindu nationalist leaders in government have aligned with these militias, and their speeches often function as dog whistles to rally people, reportedly stirring them to commit these extrajudicial acts that have included home invasion, theft, and lynching.”
Chatterji says that making the violence public on a place like Instagram allows cow vigilantes to recruit new members and rally other Hindu nationalists in different parts of the country. “For Muslims and minorities and their allies, Instagram messaging is calculated to spread terror with impunity,” she says. “To indicate, ‘Stop protesting. We are going to come for you and there will be nothing to stop us,’ especially as law enforcement is often either absent or in collusion.”
Naik worries that the problem runs much deeper than the accounts he and his team were able to identify. Earlier this year, Meta shuttered CrowdTangle, the tool that allowed researchers to track content across its platforms. “I would say it's the tip of the iceberg,” says Naik, because journalists and civil society organizations no longer have public access to Meta’s data.
India is an important market for Meta—the country accounts for more than 362 million users on Instagram alone—and in the past, the company has been hesitant to take action on content that could put it in the crosshairs of the Indian government. In 2022, The Washington Post reported that Facebook allowed hate speech and propaganda to stay on the platform under pressure from India’s government. (Meta’s shareholders later voted against an inquiry into the issue.) In 2020, The Wall Street Journal reported that employees in India worried that Meta’s then-head of public policy for India was unevenly applying the company’s hate speech policies, allowing violent rhetoric from Bharatiya Janata Party politicians to stay up on the platform.
“It is interesting to note what is stopped by social media platforms—because some messaging is stopped immediately—and what is allowed to grow,” says Chatterji. “Just the extent of violence in the images requires that they should be taken down.”