
In an X post at 6:18 IST on October 1, 2025, Asaduddin Owaisi warned about AI misuse in India: instead of serving progress, AI tools are being used to sexualize Muslim women, spread hate, and fuel conspiracy theories. His post linked to the CSOHate report, ‘AI-Generated Imagery and the New Frontier of Islamophobia in India’, published on September 29 in the wake of AI-powered hate campaigns. The findings are disturbing: hypersexualized images, glamorized violence, and anti-Muslim rhetoric are trending. Owaisi framed it as a test of India’s values: whether AI becomes a tool of progress or of prejudice.
AI Hate Campaigns Documented
CSOHate’s report analyzed 1,326 AI-generated posts targeting Muslims between May 2023 and May 2025, originating from 297 accounts on X, Instagram, and Facebook. Together, they generated 27.3 million engagements. Instagram led in interactions with 462 posts drawing 1.8 million, followed by X with 509 posts and 772,400, and Facebook with 355 posts and 143,200.
The report identifies four themes: sexualization of Muslim women, exclusionary language, conspiracy theories, and glamorized violence. The most viral category, sexualized imagery of Muslim women, drew 6.7 million engagements. Conspiracy narratives such as Love Jihad and Population Jihad also proliferated. Even appealing aesthetics, like Studio Ghibli-style animation, were used to dress up violence, rendering the propaganda palatable to young audiences.
The CSOHate report links this spike to the spread of tools like Midjourney, Stable Diffusion, and DALL·E, which lowered barriers and enabled even unskilled users to produce sophisticated fakes. The result: AI-driven disinformation is faster, cheaper, and more believable than ever before.
Amplifiers, Failures, and Risks
The CSOHate report doesn’t just map content; it names amplifiers. Hindu nationalist outlets such as OpIndia, Sudarshan News, and Panchjanya embedded this AI imagery in mainstream coverage. Of the 1,326 flagged posts, 187 were reported for community guideline violations, yet none were removed. This gap reflects the platforms’ failure to enforce their own policies.
The dataset spans 146 X accounts, 92 Instagram accounts, and 59 Facebook accounts. Strikingly, 86 of them were verified: these are not fringe actors but influential accounts steering the narrative. The CSOHate report stresses that this targeted activity by a relatively small number of accounts has been disproportionately effective.
The hazards go far beyond online toxicity: psychological damage, real-world threats to life, and the erosion of social trust. For India’s 200 million Muslims, these campaigns deepen insecurity, and the report warns of lasting damage to constitutional secularism and democratic institutions. Similar trends have been documented abroad: CBC News and GNET have reported on AI-based Islamophobia in France and the UK, showing this is not a one-off.
The CSOHate report calls for dedicated legislation, transparency from AI creators, and cross-platform early-warning systems. Without these, AI-generated hate may become an inescapable hallmark of India’s digital ecosystem.
Conclusion
Owaisi’s post lays the problem bare: AI in India is being hijacked for hate. CSOHate’s findings demonstrate how quickly sexualized imagery, conspiracy narratives, and glamorized violence spread. Platforms aren’t stopping it. Political actors are amplifying it. And vulnerable groups, especially Muslim women, are paying the price. Responses to his post call for prosecutions, stricter regulation, and better digital education. But the clock is ticking: as AI tools grow cheaper and more powerful, the risks will only grow. Owaisi’s warning is both a rebuke and a call to arms: use AI for empowerment, not exclusion.