
Meta is shifting up to 90% of its internal risk reviews from human evaluators to AI. Previously, dedicated teams assessed new features for potential harms, such as privacy violations or content abuse. Now, updates to Facebook, Instagram, and WhatsApp will mainly pass through automated systems. Internally, the change is framed as a win for developers because it enables faster launches, but employees warn it raises the risk of serious problems going unnoticed. Meta claims only low-risk decisions will be automated and that complex issues will still involve human input; internal documents, however, show AI may also assess youth safety and misinformation concerns.
Employee Concerns and Oversight Challenges
Current and former Meta employees are alarmed by the shift, saying most engineers lack privacy expertise. Risk reviews were once mandatory before product releases; now, teams complete AI-generated questionnaires that return instant decisions. Teams can still request a deeper review if they believe one is needed, but it is no longer required by default. Critics argue this removes vital checks. Former Meta executive Zvika Krieger warns that automating too much risks reducing the process to a box-ticking exercise that misses major red flags. Meta insists that audits are in place and that automation applies mostly to routine updates.
Still, internal slides show that reviews for sensitive areas such as youth risks and AI safety could also be automated. EU users may see fewer changes because of stricter rules under the Digital Services Act. While Meta says automation helps it move faster, some employees see it as a shortcut that compromises safety and accountability. Meta claims human review remains for novel or complex cases, but critics say placing that judgment in the hands of developers could lead to more product-related harm. Many fear the new system will prioritize speed and scale over ethical safeguards and real-world consequences.
Broader Policy Shifts Reflect Less Oversight
Meta’s automation push comes amid a broader rollback of internal safeguards. The company recently ended its U.S. fact-checking program and relaxed its speech rules, drawing concern from insiders. These shifts coincide with CEO Mark Zuckerberg’s efforts to rebuild ties with conservative politicians, notably former President Donald Trump. Staff view the automation as another way to move faster and reduce internal pushback. Documents show the AI system can approve updates without human input, provided engineers attest that requirements are met. Employees say this removes the friction that was once necessary to challenge risky product launches. Critics argue the changes reflect a deeper cultural shift: prioritizing growth over caution.
Some believe this direction is self-defeating, since Meta frequently faces backlash over issues uncovered only after launch, the very issues that more robust reviews might have caught. While Meta says it is using AI to improve accuracy and reduce manual workload, former employees fear that without strong human oversight, mistakes will multiply. Meta’s own privacy officer confirmed the company is “empowering product teams” to drive change. That empowerment now places significant trust in AI systems and engineers to assess risk, a move some call irresponsible given the scale of Meta’s impact and the sensitivity of its platforms.
Meta’s Justification and Future Risks
Meta says its goal is to simplify decisions while maintaining user safety. It claims AI now matches or exceeds human reviewers in some areas, freeing people for tougher cases. However, many argue that AI lacks ethical reasoning and that engineers are not incentivized to prioritize risk. Some insiders say the new approach ignores lessons from past missteps. Meta insists its system includes audits and optional human reviews, but critics say removing mandatory oversight is dangerous. In the EU, stricter laws may limit these changes. Still, the shift signals Meta’s intent: faster features and fewer roadblocks, even if public trust and safety are at stake.