
Intertek has launched AI², the world’s first independent, end-to-end AI assurance program, built to help companies deploy AI safely and responsibly. The platform addresses growing concerns around ethical risks, compliance gaps, and opaque model behavior. As AI adoption accelerates across sectors, Intertek aims to fill the trust gap with services spanning governance, transparency, security, and safety. Backed by Intertek’s global network of labs and 130+ years of quality expertise, AI² offers tools for organizations to align with regulations like the EU AI Act while building smarter, safer AI systems for customers and stakeholders.
A Four-Pillar Framework for Responsible AI
Intertek AI² is structured around four pillars, each addressing a distinct category of AI risk. Governed AI covers compliance roadmaps, risk management procedures, and oversight, helping companies conform to evolving standards such as ISO 42001 and the EU AI Act. Transparent AI provides explainability tooling, documentation support, and stakeholder-facing transparency, ensuring that AI decisions, whether in finance, healthcare, or logistics, can be audited and understood. Secure AI focuses on red teaming, threat modeling, incident planning, and system hardening, tailoring cybersecurity efforts to AI-specific vulnerabilities.
Safe AI centers on model testing, bias detection, validation, and integrity, helping ensure reliable performance across diverse populations and operating environments. Taken together, these pillars enable businesses to scale AI adoption without sacrificing user trust, operational integrity, or legal compliance. The program is vendor-neutral and spans the entire AI lifecycle, from pre-deployment assessment to post-launch monitoring. Intertek is promoting it as critical infrastructure for organizations facing heightened AI oversight, customer scrutiny, and complex governance demands.
Global Momentum for AI Governance and Risk Mitigation
With AI now embedded in critical systems across finance, healthcare, automotive, and government, calls for regulation and accountability have intensified. Intertek’s AI² program enters this landscape as enterprises grapple with fragmented rules, public distrust, and uncertain liability in AI failure cases. CEO André Lacroix emphasized that AI is reshaping industries “at an unprecedented pace” and that companies must “power ahead with smarter, safer and trusted AI solutions.” As countries implement AI-specific laws and auditing requirements, Intertek sees an opening to become the go-to third party for AI trust certification, much as it led past safety and quality standards.
The platform’s integration with Intertek’s Assurance, Testing, Inspection, and Certification (ATIC) infrastructure means customers can embed AI² directly into global operations. The company also underscores its neutrality, positioning itself as a reliable, standards-based counterbalance to proprietary vendor solutions. By offering tools that help companies identify bias, audit explainability, and preempt regulatory violations, AI² aims to standardize what “good AI” looks like across geographies and industries. Its launch reflects rising demand from insurers, regulators, and boards of directors for verifiable AI accountability, something beyond internal risk assessments or PR commitments to “ethical AI.” Intertek is betting that AI² becomes a new industry baseline.
AI² as a Trust Layer for the AI Economy
Intertek AI² isn’t just about compliance; it’s a commercial enabler. With AI systems under fire for bias, black-box behavior, and security flaws, businesses risk reputational damage without verifiable oversight. AI² offers that external trust layer. It aligns with major regulations, integrates into complex value chains, and helps clients scale AI responsibly. In a world where regulators are catching up fast and customers demand transparency, Intertek is positioning AI² as essential infrastructure for the next phase of the AI economy, one where accountability is not optional and “responsible AI” isn’t just branding.