
World, the identity project formerly known as Worldcoin and co-founded by OpenAI CEO Sam Altman, is preparing for a United States launch despite mounting privacy and regulatory concerns. Using iris-scanning technology, World Network plans to provide users with unique digital identities that prove their humanness across platforms. Although it promises privacy and security, the project has been banned or investigated in countries such as Spain, Brazil, and India over data protection issues. As World Network expands into five US states, experts warn that America’s patchwork of biometric laws and uneven enforcement could leave users vulnerable to discrimination, surveillance, and exploitation.
Regulatory Patchwork Poses Challenges
On April 30, World co-founder Sam Altman announced that the project would establish iris-scanning hubs in Atlanta, Austin, Los Angeles, Miami, Nashville, and San Francisco. At these locations, individuals can verify their humanness by scanning their irises, creating unique biometric markers tied to their digital identities. World Network claims this process protects privacy while preventing fraud, but critics argue that relying on sensitive biometric data amplifies risk. Unlike many AI endeavours, World directly collects users’ irreversible iris scans: data that, if compromised, cannot be changed like a password.
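World’s actual pipeline involves specialised Orb hardware and zero-knowledge proofs, but the core idea of one-identity-per-person verification can be illustrated with a minimal Python sketch. All names here are hypothetical, and a real system would use fuzzy biometric matching rather than exact hashes; the sketch only shows how a one-way identifier derived from a biometric template can enforce uniqueness without retaining the raw scan.

```python
import hashlib


def derive_identifier(iris_code: bytes, salt: bytes) -> str:
    """Derive a stable, one-way identifier from an iris template.

    Only the hash is stored, so the stored value cannot be reversed
    into the original scan (illustrative only; production systems
    use fuzzy matching and zero-knowledge proofs).
    """
    return hashlib.sha256(salt + iris_code).hexdigest()


class HumannessRegistry:
    """Tracks which identifiers have already been verified,
    enforcing one identity per person without keeping raw scans."""

    def __init__(self, salt: bytes = b"demo-salt"):
        self._salt = salt
        self._seen: set[str] = set()

    def verify(self, iris_code: bytes) -> bool:
        """Return True on first enrollment, False for duplicates."""
        uid = derive_identifier(iris_code, self._salt)
        if uid in self._seen:
            return False
        self._seen.add(uid)
        return True


registry = HumannessRegistry()
print(registry.verify(b"alice-iris-template"))  # → True (first enrollment)
print(registry.verify(b"alice-iris-template"))  # → False (duplicate rejected)
```

The sketch also makes the critics’ point concrete: even a hashed identifier is derived from a biometric that, unlike a password or the salt above, can never be rotated if the underlying template leaks.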
The absence of a comprehensive federal biometric law complicates World’s US rollout. California’s Consumer Privacy Act, which classifies biometric data as sensitive personal information, and Texas’s Capture or Use of Biometric Identifier Act offer some protections, but enforcement varies. California residents can sue over data breaches involving their personal information, while in Texas only the attorney general may enforce the law, leaving users without private recourse. Users in Georgia, Tennessee, and Florida have no state biometric statute at all and must rely on general federal consumer-protection enforcement, with no specialised safeguards for iris scans. This mosaic of laws raises concerns that data handling and protections will differ drastically across states.
Global Backlash and Adoption Opportunities
Worldcoin’s iris-scanning technology has faced bans and investigations worldwide. Authorities in India, South Korea, Colombia, and Argentina are probing its data collection practices, while Spain imposed a ban in March 2024 and Hong Kong followed in May 2024. Brazil’s data protection authority banned World Network’s operations in January 2025, citing irreversible biometric capture and the risk of exploitation. In Germany and Kenya, regulators ordered the deletion of thousands of user records and imposed fines for non-compliance. These actions underscore global unease with biometric projects that lack strong consent mechanisms.
Despite global pushback, World Network has secured notable partnerships. On May 1, Match Group’s Tinder began trialling World ID in Japan, allowing users of the country’s leading dating app to verify that they are real people. Ethereum Foundation co-executive director Tomasz Stańczak praised the project’s robust privacy features, though some critics find its iris-based approach dystopian. In the United States, success on apps like Tinder, Bumble, or Hinge could onboard more than 7.8 million monthly users overnight. However, America’s fragmented privacy laws, varying enforcement priorities, and active litigation over biometrics in states like Illinois and New York present significant legal and regulatory hurdles.
World Network: Precedent in the Balance
As World enters the United States, it confronts both potential and peril. While partnerships with global platforms signal demand for secure human verification, America’s fragmented privacy landscape, coupled with active litigation and uneven enforcement, casts a long shadow. As activists and regulators debate the ethics of biometric identity, users face difficult trade-offs between convenience, security, and autonomy. Whether World can win trust will depend on its ability to navigate varied legal regimes and uphold transparent practices. Its outcome may set a precedent for biometric systems worldwide.