
Silicon Valley can’t stop talking about Cluely, a startup that has reportedly hit $7 million ARR just a week after launching its enterprise AI product. Created by Roy Lee and Neel Shanmugam, the tool delivers real-time notes, context, and suggestions in meetings, something businesses are scrambling to adopt. But behind the buzz is a murky origin: Cluely evolved from a tool literally built to cheat coding interviews. That legacy raises real questions. Is Cluely simply a brilliant pivot, or is it just tech-washed fraud in a new suit? In a post-deepfake world, success doesn’t always equal legitimacy.
A Success Story… Built on Cheating?
Cluely’s rise is as fast as it is uncomfortable. Originally launched as Interview Coder, a tool explicitly designed to bypass online technical tests, it got its founders suspended from Columbia University in early 2025. Viral outrage and fascination turned it into a brand. Within months, Lee and Shanmugam raised over $20 million from top firms like Andreessen Horowitz and rebranded with a cheeky slogan: “This feels like cheating.” Now, Cluely’s new enterprise version is attracting paying clients and generating hype. But that raises a deeper ethical concern: can a company that started with deception be trusted at scale?
Critics argue that its core premise, AI that helps people “look smarter” in real-time, is just a more polished form of digital dishonesty. Anti-cheating platforms like Truely and Proctorio already target Cluely, claiming it’s being used to fake expertise during interviews and exams. Meanwhile, the founders don’t seem fazed. Lee has even suggested smart glasses or brain implants to evade detection, remarks that blur the line between visionary and villain. Cluely might be technically brilliant, but if it empowers fraud under the guise of productivity, how long before regulators or customers push back?
Threats, Hype, and a Crumbling Illusion?
Cluely’s trajectory raises a classic Silicon Valley dilemma: disruption or deception? The product works. Live notetaking, proactive suggestions, and an invisible UI give users a real edge in meetings and interviews. But that very edge, when used without disclosure, can cross ethical lines. Is it a productivity assistant or a ghostwriter with plausible deniability? The threats are now coming from all sides. On July 3, 2025, a rival named Pickle launched Glass, an open-source clone of Cluely’s core features. It immediately gained traction with developers, potentially undermining Cluely’s paid model. Worse for Cluely, Glass is free, transparent, and ethically cleaner.
On the other side, anti-cheat tech firms are moving fast. Validia’s Truely sets off alerts when Cluely is detected, especially during interviews or online exams. The backlash isn’t theoretical; companies and universities are already flagging users. And let’s not ignore Cluely’s past. The same founders who built a cheating tool are now asking enterprise clients to trust them with confidential meeting data. Even with top-tier VC backing and rapid ARR growth, this house of cards could collapse if the narrative shifts from “genius AI” to “systemic scam.” Cluely may be selling real software, but what it enables might be too close to fraud for comfort.
Scam or Smart? The Jury’s Out
Cluely sits on a razor’s edge between game-changing innovation and ethically questionable software. Its features are undeniably useful, but so is a mask that fools a lie detector. As real-time AI becomes more common, the question isn’t whether Cluely works. It’s whether it should exist in its current form. With competitors closing in and watchdogs circling, its next chapter depends on transparency, accountability, and maybe a shift away from “cheat culture” branding. Scam or smart? Maybe it’s both. Maybe it’s a glimpse into the blurry future of trust, truth, and tech.