
A former software engineer at OpenAI has shared rare insights into the company’s internal engineering culture. After 14 months working on Codex, he describes a fast-moving, code-first environment where ideas rise from the bottom up and leadership measures impact by shipped code. Python dominates the monorepo, with FastAPI and Pydantic driving backend services. Despite codebase sprawl and scaling pains, teams move fast. OpenAI has ballooned to over 3,000 employees but remains secretive due to external scrutiny. Infrastructure costs are driven mostly by GPUs, and sprints like Codex’s 7-week build reflect extreme output. The company balances moonshot AI research with high-velocity product delivery.
Engineering Reality at Scale and Velocity Inside OpenAI
The former engineer’s account paints OpenAI not as a polished corporate AI lab, but as a startup stretched to the edge of control. Slack, not email, runs the show. Reporting lines shift often. Anyone who ships first sets the technical standard. This makes OpenAI feel chaotic but productive, with technical leadership earned through action, not titles or decks. The engineering ethos values pushing production code fast, even in an overgrown monorepo.
Python is the dominant language across the stack, with FastAPI powering most backend services and Pydantic managing data contracts. Despite the repo’s wild inconsistency in code style, this combo allows rapid scaling. Rust and Go show up only at networking or infrastructure edges. The company has been hiring aggressively from Meta and elsewhere to shore up infrastructure talent, especially as GPU spending far exceeds any other cost.
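The account says Pydantic "manages data contracts," meaning request and response shapes are declared as typed models that reject bad data at the service boundary. A minimal sketch of that pattern, with a hypothetical schema (the names here are illustrative, not OpenAI's actual models):

```python
from pydantic import BaseModel, Field, ValidationError

# Hypothetical request schema -- illustrates the "data contract" pattern,
# not any real OpenAI API surface.
class CompletionRequest(BaseModel):
    prompt: str = Field(min_length=1)
    max_tokens: int = Field(default=256, gt=0, le=4096)

# Valid payloads parse into typed objects with defaults filled in...
req = CompletionRequest(prompt="hello")
print(req.max_tokens)  # 256

# ...while malformed ones fail loudly before any business logic runs.
try:
    CompletionRequest(prompt="", max_tokens=-1)
except ValidationError as exc:
    print(len(exc.errors()))  # 2 -- both fields rejected
```

The appeal for a fast-moving monorepo is that the contract lives next to the code: callers across teams get validation and type hints for free, without a separate schema registry.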
One standout anecdote is the Codex sprint: a small team shipped the public-facing coding agent in just seven weeks, working 16-hour days while juggling real life, including one engineer parenting a newborn. The pace is intense, and broken tests and duplicated libraries are accepted tradeoffs. Still, OpenAI's momentum comes from maintaining velocity at all costs, even as headcount balloons past 3,000.
What It Reveals About AI Culture, Ambition, and Risk Appetite
OpenAI’s internal structure shows how today’s leading AI firms blend high-risk, high-speed engineering with industrial-scale research. This ex-employee’s account makes clear that OpenAI operates more like a hyper-growth startup than a legacy tech firm. Everything from documentation to team organization is in flux. There’s no long approval chain; what matters is who ships. This cultural lean toward autonomy and raw speed mirrors OpenAI’s market position as both a research hub and viral product builder.
The Python-heavy stack and reliance on FastAPI and Pydantic hint at a shared principle: ship now, scale later. These tools offer just enough structure to handle thousands of daily changes across the monorepo. The cost? Frequent breakages and technical debt. But that tradeoff is intentional. The organization banks on iteration velocity and fast hiring to keep up. It’s a bet that aligns with how Codex, ChatGPT, and other releases exploded into public use almost overnight.
Crucially, this culture supports OpenAI’s ability to ship real-world AI products, tools that aren’t just research papers but viral consumer software. This rare view under the hood shows how tight loops between safety, infrastructure, and application layers help the company avoid stagnation and outpace competitors in AI productization.
Is This What the Future of Engineering Looks Like?
This engineer’s experience inside OpenAI exposes both the tension and the power of AI development today: relentless speed, cultural chaos, and a strategic focus on shipping usable intelligence. It also raises real questions about sustainability, both in team health and technical stability, as the company races to define the AI era. The blend of open-stack pragmatism, mega-scale research, and public deployment gives OpenAI a unique edge, but it’s not without cost. The insight here is not just how AI is built, but how the people building it are stretched. If this is the blueprint for next-gen engineering, is it something other teams should emulate?