Nvidia is rolling out OpenAI's Codex company-wide, making the agentic coding tool available to more than 10,000 employees following an internal pilot, CEO Jensen Huang told staff in an email that was shared publicly on Thursday.
The rollout spans engineering, product, legal, finance, marketing and other teams. Nvidia said Codex now runs on its GB200 NVL72 systems, delivering faster token processing and lower inference costs, and that pilot participants reported shorter debugging cycles and faster feature deployment, with some tasks dropping from days to hours.
Sam Altman shared Huang's email on X and hailed the experiment, writing, "We tried a new thing with NVIDIA to roll out Codex across a whole company and it was awesome to see it work," and adding, "Let us know if you'd like to do it at your company!" The public repost underscored how closely Nvidia and OpenAI are aligning around agentic tools.
Huang framed the change for employees as a shift in the role of AI inside a company: "Chatbots answer questions. Agents do work," he wrote, urging staff toward rapid adoption with, "Let's jump to lightspeed. Welcome to the age of AI."
Context for those claims: Codex is OpenAI's agentic coding tool powered by GPT-5.5, and Nvidia has been both a supplier to and an investor in the wider AI ecosystem. In March, Huang suggested Nvidia's latest $30 billion investment in OpenAI could be its final major private funding round before OpenAI goes public, said he does not expect Nvidia to pursue a $100 billion investment, and described Nvidia's $10 billion investment in Anthropic as likely to be its last as well.
The technical pitch from Nvidia is straightforward: with Codex running on GB200 NVL72 systems, token processing is faster and inference costs are lower, which in turn lets teams move code through debugging and deployment more quickly. The internal pilot supplied the concrete numbers behind that message, including tasks that previously took days now completing in hours.
The move exposes a practical tension. Nvidia is pushing agentic tools into the day-to-day work of non-engineering teams even as it signals restraint on future headline investments in AI rivals. The company's public embrace of Codex as an internal teammate sits beside Huang's recent comments that the $30 billion investment in OpenAI could be the last large private capital infusion and that Nvidia is unlikely to back a $100 billion round, a constraint that could shape how closely tied Nvidia remains to OpenAI as the latter scales toward an IPO.
There is also the larger industry question Altman's repost raises: if a deep integration inside Nvidia can run at scale, other companies will be asked to consider similar rollouts. Altman's invitation to "Let us know if you'd like to do it at your company!" is both a sales pitch and a signal of confidence that GPT-powered agents can be embedded across functions, not just in engineering.
For readers inside companies weighing the same choice, the simple lesson from Nvidia's experiment is this: when an entire firm gives thousands of people access to an agentic tool and reports measurable speedups, the question changes from whether agents can work to how organizations will manage them. Nvidia's rollout, backed by performance claims and the company's proprietary GB200 NVL72 infrastructure, makes the next moves the thing to watch: wider external offerings, tighter operational controls, or a new investment calculus.