OpenAI, the creator of ChatGPT, announced a landmark multiyear partnership with semiconductor giant Broadcom on Monday to co-develop and deploy custom artificial intelligence accelerators, a move aimed at reducing its reliance on Nvidia, the dominant supplier of AI chips.
The multibillion-dollar collaboration will power OpenAI’s next generation of AI clusters with 10 gigawatts of OpenAI-designed hardware, embedding the company’s frontier model expertise directly into the chips for higher performance and energy efficiency.
Under the deal, OpenAI will lead chip architecture and design, while Broadcom will handle development, manufacturing, and integration with its advanced networking technologies, including Ethernet, PCIe, and optical connectivity. The accelerators will leverage standards-based Ethernet for scalable AI data centers—an approach that promises cost-effective, flexible infrastructure to meet soaring global demand.
Deployments are scheduled to begin in the second half of 2026, with full rollout across OpenAI facilities and partner sites targeted by 2029.
“This partnership with Broadcom is a critical step in building the infrastructure needed to unlock AI’s potential and deliver real benefits for people and businesses,” said Sam Altman, OpenAI’s co-founder and CEO. “Developing our own accelerators expands the ecosystem of partners building the capacity required to push the frontier of AI for the benefit of all humanity.”
Hock Tan, Broadcom’s president and CEO, called the alliance “a pivotal moment in the pursuit of artificial general intelligence.” He praised the deal’s scale and OpenAI’s leadership role in AI innovation since ChatGPT’s debut. “We are thrilled to co-develop and deploy 10 gigawatts of next-generation accelerators and network systems to pave the way for the future of AI,” he said.
The announcement comes as OpenAI faces mounting compute demands, driven by over 800 million weekly active users and rapid enterprise adoption of its models and APIs. While the company continues to source GPUs from Nvidia and AMD, insiders say developing custom silicon will allow OpenAI to optimize performance and cost for its proprietary workloads—accelerating breakthroughs in model training and reasoning.
Greg Brockman, OpenAI’s co-founder and president, underscored that strategic advantage: “By building our own chip, we can embed what we’ve learned from creating frontier models directly into the hardware, unlocking new levels of capability and intelligence.”
Broadcom, already a key supplier of networking systems to hyperscalers, sees the partnership as a natural fit. Charlie Kawwas, Ph.D., president of Broadcom’s Semiconductor Solutions Group, said: “Custom accelerators integrate seamlessly with standards-based Ethernet networks to deliver performance and cost efficiency for next-generation AI infrastructure.”
Analysts view the move as one of the boldest challenges yet to Nvidia's near-monopoly in AI chips, which has strained global supply chains and inflated costs for AI builders. With training clusters now drawing power on the scale of small nations (10 gigawatts could supply roughly 8 million U.S. homes), the OpenAI–Broadcom pact signals a new phase of massive, vertically integrated investment in AI hardware.
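The 8-million-homes figure can be sanity-checked with a quick back-of-the-envelope calculation. The sketch below assumes an average U.S. household consumption of roughly 10,500 kWh per year, a commonly cited estimate that is not stated in the article itself:

```python
# Back-of-the-envelope check of the "10 GW ~ 8 million U.S. homes" claim.
# Assumption: average U.S. household uses ~10,500 kWh per year.
HOURS_PER_YEAR = 8760
ANNUAL_KWH_PER_HOME = 10_500           # assumed average annual consumption

avg_kw_per_home = ANNUAL_KWH_PER_HOME / HOURS_PER_YEAR  # ~1.2 kW average draw
total_kw = 10e9 / 1e3                                   # 10 GW expressed in kW
homes_powered = total_kw / avg_kw_per_home

print(f"{homes_powered / 1e6:.1f} million homes")       # ~8.3 million
```

Under that assumption, 10 gigawatts of continuous capacity works out to a little over 8 million homes, consistent with the figure analysts cite.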
As OpenAI pushes toward its goal of artificial general intelligence, the partnership marks a turning point in the semiconductor landscape—where software innovation and custom silicon converge to power the next wave of intelligent systems.