OpenAI Partners with Broadcom and TSMC to Build Its First Chip, Scaling Back Foundry Ambitions

By Hiba Akbar

OpenAI is ready to build its first custom AI chip with Broadcom and TSMC.

Key Takeaways

  • OpenAI is building its first AI inference chip, scaling back its foundry network aspirations.
  • Broadcom is helping OpenAI design the chip and secure manufacturing capacity at TSMC.
  • OpenAI is diversifying its chip supply by adding AMD chips alongside Nvidia's.


According to sources cited by Reuters, OpenAI is working with Broadcom and Taiwan Semiconductor Manufacturing Company (TSMC) to build its first in-house AI chip. The move is intended to diversify its chip supply and reduce infrastructure costs, while the company also adds AMD chips alongside Nvidia's to meet rising demand.

OpenAI had contemplated building everything in-house, seeking funding for a costly plan to construct a network of chip manufacturing plants known as "foundries."

According to an anonymous insider source,

“OpenAI temporarily abandoned its ambitious foundry plans due to the time and cost required to construct such a network. Instead, the company aims to focus on in-house chip design.”

The strategy shows how a Silicon Valley startup can use industry partnerships, combining in-house design with external manufacturing, to secure chip supply and control costs much as its larger competitors Amazon, Meta, Google, and Microsoft do.

OpenAI's decision to source from a variety of chipmakers while designing its own chip may have far-reaching implications for the tech sector.

According to sources, OpenAI and Broadcom have been working together for months on the company's first AI chip, which will focus on inference. As more AI applications are deployed, analysts anticipate that demand for inference chips could outstrip the current demand for training chips.

Broadcom has served the industry for years, supporting businesses such as Alphabet's Google. It not only provides design components that move data on and off chips quickly, but also refines chip designs for production. This is critical in AI systems, which require tens of thousands of chips working together.

According to two of the sources, OpenAI is still deciding whether to build or buy other components for its chip design, and it may collaborate with additional partners.

The company has assembled a chip team of approximately 20 people, led by engineers, including Richard Ho and Thomas Norrie, who previously built Tensor Processing Units (TPUs) at Google.

OpenAI's generative AI systems require enormous computing power to train and run. The company is known as one of the largest buyers of Nvidia GPUs (graphics processing units), which it uses both to train models, so the AI can learn from data, and for inference, applying those models to make predictions or decisions based on new information.

Nvidia maintains roughly an 80% share of the GPU market. However, escalating costs are pushing major firms such as Microsoft, Meta, and now OpenAI to explore in-house or external alternatives.

AMD's new MI300X chip is also challenging Nvidia, with AMD projecting $4.5 billion in AI chip sales for 2024 following the chip's launch in the fourth quarter of 2023.

Sources added,

“OpenAI has been cautious about poaching talent from Nvidia because it wants to maintain a good rapport with the chip maker it remains committed to working with, especially for accessing its new generation of Blackwell chips.”

With AI innovation accelerating, new investments and launches are on the way. Recently, Nikkei reported that Toyota and NTT plan to enhance AI self-driving with a $3.3 billion R&D investment.

Don’t miss the latest AI and tech updates and news; keep in touch!
