OpenAI to Launch Its First Custom AI Chip with Broadcom and TSMC

OpenAI, the U.S.-based AI startup behind ChatGPT, is reportedly developing its first self-designed AI chip in partnership with Broadcom, with manufacturing by TSMC and a planned launch in 2026, according to Reuters sources.

The chip will focus on AI inference workloads, letting OpenAI run its trained models and serve user requests efficiently, rather than on training new models, a domain still dominated by Nvidia GPUs.

Collaboration with Broadcom and TSMC

OpenAI has been working closely with Broadcom for several months to co-develop the chip. While the company previously explored building its own fabrication facilities to diversify supply and reduce costs, it has shifted focus to internal chip design with partners, considering it a faster and more practical path.

TSMC will provide manufacturing capabilities, potentially producing OpenAI's first custom AI chip as early as 2026. OpenAI, Broadcom, and TSMC have not publicly commented on the project.

Market Context and Demand

The demand for AI inference chips is growing rapidly as more companies deploy AI models for complex, real-world tasks. Unlike training chips, inference chips focus on executing pre-trained models efficiently. Analysts expect that inference will eventually dominate AI workloads, highlighting the importance of custom chips optimized for these operations.

Investors have responded positively to the news:

  • Broadcom’s stock rose 4.2% to $179.24, with a 54% gain year-to-date.
  • TSMC’s U.S.-traded shares also increased by over 1%.

Broadcom, a leading ASIC designer, has previously developed custom chips for major clients including Google, Meta, and TikTok owner ByteDance.

Why OpenAI Is Choosing This Path

OpenAI’s services currently rely heavily on Nvidia GPUs. To scale efficiently and reduce dependence on third-party hardware, the company is seeking customized AI chips optimized for its workloads.

While building an in-house fab or a network of foundries remains a potential future option, partnering with Broadcom and TSMC enables OpenAI to accelerate chip development and production, meeting growing demand without the long lead times and capital expenses of establishing fabrication plants.

Strategic Implications
#

OpenAI’s custom AI chip is part of a broader strategy:

  • Data Center Investments: OpenAI plans to expand data center infrastructure to house these chips.
  • Reducing Hardware Dependence: Custom chips reduce reliance on Nvidia and AMD hardware for inference tasks.
  • Scaling AI Services: With GPT and other models growing in popularity, inference efficiency becomes critical to delivering real-time AI capabilities.

OpenAI CFO Sarah Friar commented on Bloomberg TV:

“It’s a challenging initiative from a capital perspective, but we’re learning a lot. In infrastructure, your capabilities determine your destiny.”

Conclusion

OpenAI’s first in-house AI chip, co-developed with Broadcom and manufactured by TSMC, marks a significant step in scaling AI infrastructure. By focusing on inference workloads, OpenAI aims to improve efficiency, reduce hardware dependency, and meet the growing demand for real-time AI services, positioning itself for the next phase of AI innovation.
