OpenAI Turns to Google’s AI Chips to Diversify Beyond Nvidia GPUs
OpenAI Rents Google’s TPUs to Reduce Nvidia Dependence
OpenAI has begun renting Google’s tensor processing units (TPUs) through Google Cloud to power ChatGPT and other AI models, a notable shift from its longstanding reliance on Nvidia GPUs.
- Previously, OpenAI sourced Nvidia chips primarily via partnerships with Microsoft and Oracle for AI model training and inference.
- Renting Google’s TPUs is part of OpenAI’s strategy to diversify infrastructure and reduce dependence on Microsoft-managed data centers and Nvidia hardware.
Cost and Capacity Benefits of Google’s TPUs
- TPUs rented through Google Cloud are expected to lower OpenAI’s inference costs, providing a more affordable alternative to Nvidia GPUs.
- OpenAI’s expanding computational needs make Google Cloud’s services a key component of its scaling efforts.
Competitive Dynamics in AI Hardware Supply
- Google’s decision to rent its TPUs to OpenAI, a direct competitor, illustrates the mix of rivalry and collaboration shaping the AI hardware market.
- However, Google reportedly is not yet renting its most powerful TPUs to OpenAI.
- Google’s TPUs are also used by Apple and by AI startups such as Anthropic and Safe Superintelligence, part of a broader trend of Google offering externally hardware it once reserved for internal use.
Market Outlook: Alphabet Stock
- Alphabet (GOOGL) carries a Strong Buy analyst consensus with an average price target of $200.06, implying about 12.1% upside.
- Despite this, Alphabet stock is down roughly 6% year-to-date.