Nvidia CEO Jensen Huang announced a major expansion of the company’s AI accelerator roadmap, projecting 65% year-over-year revenue growth for 2026 and confirming new data center chip designs entering volume production starting mid-2026. The update triggered immediate rallies across the semiconductor and tech sectors.
- Nvidia projects $98 billion in 2026 revenue, up 65% YoY
- H200 GPU production begins June 2026; B200 launch in Q2 2027
- 40% improvement in inference efficiency, 70% higher memory bandwidth
- 45% of projected 2026 revenue from non-GPU products
- NVDA shares rose 12% in after-hours trading
- Tech-sector XLK fund gained 3.4%; AMD up 7.2%; VIX declined 4.8%
Nvidia CEO Jensen Huang delivered a pivotal update during the company’s investor day, revealing a comprehensive expansion of its AI chip development pipeline. The announcement confirmed an accelerated deployment timeline for the H200 Tensor Core GPU, with volume production beginning in June 2026, followed by the next-generation B200 architecture slated for Q2 2027. Huang said the new chips would deliver a 40% improvement in inference efficiency and a 70% increase in memory bandwidth over current models.

The company’s revised guidance projects $98 billion in revenue for fiscal year 2026, up from roughly $60 billion in 2025, a 65% increase driven by sustained demand across cloud providers, enterprise AI deployments, and autonomous systems. The forecast underscores a strategic shift toward higher-margin custom silicon, with over 45% of projected revenue expected to come from non-GPU products such as networking and AI software platforms.

Markets responded swiftly. NVDA shares surged 12% in after-hours trading, the stock’s largest after-hours move since late 2023. The tech-sector XLK fund rose 3.4%, while AMD gained 7.2% amid renewed investor interest in AI chip competition. The VIX dipped 4.8%, signaling reduced market uncertainty around AI infrastructure supply chains.

Analysts now expect Nvidia to hold more than 50% of the AI accelerator market through 2027, with demand outpacing supply for the next 18 months. The updated roadmap also includes partnerships with major cloud providers, including AWS and Microsoft Azure, to co-develop next-generation inference clusters optimized for large language models.
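As a back-of-the-envelope check on the cited guidance (a reader-side calculation, not part of the announcement), a 65% year-over-year increase to $98 billion implies a prior-year base of about $59.4 billion, consistent with the roughly $60 billion reported for 2025:

```python
# Sanity check of the revenue figures cited above (illustrative only).
projected_2026 = 98e9   # projected FY2026 revenue, USD
growth_rate = 0.65      # stated year-over-year growth

# Prior-year revenue implied by exactly 65% growth
implied_2025 = projected_2026 / (1 + growth_rate)
print(f"Implied FY2025 revenue: ${implied_2025 / 1e9:.1f}B")  # ≈ $59.4B, i.e. roughly $60B
```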