There’s no disputing NVIDIA’s (NASDAQ:NVDA) dominance in driving the AI revolution for now and the foreseeable future as it makes ever-faster advances with its chips. However, a 12-month backlog on its next-generation Blackwell architecture GPUs is causing many computer and technology sector customers to look elsewhere in the meantime.
Enter Marvell (NASDAQ:MRVL) and its custom-designed application-specific integrated circuit (ASIC) AI chips on which the hyperscalers are loading up.
These chips are designed for specific tasks, often offering better performance at a lower cost for certain AI workloads. Marvell’s CEO Matthew Murphy proclaimed the company is entering a “new era of growth” driven by demand for its ASICs.
Growing Its Niche Moat for the Hyperscalers
NVIDIA's AI dominance is driven by the embedded nature of its full-stack ecosystem: its powerful GPU architectures (Hopper and Blackwell), its CUDA software platform and development kits, networking solutions, partnerships, and a loyal, growing developer community. Marvell’s ASICs aren’t replacing NVIDIA’s Hopper AI chips. Instead, they enhance specific AI workloads customized to each hyperscaler’s needs.
Custom Designed ASICs for Microsoft, Amazon, and Google Cloud
Marvell is generating rich data center revenue growth from four custom ASICs.
The Maia-2 ASIC is a custom AI accelerator designed for Microsoft Corporation's (NASDAQ:MSFT) Azure platform, specifically for training and inference workloads.
The Axion is custom-designed for Alphabet's (NASDAQ:GOOGL) Google Cloud. It's a custom CPU built for Google's data centers to optimize workloads on its cloud platform.
The Trainium is custom-designed for Amazon.com Inc.'s (NASDAQ:AMZN) Amazon Web Services (AWS) to train large, complex machine learning models and power its AI services.
The Inferentia is also a custom-designed ASIC for Amazon. As the name hints, it's an AI inference accelerator for AWS, designed to run already-trained AI models efficiently and optimized for inference at scale.
Marvell competes with Broadcom Inc. (NASDAQ:AVGO) and its ASICs. Broadcom helped develop Google's Tensor Processing Units (TPUs) and reportedly designs ASICs for OpenAI.
Growth Spurt Began in FQ3: Data Center Revenue Surged 98% YoY
Marvell reported fiscal third-quarter 2025 EPS of 43 cents, beating consensus estimates by 2 cents. Revenue rose 7% YoY and 19% sequentially to $1.52 billion, firmly beating consensus estimates of $1.46 billion. Non-GAAP gross margin was 60.5%. Data center revenue rose 98% YoY and 20% quarter-over-quarter (QoQ) to $1.1 billion, accounting for 73% of total revenue, up from 40% in the year-ago period.
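As a quick back-of-the-envelope sanity check, the revenue mix and implied year-ago comparisons can be reproduced from the rounded headline numbers above. This is an illustrative sketch only; the inputs are approximations, not exact figures from Marvell's filings.

```python
# Illustrative check of the reported FQ3 figures (rounded/assumed inputs, not exact filing values)
total_revenue_fq3 = 1.516e9   # assumed ~$1.516B before rounding to the reported ~$1.52B
data_center_fq3 = 1.10e9      # ~$1.1B data center revenue

# Data center share of total revenue: ~73% as reported
dc_share = data_center_fq3 / total_revenue_fq3
print(f"Data center share of revenue: {dc_share:.0%}")  # ~73%

# Year-ago data center revenue implied by the reported 98% YoY growth
data_center_prior_year = data_center_fq3 / 1.98
print(f"Implied year-ago data center revenue: ${data_center_prior_year / 1e9:.2f}B")  # ~$0.56B

# Year-ago total revenue implied by the reported 7% YoY growth
total_prior_year = total_revenue_fq3 / 1.07
print(f"Implied year-ago data center share: {data_center_prior_year / total_prior_year:.0%}")  # ~39-40%
```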
Non-Data Center Revenue Began Recovery in FQ3
Non-data center revenue started its recovery in FQ3. Enterprise and carrier revenue rose 4% QoQ, and FQ4 growth is expected in the mid-teens. Automotive and industrial revenue fell 22% YoY but rose 9% QoQ; FQ4 revenue is expected to grow at a low-to-mid single-digit percentage rate QoQ. Consumer revenue was down 43% YoY but rose 9% QoQ; Marvell expects it to decline in the mid-teens percentage range QoQ in FQ4 due to seasonality.
Marvell Issues Upside FQ4 Guidance
For fiscal Q4 2025, Marvell expects to report EPS of 54 cents to 64 cents, versus consensus estimates of 52 cents. Revenue is expected to come in between $1.71 billion and $1.89 billion, versus consensus estimates of $1.65 billion.
CEO Matt Murphy commented, "The exceptional performance in the third quarter and our strong forecast for the fourth quarter are primarily driven by our custom AI silicon programs, which are now in volume production, further augmented by robust ongoing demand from cloud customers for our market-leading interconnect products. We look forward to a strong finish to this fiscal year and expect substantial momentum to continue in fiscal 2026."