Portfolio Manager Richard Clode comments on NVIDIA’s latest earnings announcement and discusses the factors supporting more sustainable returns for the company.
The market debate around dominant AI chipmaker NVIDIA since the second half of 2023 has focused more on the trajectory of growth in 2025 and beyond than on short-term news flow, such as yesterday’s very strong quarterly results, which included a 265% jump in quarterly revenue from a year earlier.1 Since the CES annual consumer technology trade show earlier this year, and again overnight with these results, NVIDIA has gone a long way towards convincing the market that the company’s stellar performance is not about to plateau anytime soon.
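To put that headline number in context, the announcement reported quarterly revenue of approximately US$22.1 billion against approximately US$6.05 billion a year earlier, so the year-on-year change works out as:

\[
\frac{22.1 - 6.05}{6.05} \approx 2.65 \quad\Rightarrow\quad \text{a rise of roughly } 265\%
\]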
Key vectors behind that confidence include generational transitions in compute, the breadth of the customer base, ongoing supply constraints, the company’s chip roadmap and the potential to reignite its business in China:
1 Re-architecting trillions of dollars of datacentre infrastructure
Management has talked about US$1 trillion of currently installed datacentre infrastructure that must be re-architected to support a world of accelerated compute and generative AI. This datacentre infrastructure also needs to grow to US$2 trillion over the next five years, according to NVIDIA.
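For scale, doubling from US$1 trillion to US$2 trillion over five years implies a compound annual growth rate of roughly 15% for the installed base:

\[
\left(\frac{2}{1}\right)^{1/5} - 1 = 2^{0.2} - 1 \approx 0.149 \approx 14.9\% \text{ per annum}
\]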
2 Broadening customer base
NVIDIA’s customer base is expanding from its traditional concentration among cloud hyperscalers to enterprises across multiple industries, with management calling out multi-billion-dollar businesses in automotive, healthcare and financial services. Adding to this is a unique feature of generative AI: the desire of governments globally to build sovereign AI infrastructure and capabilities.
3 Ramping up supply & an accelerating roadmap
NVIDIA is ramping up supply as well as pulling in its chip roadmap to meet this demand and feed an insatiable appetite for compute power. Exponentially larger transformer-based language models trained on ever greater data sets are required to unlock new functionality and revenue-generating potential across online advertising, recommender engines (advanced data filtering systems that predict which content, products, or services a customer is likely to consume or engage with), content creation, drug discovery and AI Copilots.2 Key supply bottlenecks related to the production of AI chips, such as CoWoS advanced packaging3 and high-bandwidth memory (HBM),4 are being addressed with the qualification of new suppliers. NVIDIA has also accelerated its chip roadmap: the new H200 product now being ramped offers twice the inferencing performance of the H100, while the next-generation B100 is expected to launch in the second half of 2024. The superior performance of NVIDIA’s latest chips is enabling the company to increase its pricing, driving content and average selling price (ASP) improvements as a key vector of growth, as the sketch below illustrates.
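A minimal sketch of why that pricing power holds, using entirely hypothetical prices and throughput figures (not NVIDIA’s actual numbers): if a new chip doubles inference throughput, its unit price can rise well above the prior generation’s while the buyer’s cost per inference still falls.

```python
# Illustrative sketch with hypothetical prices and throughputs (not NVIDIA's
# actual figures): doubling inference throughput lets a vendor raise the unit
# price while still lowering the buyer's cost per inference served.

def cost_per_million_inferences(chip_price_usd: float,
                                inferences_per_sec: float,
                                useful_life_years: float = 4.0) -> float:
    """Hardware cost amortised over lifetime inference volume (ignores power and hosting)."""
    lifetime_inferences = inferences_per_sec * 3600 * 24 * 365 * useful_life_years
    return chip_price_usd / lifetime_inferences * 1_000_000

# Previous-generation chip: assumed US$25k price, 1,000 inferences/second
old_gen = cost_per_million_inferences(25_000, 1_000)
# New-generation chip: assumed 2x throughput at a 60% higher price
new_gen = cost_per_million_inferences(40_000, 2_000)

print(f"old generation: ${old_gen:.3f} per million inferences")
print(f"new generation: ${new_gen:.3f} per million inferences")
```

On these assumed figures the newer chip costs 60% more per unit yet works out roughly 20% cheaper per inference served, which is the economic logic behind customers accepting higher ASPs.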
4 A future in China again?
Tightened US semiconductor export restrictions have curtailed NVIDIA’s business in China to only a mid-single-digit percentage of its datacentre business, but the company is now sampling new compliant chips in China, with the potential to reignite its datacentre business in the region.
Confidence in these growth vectors has continued to drive up market consensus earnings estimates, which have risen by more than 400% in the past year,5 surpassing the rise in the share price and thereby helping to keep valuations in check. Even with the continuing hype around AI, and unlike in 2020, the return of a real cost of capital following the end of the era of cheap funding means the market is now rewarding companies that deliver strong fundamentals. As long-established technology equities specialists, we very much welcome this: our experience has shown that share price performance supported by real profits and cashflows is much more sustainable than hype alone.
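The valuation mechanics are straightforward: the forward price-to-earnings multiple is price divided by expected earnings, so when consensus estimates rise faster than the share price, the multiple compresses. With purely illustrative numbers, if estimates rise fivefold (+400%) while the share price rises, say, three-and-a-half-fold over the same period:

\[
\frac{P}{E} \;\rightarrow\; \frac{3.5\,P}{5\,E} = 0.7 \times \frac{P}{E}
\]

i.e. the forward multiple ends the period 30% lower despite the large share price gain.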
1 NVIDIA earnings announcement, three months to 28 January 2024.
2 AI Copilot: an intelligent virtual assistant that leverages large language models (LLMs) to facilitate natural, human-like conversational interactions, supporting users in a wide variety of tasks.
3 CoWoS (Chip-on-Wafer-on-Substrate): an advanced packaging technology that stacks chips and packages them onto a substrate, reducing the space required for chips as well as power consumption and costs.
4 High-bandwidth memory (HBM): boosts bandwidth by shrinking the size of memory chips and stacking them in a more efficient design.
5 Bloomberg, as at 21 February 2024, NVIDIA median consensus earnings per share estimates. There is no guarantee that past trends will continue, or forecasts will be realised.