For institutional investors in the UK

Enabling AI: Tailwinds for semiconductors and data centres

Portfolio Managers Richard Clode and Guy Barnard discuss the latest nVIDIA product launch, and provide examples of how tech and property companies are coming together to enable and benefit from the ongoing demand for genAI.

Richard Clode, CFA

Portfolio Manager


Guy Barnard, CFA

Co-Head of Global Property Equities | Portfolio Manager


27 Mar 2024
6 minute read

Key takeaways:

  • nVIDIA continues to deliver enhanced AI performance and speed, alongside reduced energy consumption.
  • Data centre REITs are benefiting from strong demand, driven by rising data centre infrastructure growth and spending.
  • Tech and property companies are partnering up, creating multiple growth pathways to meet growing genAI demand.

With generative AI (genAI) still in its infancy, a wide range of exciting existing and new use cases is spurring increasing demand and investment from corporates and governments. However, all of this needs AI infrastructure – the hardware (eg GPUs, servers, data storage) and software (programming languages, platforms, tools). Equally important are the data centres that house the servers, which require large amounts of energy to power and cool them. The firms behind these critical AI enablers are partnering with each other in the race to harness the benefits of AI.

The tech view (Richard)

nVIDIA’s recent GTC event in San Jose was taglined ‘the conference for the era of AI’ and it did not disappoint, with a huge swathe of innovation announced. The centrepiece was the launch of the next-generation chip architecture – Blackwell – named after David Blackwell, the first African-American mathematician to be inducted into the National Academy of Sciences. The new chip boasts a mammoth 208 billion transistors compared to its predecessor’s 80 billion,1 but achieves this by stitching together two separate GPU dies, as the size of each die has reached the physical reticle limit of an ASML lithography tool (the maximum area a lithography tool can pattern to build a chip).

nVIDIA continues to challenge the fading of Moore’s Law through innovative advanced packaging, as well as leveraging significantly higher-bandwidth memory. However, the company believes that the colossal complexity of generative AI can only be tackled via a full stack solution. This pulls together nVIDIA’s proprietary high-performance networking and interconnects, such as NVLink, as well as software with a second-generation transformer engine that enables large language models to be trained and inferenced with lower-precision calculations. This capability significantly accelerates performance while also reducing power consumption.

At GTC, nVIDIA took this full stack solution to the next level by unveiling the GB200 NVL72 (a rack-scale server system that uses GPGPUs to accelerate deep learning applications). Traditionally, nVIDIA has sold boards comprising eight GPUs, but the new product is a full rack solution comprising 72 GPUs as well as 36 Grace ARM-based CPUs – all interconnected via thousands of NVLink connections. This solution delivers unprecedented compute power, enabling a trillion-parameter large language model (LLM) to be trained 4x faster using only 25% of the power compared to nVIDIA’s H100 Tensor Core GPUs. More impressively, AI inferencing can be done 30x faster. The sheer horsepower of compute consumes a lot of power, but nVIDIA is transitioning from air to liquid cooling. This enables the rack to increase its compute density without overheating, reducing the floor space required, which is a key driver of overall power consumption in data centres. Compared to an H100 air-cooled infrastructure, a GB200 can deliver 25x the performance at the same power while also reducing water consumption.

nVIDIA recently revealed that demand from sovereigns (governments) was as large as that of the three largest hyperscalers combined. This is a key reason why nVIDIA’s CEO believes the current US$1 trillion installed base of data centre infrastructure can grow to US$2 trillion in the next four to five years, with annual data centre infrastructure spending growing from US$250 billion to US$500 billion over the longer term.2 The full server rack offering of the GB200 NVL72 is currently the ideal solution for sovereigns that lack the technological capability to design their own AI training or inferencing clusters and are more willing to buy off-the-shelf.

The property view (Guy)

The growth of AI and machine learning applications, which require substantial computational power and data storage, is the latest of many factors driving demand from potential tenants for advanced data centre capabilities. Data centre REITs are seeing surging demand from customers looking to secure their ambitious growth plans in AI and digital infrastructure. In fact, new leasing activity in the US during 2023 matched the total for the three prior years (2020-2022) combined.3

Some forecasts for future AI-driven demand growth are staggering: Morgan Stanley forecasts that the public cloud market will surge from more than US$500 billion currently to a US$2.5 trillion opportunity by 2032.4 Boston Consulting Group forecasts that the future demands of AI computing will result in a threefold increase in the electricity consumed by data centres by 2030 – equivalent to the annual electricity consumption of a third of US homes.5

Equinix, the world’s largest data centre REIT, is an example of a company that has benefited from the continued growth in data centre demand. Founded 25 years ago, Equinix has developed the world’s most interconnected data centre platform and is a truly global company, with more than half of its revenues generated outside the US. The company also aims to be climate neutral globally by 2030, and has achieved more than 90% renewable energy coverage in recent years.

Equinix recently announced a partnership with nVIDIA to offer the chip firm’s supercomputing systems to corporate clients on an outsourced basis. This makes it easier for Equinix customers to operate “private AI” computing systems, allowing for customised use cases and better control over proprietary data, rather than relying on cloud-based computing from the likes of Amazon (AWS) and Microsoft (Azure), which can raise data privacy concerns for some users. nVIDIA has trained Equinix staff to build and run these systems, which opens up a new level of service for Equinix to provide to its tenants – and ultimately a new revenue stream.

We believe this is one example of multiple growth pathways for data centres as the world looks to compete and stay relevant in a rapidly-changing technological era.

In summary

The broadening demand for genAI is creating both opportunities and risks, with potentially significant gains for its key enablers. This, combined with the shift to new technologies that enable a lower power footprint, creates multiple vectors of opportunity for active investors across sectors, including technology and property.

In a follow-up article we will take a deeper dive, from a geopolitical and sustainability perspective, into how the growth of AI and data centres is fuelling semiconductor trade tensions, the risks from regulatory reform given the insatiable power needs of AI, and the implications for investors.

These themes are also explored in an upcoming article by the Global Sustainable Equity Team entitled, “Data center boom: Navigating the power crunch.”

1 nVIDIA press room, 18 March 2024: NVIDIA Blackwell Platform Arrives to Power a New Era of Computing.

2 Jensen Huang keynote speech at GTC 2024.

3 UBS, Data Center Hawk, 24 February 2024.

4 Morgan Stanley, IDC. Morgan Stanley Research, Global Telco, Tech & Infrastructure, 27 February 2024.

5 NuScale non-proprietary presentation. Data from Boston Consulting Group: The Impact of GenAI on Electricity. How GenAI is Fueling the Data Center Boom in the U.S., 13 September 2023.

Glossary

AI inferencing: refers to artificial intelligence processing where knowledge from a trained neural network model is used to infer a result.

Full rack solution: typically pertains to data centres, server hosting, and networking environments. It refers to renting or purchasing a complete rack of server equipment and services in a data centre. This solution includes the physical rack space, the servers and hardware housed within that space, networking equipment (like switches and routers), power management systems, and cooling mechanisms to ensure the equipment operates efficiently and reliably.

Full stack solution: refers to a comprehensive approach to software development that covers all layers of an application or project. This includes both the front-end and back-end components, as well as any other layers necessary for the application to function fully.

GenAI: generative AI refers to deep-learning models that train on large volumes of raw data to generate ‘new content’ including text, images, audio and video.

GPU: a graphics processing unit performs complex mathematical and geometric calculations that are necessary for graphics rendering.

GPU die: refers to the bare piece of silicon on which a processing unit (a GPU or CPU) is built. ‘Die’ is often used interchangeably with ‘chip’.

GPGPU: general-purpose graphics processing units leverage the power of GPUs, which are conventionally used for generating computer graphics, to carry out tasks traditionally done by central processing units (CPU).

Hyperscalers: companies that provide infrastructure for cloud, networking, and internet services at scale. Examples include Google, Microsoft, Facebook, Alibaba, and Amazon Web Services (AWS).

LLM (large language model): a specialised type of artificial intelligence that has been trained on vast amounts of text to understand existing content and generate original content.

Moore’s Law: predicts that the number of transistors that can fit onto a microchip will roughly double every two years, therefore decreasing the relative cost and increasing performance.
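As a rough illustration (our own formulation, not taken from the article), Moore’s Law can be expressed as

$N(t) \approx N_0 \times 2^{t/2}$

where $N_0$ is the current transistor count and $N(t)$ the count $t$ years later. Starting from 80 billion transistors, for example, two doublings (roughly four years) would imply around 320 billion.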

IMPORTANT INFORMATION

There is no guarantee that past trends will continue, or forecasts will be realised.

Technology industries can be significantly affected by obsolescence of existing technology, short product cycles, falling prices and profits, competition from new market entrants, and general economic conditions. A concentrated investment in a single industry could be more volatile than the performance of less concentrated investments and the market.

REITs or Real Estate Investment Trusts invest in real estate, through direct ownership of property assets, property shares or mortgages. As they are listed on a stock exchange, REITs are usually highly liquid and trade like shares.

Real estate securities, including Real Estate Investment Trusts (REITs) may be subject to additional risks, including interest rate, management, tax, economic, environmental and concentration risks.

These are the views of the author at the time of publication and may differ from the views of other individuals/teams at Janus Henderson Investors. References made to individual securities do not constitute a recommendation to buy, sell or hold any security, investment strategy or market sector, and should not be assumed to be profitable. Janus Henderson Investors, its affiliated advisor, or its employees, may have a position in the securities mentioned.

Past performance does not predict future returns. The value of an investment and the income from it can fall as well as rise and you may not get back the amount originally invested.

The information in this article does not qualify as an investment recommendation.

Marketing Communication.
