
Approaching AI’s Age of Reason

Portfolio Manager Denny Fish explains that by exhibiting reasoning, artificial intelligence (AI) models can be deployed to solve increasingly complex problems, benefiting businesses and scientific enquiry and injecting productivity gains throughout the global economy.

Denny Fish

Portfolio Manager | Research Analyst


Feb 3, 2025
7 minute read

Key takeaways:

  • As they enter the “test-time inference” phase, AI models are increasingly capable of exhibiting reasoning to execute complex tasks.
  • This development should greatly improve the accuracy of AI models, resulting in productivity gains for businesses and Ph.D.-level output in the sciences.
  • The computing intensity required for inference should sustain AI-related capital expenditure, benefiting advanced chip and infrastructure companies, while the more advanced AI models could be a boon for software, internet, and services companies.

At the time of its release just over two years ago, we stated that OpenAI’s artificial intelligence (AI) platform, ChatGPT, could represent the most significant technological innovation in generations. Ensuing advancements in AI have only reinforced this view. Less noticed has been a more recent development that we consider equally revolutionary: AI models beginning to exhibit reasoning.

Given the speed of innovation – and our expectations for what could be a pivotal 2025 – we see this as an opportune time to provide an update on AI and how it may fundamentally change the way we interact with technology and reshape the global economy.

A look back

We have come very far, very fast. ChatGPT was quickly embraced as a horizontal platform that could alter the utility of technology. Use cases have started to mature across functions, ranging from customer support and content creation to coding and marketing. Not limited to services, AI is also being deployed in biology and the life sciences. Other foundational models have joined ChatGPT, including Claude, Google’s Gemini, Llama, and Mistral. These platforms have been in competition during AI’s training phase, and thus far the scaling laws – the relationship by which greater data inputs and computing capacity yield magnified results – have held.
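For readers who want a more concrete picture of what a “scaling law” means in this context, published research (for example, the 2020 Kaplan et al. study of language-model scaling) found that model error falls roughly as a power law as training compute grows. A stylised form – illustrative only, with symbols that are not drawn from this article – is:

\[ \mathrm{Loss}(C) \;\approx\; \left(\frac{C_0}{C}\right)^{\alpha}, \qquad \alpha > 0 \]

where C is training compute, C0 is a reference scale, and α is a small positive exponent. Each doubling of compute reduces loss by a roughly constant factor, which is why “more inputs, better results” has held so far.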

Enter test-time inference

After an intense training phase, many AI models are on the cusp of transitioning to their more operational inference phase. This has proven more interesting than expected. The ultimate goal is for these models eventually to achieve artificial general intelligence (AGI). This pursuit took a step-function leap forward in late 2024 with the release of OpenAI’s Strawberry platform, which marries reasoning and memory with large language models (LLMs).

What does this mean? Rather than interpreting inputs and predicting a “next step,” AI models instead think through problems iteratively to identify the best solution or path forward. Models can now learn from each iteration, with every additional step producing data that can be referenced in future uses, which should lead to exponentially greater accuracy. This advancement is called test-time inference, and we believe it will be especially useful for highly complex functions such as math, physics, coding, and other applications where the best answer is prioritized over the fastest one.
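To make the idea concrete, the toy sketch below mimics the pattern in ordinary Python: a simple problem (finding a square root) is refined over a chosen number of “reasoning steps”, and spending more inference-time compute yields a more accurate answer while leaving a trace of intermediate results. This is only an analogy for how test-time inference behaves – the function names (refine, solve) are illustrative and do not represent any vendor’s API or the internals of an actual reasoning model.

```python
# Conceptual analogy only: more inference-time "reasoning" steps -> a better answer.
# The "problem" here is computing the square root of 2; each step refines the current
# candidate (Newton's method), standing in for a model's iterative self-correction.

def refine(candidate: float, target: float) -> float:
    """One 'reasoning step': nudge the candidate toward a better answer."""
    return 0.5 * (candidate + target / candidate)

def solve(target: float, steps: int) -> tuple[float, list[float]]:
    """Spend `steps` units of test-time compute and keep the trace of
    intermediate answers (the data that, per the analogy, future runs could reuse)."""
    candidate, trace = 1.0, []
    for _ in range(steps):
        candidate = refine(candidate, target)
        trace.append(candidate)
    return candidate, trace

if __name__ == "__main__":
    for steps in (1, 2, 4, 8):
        answer, _ = solve(2.0, steps)
        print(f"{steps} step(s): answer={answer:.10f}, error={abs(answer**2 - 2):.2e}")
```

Running the sketch shows the error shrinking sharply as the step budget rises – the qualitative point being that accuracy is bought with additional inference-time computation rather than with a larger trained model.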

Test-time inference is akin to the board-game programs developed by Alphabet’s DeepMind. Unlike earlier programs, DeepMind’s AlphaGo Zero was not preloaded with human game data; it learned iteratively by playing against itself and quickly became capable of defeating the strongest human players.

An example from a professional setting is an AI startup that sought to deploy the technology to execute paralegal work. It quickly advanced to associate-level tasks, and one can presume that an AI partner is not far behind. Within academia, research projects that previously marshalled a few graduate researchers can now deploy dozens of AI bots.

These advancements in reasoning have moved the goalposts. Initially, the expectation was that the gains from scaling laws would diminish as the AI training phase matured. Instead, a new set of scaling laws has emerged, owing partly to the data produced within test-time inference. Staying with the academic example, within the past two years, AI platforms have evolved from performing at the level of a high-school student to that of a university student, and now deliver results on par with the work of a Ph.D. The speed at which this has occurred has given us, for the first time, visibility into what it will take to achieve AGI – a breakthrough that could transpire within the next three years.

A mad scramble?

Thus far, AI’s rollout has been relatively orderly. Recent advancements may change that. Rather than a tapering of capital expenditure as the training phase matures, investment levels may be sustained as AI platforms scramble to procure sufficient computing capacity to operate test-time inference.

This stage necessitates platforms being closer to the customer, and already the next generation of AI innovators is being funded to build upon these foundations. Grasping the magnitude of the opportunity, software companies are seeking to integrate AI into their offerings, and services companies are actively exploring ways to leverage AI to grow their businesses and improve efficiency. The “kicking the tires” phase has ended, and the time for outcomes and monetization has arrived.

Going mainstream

The possibility remains that, as the AI training phase matures, scaling laws might exhibit diminishing returns, thus lowering the demand for capex-intensive infrastructure.

The more likely scenario, in our view, is that new, complementary scaling laws take hold, which would keep AI-related investment high. Fortifying this argument is the shift in the AI opportunity from the $650 billion software market to the multi-trillion-dollar services sector. Demand for computing power should be further supported as AI incorporates more ambient and multi-modal inputs such as voice and images.

With AI’s potential becoming increasingly visible, players outside of services and research are staking their claim. Governments are developing “sovereign AI” to protect data, increase economic returns, and maintain a degree of technological independence. Within industry, enterprises are seeking to deploy AI to optimize manufacturing processes, design factories, and integrate efficiency-enhancing bots across operations.

An investor’s perspective

With computing intensity likely increasing with the deployment of test-time inference, recent impressive investment levels by hyperscalers could be sustained over the next couple of years. This is favorable for the producers of graphics processing units (GPUs), application-specific integrated circuits (ASICs), and other segments integral to AI infrastructure. The introduction of the next generation of – and considerably more powerful – GPUs should only reinforce this trend.

Within software, the shift from AI training to reasoning should benefit both infrastructure and applications software companies. The latter could see greater demand as providers harness AI to deliver solutions for high-value business processes and workflows. Internet companies are also likely to feel tailwinds given their ownership of LLMs and global footprint.

Outside the tech sector, power supply is an issue that must be addressed given AI platforms’ energy intensity. Hyperscalers are considering multiple solutions to avoid energy bottlenecks, including deploying dedicated power sources onsite, co-locating near nuclear plants, and studying the feasibility of small modular nuclear reactors and fuel cells.

AI has commanded investors’ attention since ChatGPT’s release. The earliest iterations could seem both sophisticated and rudimentary, sometimes delivering dubious, if not comical, outputs. An intense training phase has honed these models, and the advent of reasoning could result in capabilities that were considered theoretical just a few months ago.

The speed at which this is occurring means there are few segments of the global economy and financial markets that won’t feel the impact of AI’s deployment. While secular in nature, we think it’s reasonable for investors to expect the AI theme to translate into monetization over the near- to mid-term.

IMPORTANT INFORMATION

Technology industries can be significantly affected by obsolescence of existing technology, short product cycles, falling prices and profits, competition from new market entrants, and general economic conditions. A concentrated investment in a single industry could be more volatile than the performance of less concentrated investments and the market as a whole.

Energy industries can be significantly affected by fluctuations in energy prices and supply and demand of fuels, conservation, the success of exploration projects, and tax and other government regulations.

Concentrated investments in a single sector, industry or region will be more susceptible to factors affecting that group and may be more volatile than less concentrated investments or the market as a whole.

All opinions and estimates in this information are subject to change without notice and are the views of the author at the time of publication. Janus Henderson is not under any obligation to update this information to the extent that it is or becomes out of date or incorrect. The information herein shall not in any way constitute advice or an invitation to invest. It is solely for information purposes and subject to change without notice. This information does not purport to be a comprehensive statement or description of any markets or securities referred to within. Any references to individual securities do not constitute a securities recommendation. Past performance is not indicative of future performance. The value of an investment and the income from it can fall as well as rise and you may not get back the amount originally invested.

Whilst Janus Henderson believe that the information is correct at the date of publication, no warranty or representation is given to this effect and no responsibility can be accepted by Janus Henderson to any end users for any action taken on the basis of this information.