Beyond the Hype Cycle

What seven decades of AI’s rises and falls teach us about surviving the next downturn.

A Seasonal Bubble

Articles arguing that AI is a bubble are multiplying. At first the questions concerned the valuation of AI investments such as Nvidia, now the world’s most valuable company. More recently, the doubts have shifted to the capabilities of the technology itself. If you are deploying AI, how do you protect yourself?

AI develops in cycles, often referred to as seasons. Over time, the optimism of an AI summer gives way to a few years of winter. Then interest is revitalised by technological developments that are not directly related to AI. These allow researchers to solve problems left open at the end of the previous summer, and a new one begins.

Alan Turing’s 1950 paper introducing the Turing Test is often treated as the birth of artificial intelligence. In 1957 Frank Rosenblatt simulated the perceptron, an early form of supervised learning. Theory became practice thanks to Boolean logic, early digital computers and wartime code-breaking.

The first AI summer started at the 1956 Dartmouth Conference. It was here that John McCarthy coined the term artificial intelligence. For the next two decades the focus of research was on Symbolic AI, logic-based reasoning, early natural language processing and game playing. Achievements included solving algebra puzzles, playing checkers and proving mathematical theorems.

In Symbolic AI, knowledge is represented by symbols, and rules manipulate those symbols to draw conclusions, make decisions and solve problems. The logic is precise, easy to debug and needs no huge datasets.
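To make the idea concrete, here is a toy forward-chaining rule engine in Python. It is a minimal sketch for illustration only; the facts and rules are invented.

```python
# Toy illustration of Symbolic AI: facts are symbols, rules derive new facts.
# The facts and rules here are invented for this example.
facts = {"socrates_is_human"}
rules = [
    # (premise, conclusion): if the premise is a known fact, assert the conclusion
    ("socrates_is_human", "socrates_is_mortal"),
    ("socrates_is_mortal", "socrates_will_die"),
]

# Forward chaining: keep applying rules until no new facts appear.
changed = True
while changed:
    changed = False
    for premise, conclusion in rules:
        if premise in facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)  # the engine has deduced mortality from a single starting fact
```

Everything the system concludes is traceable back to an explicit rule, which is what makes this style of AI easy to debug.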

The problem is that humans must translate expertise into formal rules. This limits what a system can know, because there is no way to encode intuition or handle ambiguous data, and new knowledge must be added by hand. Despite early promise, Symbolic AI stalled. There is never any guarantee that experts are working on a winning technology.

By the mid-1970s, the failure to meet natural language and robotics goals led to a loss of investment. This lasted until around 1980. In the meantime, the advent of microprocessors and early personal computers planted the seeds for AI’s revival.

A Second AI Summer

The second AI summer ran from around 1980 to 1987. It was built on expert systems and enabled by affordable minicomputers and workstations, relational databases to hold large rule sets, and improved programming tools. AI was deployed in industry for diagnosis, configuration and planning.

Expert systems suffered from several of the same failings as Symbolic AI: they were narrow, expensive to maintain, and industrial projects again under-delivered. Once more investment dwindled. Meanwhile, the scaling of integrated circuits allowed rapid performance improvements in personal computers, and networking expanded with the early internet, Ethernet in offices and the rise of client-server architecture.

An Exponential AI Summer

The mid-1990s saw a resurgence in machine learning as rule-based AI gave way to data-driven statistical approaches, including Support Vector Machines, probabilistic models and early deep learning experiments. The period through to 2008 saw an explosion of digital data, affordable high-performance PCs and servers, and open-source software such as Linux and Python.
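The contrast with hand-written rules can be seen in a few lines. The sketch below assumes scikit-learn is installed and uses its Support Vector Machine classifier on a classic toy dataset; the model learns its decision boundary from labelled examples rather than from coded rules.

```python
# Data-driven classification: no hand-written rules, only labelled examples.
# A minimal sketch using scikit-learn's Support Vector Machine.
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = datasets.load_iris(return_X_y=True)  # classic toy dataset
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

clf = SVC(kernel="rbf")  # the decision boundary is learned, not coded
clf.fit(X_train, y_train)
print(f"accuracy: {clf.score(X_test, y_test):.2f}")
```

The trade is clear: no expert has to write rules, but the system is only as good as the data it learns from.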

By now AI had momentum. There was barely a break before the fourth AI summer, which persists today. This is the deep learning boom of neural networks, big data and GPU[1] acceleration. Landmarks include the breakthrough in image recognition in 2012 and the development of transformers from 2017, which enabled generative AI.

At the same time, emerging technologies have benefited AI. Cloud computing platforms provide scale, and are a reason Microsoft, Amazon and Google dominate today. Smartphones generate massive datasets, and fast broadband enables AI-powered apps at scale. ChatGPT is a direct result of these developments.

Now fears of a new AI winter are rising. Knowledge based on probability is not suitable for all decision-making, which is why LLMs hallucinate and give different answers to the same question. If an answer cannot be relied upon, there are mission-critical business decisions that AI cannot be trusted to take. This may limit the uptake of AI Agents.
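A simplified sketch shows why identical prompts can produce different answers: generative models sample each output token from a probability distribution, so repeated runs diverge. The vocabulary and probabilities below are invented for illustration.

```python
import random

# Simplified sketch: an LLM picks each next token by sampling from a
# probability distribution, so identical prompts can yield different outputs.
# The vocabulary and probabilities here are invented for illustration.
next_token_probs = {"yes": 0.55, "no": 0.30, "maybe": 0.15}

def sample_answer():
    r = random.random()
    cumulative = 0.0
    for token, p in next_token_probs.items():
        cumulative += p
        if r < cumulative:
            return token
    return token  # fallback for floating-point edge cases

# Ask the "same question" five times: the answers vary.
print([sample_answer() for _ in range(5)])
```

For creative tasks this variability is a feature; for a mission-critical decision it is exactly the property that makes the answer hard to trust.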

There are also concerns about the high cost of compute and a lack of profitability of AI projects. If the big tech companies reduce capital expenditure then the valuations of AI companies could collapse.

This is more of an investment problem than a technological one. Over seven decades, different strands of AI have run their course. Sometimes this is because of a lack of enabling technology; at other times the limits are inherent in the type of AI.

Progress is never linear. The success that has been accelerating for 30 years is often described as exponential, meaning the rate of improvement itself keeps increasing: each advance compounds on the last.
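A toy calculation illustrates the compounding. Assuming, purely for illustration, that capability doubles every two years:

```python
# Illustration of compounding: assume (hypothetically) a doubling every
# two years. Over 30 years that is 2**15, roughly a 32,000-fold increase.
capability = 1.0
for year in range(0, 30, 2):
    capability *= 2
print(f"after 30 years: {capability:,.0f}x the starting level")
```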

It is unlikely to last forever. There may be limits to generative AI that speed and size alone cannot address. Yet unlike in earlier times, there are multiple strands of AI in progress. These include predictive and prescriptive analytics, reinforcement learning, neuro-symbolic AI, computer vision, causal inference models and natural language understanding without generation.

The dream of AI will not die, even if the company behind ChatGPT were to become the most spectacular failure yet. Were that to happen, the fallout would be significant. It is one reason why at MSBC we build systems that operate with a range of models. We recommend storing data and processes so that one company’s failure cannot deny you access.
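One hypothetical shape such a multi-model design might take is sketched below. The provider classes and their behaviour are invented placeholders, not a description of MSBC’s actual systems; the point is that business logic talks to an interface, with a fallback, rather than to a single vendor.

```python
# Hypothetical sketch of a provider-agnostic design: the business logic
# talks to an interface, not to any single vendor's API. The provider
# classes and their internals are invented placeholders for illustration.
from typing import Protocol

class TextModel(Protocol):
    def complete(self, prompt: str) -> str: ...

class PrimaryProvider:
    def complete(self, prompt: str) -> str:
        raise ConnectionError("provider outage")  # simulate a failure

class FallbackProvider:
    def complete(self, prompt: str) -> str:
        return f"fallback answer to: {prompt}"

def ask(prompt: str, providers: list[TextModel]) -> str:
    for provider in providers:
        try:
            return provider.complete(prompt)
        except ConnectionError:
            continue  # one company's failure does not block the request
    raise RuntimeError("all providers failed")

print(ask("Summarise Q3 sales", [PrimaryProvider(), FallbackProvider()]))
```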

Whether or not an AI bubble bursts, the technology will continue to develop. Meanwhile, businesses must make sure they are not exposed to single points of failure.

Questions to Ask and Answer

  1. Where is my data stored?

  2. Is it backed up with the same company?

  3. What would happen if ChatGPT ceased to exist?

Find out more. Hit reply to this newsletter and ask about Sustainable AI.

[1] Graphics Processing Unit – a market that Nvidia dominates
