What AI Can and Cannot Do

Errors predicting trading patterns are of less consequence than those impacting people’s lives.

Shiny and New

My wife and I were excited to find our home. The downstairs is light and spacious, the finishings are to our taste, and we hurried to make an offer. Over time we have uncovered the snags behind the scenes. What was once shiny and new now needs repair through repeated use.

The longevity of products depends on how well they are made. Machine learning and AI tools depend on the quality of data. Surveys suggest 60% of development time is spent preparing it. Our experience building OTAS for equity markets was more like 80%. Even then there were constraints on what we could do.

The predictive trading tool benefits from the rules that govern stock exchanges, which dictate a number of important variables. The constraint is how much you can afford to pay for data and for the computing power to act quickly. Because OTAS supports traders, we controlled costs by operating at human speed.

The investing tool combines fundamentals, news and sentiment. It was longer range and less accurate than the trading tool, its outcomes subject to more factors and chance events. In many industries, conditions are so complex that particular outcomes cannot be predicted.

Hype and Over Automation

Narayanan and Kapoor coined the term AI snake oil for cases where companies peddle solutions to problems that AI does not and cannot solve. They cite examples of software predicting whether criminals will reoffend, one product relying on a 137-question survey. The resulting model still knows nothing of the individual motivations and chance events that shape actual outcomes. As a result, the authors claim, AI fails.

Many of the predictive AI failures they describe are data science issues: data that is incomplete or poorly understood, engineers unfamiliar with statistical techniques, and founders unaware of the limits of computer science. The resulting products are human knowledge in silicon, rather than artificial intelligence.

The problem AI Snake Oil uncovers is bloated promises from commercial companies to naïve customers, often in the public sector, who are desperate to cut costs. There is nothing unique to AI in this; snake oil salesmen are as old as commerce itself.

Hedge funds can make billions from being right 52% of the time. Judicious scaling of bets turns a small advantage into big wins. Being wrong almost half the time is a cost of doing business.
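The arithmetic behind that claim can be sketched with the Kelly criterion, a staking rule the article does not name, used here purely as an illustration: with even-odds bets won 52% of the time, staking a small fixed fraction of capital per bet gives a positive expected log-growth and compounds, while the same stake with no edge loses money to volatility.

```python
import math

def kelly_fraction(p):
    # Optimal stake for an even-odds bet won with probability p: f = 2p - 1
    return 2 * p - 1

def expected_log_growth(p, f):
    # Expected log-growth of capital per bet when staking fraction f
    return p * math.log(1 + f) + (1 - p) * math.log(1 - f)

p = 0.52
f = kelly_fraction(p)            # stake 4% of capital per bet
g = expected_log_growth(p, f)    # small but positive: capital compounds
print(f"stake = {f:.2f}, growth per bet = {g:.6f}")

# The same 4% stake with a 50% hit rate shrinks capital over time,
# because the volatility drag outweighs the (zero) edge.
print(f"no-edge growth = {expected_log_growth(0.50, f):.6f}")
```

The point is not the specific staking rule but the asymmetry: a 52% hit rate is a business model only when you can spread thousands of independent bets and size them to your edge, which is precisely what a service making one-off decisions about individuals cannot do.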

Services that determine people’s life chances cannot make decisions with a 52% success rate. Automating processes that do not work compounds problems, as seen in the racial bias of data on likely reoffenders, or the age bias in automated hiring software. The issue is pressure to perform, rather than failure in artificial intelligence.

Be Careful with Copilots

Using a general-purpose AI model for specific tasks carries a high risk of failure. Wherever possible, adapt foundation models with your own data. If you have not figured out what your data can tell you, be cautious about conclusions drawn by probabilistic models when there is no explanation for why their outcomes are correct.

This applies to copilots. These are foundation models fine-tuned for certain types of tasks, but not necessarily the ones you want done. If the results don’t seem right, then they probably aren’t. All the data in the world on reoffenders would not stop applications replicating the bias in law enforcement arrest records.

I advise managers not to delegate by dictating a process they have not tested. If you task people with finding solutions, no doubt you check their results before putting them into production. The challenge with AI is that few people possess the skills to challenge data scientists. Whether the expertise is in-house or hired through ChatGPT or an AI copilot, always question whether the outcomes make sense.

There is a race to adopt AI to find solutions that human intuition has failed to deliver. This leads to hype cycles and to automating roles that are not ready to be lost. If you make data-driven decisions where you only need to be right a majority of the time, then machine learning with skilled data scientists is for you.

If there are legal or regulatory consequences of being wrong, automate what already works, or work with a partner who has the skills to develop your solution. Buying off-the-shelf answers to problems you cannot solve yourself may cause more problems. If it does, then, like my wife and me, cross your fingers and hope that those who built behind the scenes did a professional job.

Questions to Ask and Answer

  1. Do I understand what the data in my business is capable of telling me?

  2. Do my data-driven processes deliver consistent, desirable outcomes?

  3. Have I bought AI to solve a problem I otherwise could not solve?

When you are ready there are three ways I can help:

  1. Schedule a call to chat or ask questions about AI with my tech team.

  2. Resolving Team Conflicts: A free email course tackling an issue that no one teaches you as a manager.

  3. The Profit Through Process Planner: My flagship course on how to design and invigorate a business that scales. I share 30 years of experience of researching, investing in and running companies, intermingled with the science and stories of business.
