How to Get the Most from AI

The hidden habits that turn AI hype into real returns.

AI is Working

Three-quarters of large US companies that track the data say they make positive returns on AI1. Given that it's a struggle to get a consistent response from ChatGPT, what are these corporations doing differently?

Around a third of the AI technology budget is spent on research and development into custom capabilities that might replace humans. That does not mean, however, that jobs will be lost.

The research is necessary given the way AI works. It is an iterative process to get an LLM to the level where it can solve problems. Understanding this has implications for our everyday use of ChatGPT.

Common Frustrations with LLMs

In July, Google and OpenAI announced they had achieved gold-medal scores at the International Mathematical Olympiad (IMO). Last month, a paper2 claimed similar results using several off-the-shelf models. With careful prompting, the researchers dramatically improved the LLMs' core performance.

Generative AI is built on transformer technology, which predicts the next token from everything that has gone before. After each prompt, models re-read the tokens in the current context window. In long threads, earlier content may be truncated, and models can struggle to use mid-conversation details.
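A toy sketch of the truncation effect, with a made-up token budget (real models allow thousands to millions of tokens; the numbers here are purely illustrative):

```python
# Hypothetical context window: the model only "sees" the most recent tokens.
MAX_CONTEXT = 8  # illustrative budget, far smaller than any real model

def visible_context(conversation_tokens, max_context=MAX_CONTEXT):
    """Return the tokens the model actually re-reads for its next prediction.

    Anything earlier than the window has been truncated, which is why
    details from early in a long chat can silently drop out.
    """
    return conversation_tokens[-max_context:]

chat = ["you", "are", "a", "helpful", "analyst", "use", "UK", "dates", "please", "now"]
# The opening "you are" has already fallen outside the window.
print(visible_context(chat))
```

The longer the conversation grows, the more of its beginning falls outside the window, so early instructions quietly stop shaping the output.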

In effect, an LLM recreates the response to every prompt. The longer you chat, the more likely the probability function is to generate an erroneous answer. The likelihood increases with truncation, poor prompt quality and task difficulty. This is why LLMs lose the thread of long conversations.
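Some back-of-envelope arithmetic makes the point. If each generated step is independently correct with probability p (a simplifying assumption, not a claim about any real model), the chance that an n-step answer is correct throughout decays quickly:

```python
# If each step is right with probability p, the whole chain of n steps
# is right with probability p ** n (assuming independence, a simplification).
def chance_all_correct(p, n):
    return p ** n

print(round(chance_all_correct(0.99, 50), 3))   # a short answer: ~0.605
print(round(chance_all_correct(0.99, 500), 3))  # a long thread: ~0.007
```

Even a model that is 99% reliable per step becomes a coin toss, and then far worse, as the thread lengthens.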

There is also a limit on how much a model can reason. If you ask it to solve complicated maths puzzles like the ones in the IMO, it will run out of test-time compute (tokens x steps x model calls) before finishing. The solution is to break down an answer into stages.
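Using the article's framing of test-time compute as tokens x steps x model calls, the budgeting logic can be sketched with illustrative numbers (none of these figures come from a real model):

```python
# Rough budget arithmetic for test-time compute, following the framing
# tokens_per_call x reasoning_steps x model_calls. Numbers are illustrative.
def total_tokens(tokens_per_call, steps, calls):
    return tokens_per_call * steps * calls

BUDGET = 1_000_000

# One giant unstructured attempt: 200 reasoning steps in a single call.
one_shot = total_tokens(tokens_per_call=8_000, steps=200, calls=1)

# The same work staged across 5 focused calls of 20 steps each.
staged = total_tokens(tokens_per_call=8_000, steps=20, calls=5)

print(one_shot > BUDGET, staged <= BUDGET)
```

Staging the work does not just fit the budget; each call starts from a clean, focused context instead of a sprawling one.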

The researchers created a verification and refinement pipeline. They had the model critique its own work. They then had it write a bug report. Then it used the bug report to correct its solution. The process was repeated until the answer was accepted. This worked for five of the six IMO problems, which is enough for a gold medal.
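The loop described above can be sketched in a few lines. The LLM call is stubbed out as a plain callable, not a real vendor SDK, and the acceptance check is a placeholder:

```python
# Sketch of a verification-and-refinement loop: solve, critique, file a
# bug report, repair, repeat until accepted. `llm` is any callable taking
# a prompt string and returning text; swap in a real API client in practice.
def solve_with_refinement(llm, problem, max_rounds=10):
    solution = llm(f"Solve step by step: {problem}")
    for _ in range(max_rounds):
        critique = llm(f"Find errors in this solution:\n{solution}")
        bug_report = llm(f"Turn this critique into a bug report:\n{critique}")
        if "no issues" in bug_report.lower():
            return solution  # the verifier accepted the answer
        solution = llm(
            f"Rewrite the solution to fix these bugs:\n{bug_report}\n\n"
            f"Original solution:\n{solution}"
        )
    return solution  # best effort after max_rounds of refinement
```

The key design choice is that each round is a fresh, focused call: the model never has to hold the whole history of attempts in one context.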

If you ask an LLM to solve complicated maths problems with a single unstructured prompt, it will fail. It will try multiple solutions and go down rabbit holes. It will run out of compute. Ask it to try again and the same things will happen.

If, however, you ask it to focus on a specific solution it will produce an answer. Give it several steps and it will improve that answer. You don’t need to be a computer science professor to do this.

Improving Your Sales Process

Christopher S Penn describes a similar process for improving sales. First use an LLM to create an ideal client profile from a handful of LinkedIn profiles for your target customers. Then have it review your website, PowerPoint, or product offering. Load both results into another prompt and ask the LLM how well your product meets the needs of the target.

That’s enough to rewrite your product pitch. Then ask the LLM to work as an expert sales coach and create selling frameworks that will work best for your product with the target profile. The gold medal step is to have the LLM compare these techniques to your performance on recorded calls. It will be brutal, but it will make you better.
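The sales workflow above is the same pipeline pattern: each prompt loads the previous output into the next. A sketch, with the LLM call again stubbed as a plain callable and all prompt wording my own paraphrase of the steps:

```python
# The sales process as a chain of prompts. `llm` is a placeholder callable
# (prompt string in, text out), not a real API; wire up a client in practice.
def sales_pipeline(llm, linkedin_profiles, pitch_material, call_transcript):
    icp = llm(f"Build an ideal client profile from these LinkedIn profiles:\n{linkedin_profiles}")
    review = llm(f"Review this website / deck / product offering:\n{pitch_material}")
    fit = llm(f"Client profile:\n{icp}\n\nReview:\n{review}\n\nHow well does the product meet this target's needs?")
    pitch = llm(f"Rewrite the product pitch to close these gaps:\n{fit}")
    frameworks = llm(f"As an expert sales coach, propose selling frameworks for this pitch and profile:\n{pitch}\n{icp}")
    # The "gold medal" step: compare the frameworks to real performance.
    return llm(f"Compare these frameworks to this recorded call and coach me:\n{frameworks}\n{call_transcript}")
```

Each stage produces the data the next stage needs, exactly as in the maths pipeline.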

The problem with single-prompt responses is that LLMs are not focused. Humans can be similarly unfocused when presenting and selling. We also go off on unnecessary detours, provide spurious detail and get lost in data. In both cases, sticking to the task at hand is the correct way to proceed.

LLMs are trained on huge bodies of data, but not on the data for the specific task you want solved. For that reason you must source the right data. If you don't have it, you can use the LLM to generate it. This is what a step-by-step solution does. Whether you are verifying and refining maths puzzles or a sales process, the idea is the same.

An Abundant Mindset

This is what large corporations are doing in the research and development of AI. They are iterating ways of doing specific tasks in the workplace. The positive returns come once the AI costs less to do the task than a human.

Yet this does not mean lower employment. A job is more than a task, or even a series of tasks. And there is no fixed amount of work to be done.

Geoffrey Hinton is known as the godfather of AI. In 2016, he recommended that we stop training radiologists, since AI could already do a better job of spotting patterns of disease. Yet today radiology has risen to the second highest-paid medical speciality in the US, earning almost 50% more than in 2016. Many more people can now have scans thanks to AI. Radiologists handle more patients, liaise with more doctors and decide on more next steps.

The corporations generating positive returns from AI adopt an abundant mindset. The technology is about expanding the pie rather than dividing it differently. Once in this mindset, it is easier to focus on how to get the most from LLMs.

Questions to Ask and Answer

  1. How often do you change topics during a conversation with AI?

  2. Do you ask for a single answer or a choice of options?

  3. Do you always start a new topic in a fresh chat?
