
How to Solve a Maze
I am not a mathematician. My last formal education in the subject was my first year at university. Since then, I have worked extensively with statistics in both capital markets and fintech. I consider understanding probability to be a superpower that is too often missing in those who lead us.
But probability is not the answer to everything. This is one of the reasons why mathematicians are frustrated with AI proofs. When an answer is delivered by a brute-force, probabilistic solution, it does not create reusable learning in a form humans can apply.
Consider a human and an AI model solving a complicated maze. The human takes their time, mapping the journey. By the time they finish, they have a practical rule for solving mazes: always keep the outside wall on your right-hand side.
In contrast, the model is trained on millions of similar mazes. It does not use geometry or mapmaking, but reaches the exit in seconds. When asked how it did it, the reply is: "I turned 4,000 times based on the texture of the walls." This does not help you solve the next maze.
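The human's rule is the classic right-hand wall follower. As a minimal sketch, assuming a small grid maze where `#` is a wall and `S` and `E` mark entry and exit (the `MAZE` layout, the `solve` function, and those markers are all hypothetical names for illustration), it looks like this:

```python
# Hypothetical maze: '#' is wall, 'S' is the entrance, 'E' is the exit.
# The right-hand rule only guarantees an exit when the maze is simply
# connected, i.e. all its walls form one connected piece.
MAZE = [
    "#####",
    "S   #",
    "### #",
    "#  E#",
    "#####",
]

def solve(maze):
    rows = [list(r) for r in maze]
    for y, row in enumerate(rows):             # locate entry and exit
        for x, c in enumerate(row):
            if c == "S":
                start = (x, y)
            elif c == "E":
                goal = (x, y)
    dirs = [(1, 0), (0, 1), (-1, 0), (0, -1)]  # right, down, left, up (clockwise)
    (x, y), d = start, 0                       # begin at S, facing right
    path = [(x, y)]
    while (x, y) != goal:
        # Keep the right hand on the wall: prefer a right turn, then
        # straight ahead, then a left turn, then a full turn back.
        for turn in (1, 0, -1, 2):
            nd = (d + turn) % 4
            nx, ny = x + dirs[nd][0], y + dirs[nd][1]
            if rows[ny][nx] != "#":            # open passage in that direction
                d, x, y = nd, nx, ny
                break
        path.append((x, y))
    return path
```

Calling `solve(MAZE)` returns the sequence of cells from `S` to `E`. The point of the sketch is the contrast in the essay: the four-line turning preference is a reusable method a human can carry to the next maze, unlike the model's 4,000 opaque turns.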
I am thinking about this in the context of ownership of AI solutions and how we craft contracts.
The Challenge of Intellectual Property
One of the thorniest parts of contractual negotiations is ownership of intellectual property. When you buy a software product, you are buying the means to solve a problem. The data and the outcome are yours, but the means of getting from one to the other is owned by the supplier.
When you have a custom solution built by an outside consultant, the situation is different. The process is designed for your business and may contain trade secrets. You want to own the IP once you have paid the consultant.
Now consider the same process delivered by an AI model. The outcome is successful, but no one is sure how this was delivered. If you own the IP, what are you getting?
The core position in the AI industry is that AI-generated code is cheap to produce. Much of it is derived from patterns in the training data, and the same problem solved twice will produce different results. The idea of owning a unique, defensible codebase is weakening.
Enforcing ownership becomes impractical when outcomes are easy to replicate and the advantage of a solution is temporary. As with the mathematical proof, an AI solution provides an answer, but not a method you can reuse. Your commercial advantage lies in data, distribution and speed of execution. The solution has value, but ownership of it as IP counts for less than it once did.
The mistake is assuming that IP has gone away. Value has instead transferred from reusable logic to accumulated context. This includes data, workflow design and human judgement layered into the system. A contract must reflect where the value sits.
In practice, IP breaks down into access to data, visibility into how decisions are made, and the ability to adapt the system without starting again.
If you are building or buying an AI solution there are several issues to consider. A working demo proves little about the long-term value. If a system cannot be extended by the business buying it, then owning it as IP has limited value. As the buyer you are dependent on whoever built the solution.
To test for transferability, ask how the system adapts to a new but related problem. If the answer is unclear, you are renting an outcome rather than buying a capability.
The purpose of a contract is to assign control. In an AI system, this lies in data, prompts and configurations. Spelling these out may be more valuable than general IP clauses. A solution is only extendable if the contract says so.
A system that you cannot control behaves like a service, even if you own it on paper. This does not mean that AI removes the issue of IP. Rather it exposes where the real work is done and the value created.
The value of a proof lies in what it teaches you. The same test applies to AI systems. If nothing carries forward, ownership is largely beside the point.
Questions to Ask and Answer
1. Do I know how my AI systems delivered an outcome?
2. If I re-run the system, do I get the same result?
3. Are my data, prompts and rules protected by my contracts?
