Our Robot Future is Closer Than We Think

Will a billion agents or a billion robots have a bigger impact on our lives?

What will be the most visible impact of AI? This question came up at lunch last week. We were debating ten-year spans, when people tend to underestimate change, rather than two-year spans, when we expect too much.

The most common answer was the impact on employment. People in their 20s feel the job market becoming more difficult. AI plays a role, even though I believe cyclical economic factors are having the greatest effect. Perception becomes reality, however, and firms may pull back on hiring because they perceive competitors doing the same. The result is a self-fulfilling prophecy.

The question might be rephrased: if we get a billion AI agents and a billion robots, which will have more impact? The robots will be more visible. Will they also be more impactful?

Remote Working and De-urbanisation

Singapore-based Manus believes that AI gives everyone the opportunity to access cloud computing. It is no longer a privilege reserved for engineers and those with big budgets. Last week Manus launched Wide Research, including an orchestration layer that ties together chains of general-purpose agents. The intention is to let everyday users automate step-by-step processes.
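To make the chaining idea concrete, here is a minimal sketch of the pattern: an orchestrator passes each step's output into the next agent call. Manus has not published the internals of Wide Research, so `call_agent`, `run_chain` and the example steps below are hypothetical stand-ins, not its actual API.

```python
# Hypothetical sketch of chaining general-purpose agents. `call_agent`
# stands in for whatever model or agent API you use; it is not the
# Wide Research API, which has not been published.

def call_agent(instruction: str, context: str) -> str:
    # Placeholder: a real system would call an LLM or agent service here.
    return f"[result of {instruction!r} given {len(context)} chars of context]"

def run_chain(task: str, steps: list[str]) -> str:
    """Orchestration layer: feed each step the output of the last."""
    context = task
    for step in steps:
        context = call_agent(step, context)
    return context

# A step-by-step process an everyday user might automate:
report = run_chain(
    task="Compare five robot vacuum cleaners",
    steps=[
        "List five widely reviewed robot vacuums.",
        "Summarise the strengths and weaknesses of each.",
        "Write a one-page comparison with a recommendation.",
    ],
)
print(report)
```

The point of the pattern is that the individual agents stay general-purpose; it is the chain, not the agents, that encodes the user's process.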

Chaining is a way to get to a billion agents before we reach an equivalent number of robots. It will change our working lives. Yet is it a sea change, or part of a long process of adoption that kicked off with the Industrial Revolution?

I’d argue that working from home is a more profound change. For centuries people travelled for paid work. They congregated in cities, the most effective multipliers of human endeavour, thanks to the network effects of living cheek by jowl. That changes with remote work.

Consider the consequences of reversing urbanisation. Science-fiction depictions of future cities are typified by endless skyscrapers, multiple modes of transport and high-density living. Yet if agents make it possible to do most tasks from anywhere, the downsides of city living may produce a very different future. The job market is where the impact starts, not where it finishes.

Five Levels of Robot Autonomy

A decade or so ago I went to a charity awards ceremony. Halfway through proceedings an immense robot entered the room. People scattered. One by one they crept back, daring the fierce blue gaze to turn on them.

Our partners at Scan demonstrated a robot at this year’s London Tech Week. It did little more than its giant predecessor, but people wanted to engage and take videos. Robots are still a novelty that captivates a room.

Fast forward another decade and robots may be everywhere. They’re chopping food in kitchens and doing the laundry. They sweep the yard and clean the bedroom. People who still attend an office are greeted by robot receptionists, watched over by security robots and eat food prepared in the robot canteen. We don’t even notice when a robot enters or leaves the room.

Last week Dylan Patel and SemiAnalysis published the five levels of robot autonomy. The framework mirrors the five stages of software agents that OpenAI champions, which are expected to culminate in artificial general intelligence. Most use cases for robots today are industrial, but by the final level we have robots capable of tasks requiring fine motor skills, such as plumbing and electrical work.

Improvements in industrial robots are not visible to most of us. We might notice goods getting cheaper, as they have for decades, but everyday encounters with robots will have to wait until the last levels of development.

Level one robots automate factories today. They do not require AI. They might be considered the pinnacle of the industrial engineering age, rather than the birth of autonomous robots.

Level two allows for pick-and-place workflows. Robotic arms sort components and correct themselves when they make mistakes. A small-scale example was on display at the NVIDIA-HP stand at London Tech Week.
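As a rough illustration of what "correct themselves when they make mistakes" means, here is a minimal sketch of a self-correcting pick-and-place loop, assuming a sense-act-verify-retry cycle. The `detect_part`, `pick`, `verify_grip`, `release` and `place` functions are hypothetical stand-ins for a vendor SDK, simulated here so the example runs.

```python
import random

# Hypothetical stand-ins for a robot vendor's SDK. In a real cell these
# would drive a camera, an arm and a gripper; here they just simulate.
def detect_part(part_id: str):
    return (0.42, 0.17, 0.05)      # pretend the camera found a pose (x, y, z)

def pick(pose) -> None:
    pass                           # move the arm to `pose` and close the gripper

def verify_grip(part_id: str) -> bool:
    return random.random() > 0.2   # gripper/force sensor check, ~80% success

def release() -> None:
    pass                           # open the gripper so we can retry

def place(part_id: str, bin_name: str) -> None:
    print(f"placed {part_id} in {bin_name}")

def pick_and_place(part_id: str, max_attempts: int = 3) -> bool:
    """Sense -> act -> verify -> retry: the self-correction loop."""
    for _ in range(max_attempts):
        pose = detect_part(part_id)
        if pose is None:
            continue               # part not seen; re-scan
        pick(pose)
        if verify_grip(part_id):
            place(part_id, "bin-A")
            return True
        release()                  # bad grasp; drop it and try again
    return False                   # give up and flag for a human

pick_and_place("component-7")
```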

Level three allows for autonomous mobility. Robots read their environment and navigate obstacles and uneven terrain. They escape the factory to populate construction sites and oil and gas refineries, and to patrol industrial plants. At MSBC we are already exploring the use of robots for site security and for escorting visitors in sensitive areas.
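Navigation at this level rests on mapping and path planning. As a toy illustration of routing around obstacles, here is a breadth-first search over a grid map; real robots fuse lidar and vision into a live map and re-plan continuously, so treat this as the idea rather than the practice.

```python
from collections import deque

# Toy obstacle navigation: breadth-first search over a grid map,
# where '#' marks an obstacle, 'S' the start and 'G' the goal.
GRID = [
    "S..#.",
    ".#.#.",
    ".#...",
    "...#G",
]

def find(ch):
    for r, row in enumerate(GRID):
        if ch in row:
            return (r, row.index(ch))

def shortest_path(start, goal):
    queue = deque([(start, [start])])
    seen = {start}
    while queue:
        (r, c), path = queue.popleft()
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < len(GRID) and 0 <= nc < len(GRID[0])
                    and GRID[nr][nc] != "#" and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(((nr, nc), path + [(nr, nc)]))
    return None  # no route found; a real robot would re-plan or wait

print(shortest_path(find("S"), find("G")))
```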

Level four incorporates low-skill manipulation. Robots begin to appear in the home, performing simple tasks such as folding clothes. Line-cooking, logistics and machining workflows are automated.

At level five, the final stage, robotic hands are capable of applying varying degrees of force. They unscrew jars, lift delicate lids and perform most domestic chores. Combined with the vision and mobility of the earlier levels, robots are now everywhere.

There are doubts about whether data and vision alone are enough to get us to this level. Tactile sensing is also required. Consider how easy it is for a person to rummage in a pocket for a phone; that task remains well beyond today’s robots. Only the exponential development of AI will get us there.

Nothing to See Here

I believe our robot future will prove the most impactful use of AI. Optimists expect it may be just a decade away. We will know it is here when we no longer notice. My kids cannot imagine a life without mobile phones, let alone computers. Their children will stare in similar disbelief that parents grew up without robots.   

Questions to Ask and Answer

  1. What pick and place tasks do I have in my business?

  2. Which simple tasks require vision and navigation?

  3. To what degree is touch essential in my workflows?

Find out more. Hit reply to this newsletter and ask about Physical AI.
