Saturday, February 14, 2026

On Our "Virtual Route 99" (Special Weekend Edition): On Artificial Intelligence, Robots & The Future


On this special edition of our Virtual Route 99 on the Tech Scene (and as we extend best wishes to all on this Valentine's Day 2026), our team is pleased to present thoughts from Dr. Peter Diamandis, as we also note an admonition from Microsoft AI Chief Mustafa Suleyman during a recent conversation with the Editor of the Financial Times:


I just spent the afternoon at Figure headquarters in San Jose with Brett Adcock and David Blundin, and I’m still processing what I saw.

We’re not talking about concept robots. We’re talking about fully autonomous humanoid robots running neural networks end-to-end, doing kitchen work, unloading dishwashers, organizing packages – for hours at a time, with no human intervention.

Today? Figure’s robots are doing 67 consecutive hours of autonomous work. One error in 67 hours. That’s not a demo. That’s a product.

And here’s what most people don’t understand: the gap between “doing one task really well” and “doing every task a human can do” is collapsing at exponential speeds.

Let me explain why…

NOTE: Brett has been a past Faculty Member at my Abundance Summit, where leaders like him share insights years before the mainstream catches on. In-person seats for the 2026 Summit next month are nearly sold out. Learn more and apply.

The Death of C++ and the Rise of the Neural Net

When I first visited Figure, they had several hundred thousand lines of C++ code controlling the robots. Handwritten. Expensive. Brittle.

Every new behavior required engineers to anticipate edge cases, write more code, test it, debug it. It was the software equivalent of teaching a toddler to walk by writing an instruction manual.

In the last year, Figure deleted 109,000 lines of C++ code.

All of it. Gone.

What replaced it? A single neural network that controls the entire robot: hands, arms, torso, legs, feet. Full-body coordination. Real-time planning. Dynamic response to unexpected situations.

This is Helix 2, their latest AI model, and it’s a fundamentally different approach to robotics.

Here’s why this matters: neural nets learn from experience, not instructions.

You don’t code a robot to “grab a cup.” You show it thousands of examples of grasping objects—different shapes, weights, materials—and the neural net extracts the underlying patterns. It learns what “grasping” is at a representational level.

And once it understands grasping? It can generalize to objects it’s never seen before.

Brett put it simply: “If you can teleoperate the robot to do a task, you can train the neural net to learn it.”

That’s the unlock. If the hardware is capable—if the motors, sensors, and joints can physically perform the movement—then the AI can learn it from data.

Compare that to traditional robotics, where you’d need to write thousands of lines of code for every single new task. That approach doesn’t scale. Neural nets do.
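
To make that workflow concrete, here is a minimal behavior-cloning sketch, assuming teleoperated demonstrations are logged as (observation, action) pairs. The small MLP, the dimensions, and the training loop below are illustrative placeholders, not Figure's actual Helix model.

```python
# Minimal behavior-cloning sketch (illustrative only; not Figure's Helix).
# Teleoperated demonstrations are logged as (observation, action) pairs,
# and a single network learns to map observations to motor commands.
import torch
import torch.nn as nn

class Policy(nn.Module):
    def __init__(self, obs_dim: int, act_dim: int):
        super().__init__()
        # A small MLP stands in for the real full-body control model.
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, act_dim),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)

def train_on_demonstrations(policy, demos, epochs=10, lr=1e-3):
    """demos: iterable of (obs, action) tensors recorded during teleoperation."""
    opt = torch.optim.Adam(policy.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for obs, action in demos:
            pred = policy(obs)
            loss = loss_fn(pred, action)   # imitate the teleoperator's command
            opt.zero_grad()
            loss.backward()
            opt.step()
    return policy

# Usage sketch: 64-dim observations, 20 joint commands, fake demo data.
demos = [(torch.randn(32, 64), torch.randn(32, 20)) for _ in range(100)]
policy = train_on_demonstrations(Policy(64, 20), demos)
```

The point is the workflow, not the model: adding a new task means collecting new demonstrations, not writing new task-specific code.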

The implication: Every robot in the fleet learns from every other robot’s experience. When one Figure robot masters folding laundry, every Figure robot on the planet instantly knows how to fold laundry.

Humans don’t work like this. Robots do.

Hardware Built Around the Brain

Most people think you design the robot first, then figure out the AI.

Figure did the opposite.

Brett’s team looked at the neural network architecture they wanted to run and asked: “What hardware do we need to make this work?”

That’s why Figure 3 exists. It’s not an incremental upgrade. It’s a complete redesign built around Helix.

Here’s what changed from Figure 2 to Figure 3:

  • 90% cost reduction in manufacturing

  • ~20 pounds lighter (135 lbs total)

  • Palm cameras for occluded grasping

  • Tactile sensors in every fingertip

  • Passive toe joint for better range of motion

  • Soft-wrapped body to eliminate pinch points

  • Onboard inference compute (no cloud dependency)

And critically: designed for data collection at scale.

Because here’s the thing — if you’re betting on neural nets, you’re betting on data. The more diverse, high-quality data you collect, the better the robot generalizes.

Figure built their robot to be a data-gathering machine. Every sensor, every camera, every interaction feeds back into the training loop.

And they’re not using off-the-shelf parts. They manufacture their own actuators, hands, battery systems, embedded compute—everything.

Why? Because the technology readiness of existing robotics components is too low. If a vendor’s motor fails in the field, you’re stuck waiting for them to fix it. If you built it yourself, you iterate overnight.

This is vertical integration at its finest. And it’s the only way to move fast enough to win.

The Manufacturing Ramp: From Thousands to Millions

Walking through Figure's BotQ manufacturing facility was surreal.

Four production lines. Capacity for 50,000 robots per year when fully ramped.

But Brett’s not stopping there. He’s already planning the next facility. Tens of thousands. Then hundreds of thousands. Then millions.

And here’s the kicker: Figure will use its own robots to build more robots.

They’re putting humanoids on the production lines this year. Robots assembling robots. Robots testing robots. Robots packaging robots.

Why? Because if you’re trying to scale to a billion units, you can’t rely on human labor. You need an exponential manufacturing curve, and the only way to get there is recursive self-improvement.

Think about it: every improvement Figure makes to the robot’s dexterity, speed, and reliability makes it better at building the next generation of robots.

It’s a flywheel. And once it starts spinning, it’s nearly impossible to stop.

Brett estimates they could ship a billion robots today if the AI were fully general-purpose. The demand is there. The capital markets (via leasing models) can finance it. The constraint is solving general robotics.

And that’s exactly what they’re working on.

General Robotics: The Only Milestone That Matters

Here’s the thing about humanoid robots that most people—and most companies—don’t get:

Teleoperation is not impressive. Open-loop behaviors are not impressive. One-minute demos are not impressive.

What’s impressive is closed-loop, autonomous work in unseen environments over long time horizons.

Let me break that down.

Closed-loop means the robot is continuously sensing its environment and adjusting in real-time. It’s not replaying a pre-programmed sequence. It’s thinking.

Autonomous means no human in the loop. No remote operator in Tennessee. The robot is making decisions on its own.

Unseen environments means you can drop the robot into a random Airbnb or factory floor it’s never visited, and it figures out how to navigate and work there.

Long time horizons means hours, days, weeks of continuous operation. Not 30-second clips stitched together in post-production.
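
To make the closed-loop distinction concrete, here is a schematic sketch; the robot, policy, and done functions are hypothetical stand-ins, not Figure's API.

```python
# Schematic contrast between open-loop replay and closed-loop control.
# The robot, policy, and done objects here are hypothetical stand-ins.

def run_open_loop(robot, recorded_actions):
    # Replays a fixed sequence; any surprise (a moved cup, a slip) breaks it.
    for action in recorded_actions:
        robot.execute(action)

def run_closed_loop(robot, policy, done, max_steps=100_000):
    # Senses, decides, and acts every step, so it can correct as it goes.
    for _ in range(max_steps):
        observation = robot.sense()      # cameras, tactile, joint state
        action = policy(observation)     # neural net picks the next move
        robot.execute(action)
        if done(observation):
            break
```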

This is what Brett calls “general robotics,” and it’s the only milestone that matters.

If you can’t do this, you don’t have a product. You have a very expensive remote-controlled toy.

Figure’s current benchmark: four to five hours of continuous neural network operation in logistics, kitchen work, and manufacturing tasks.

Their 2026 goal: Drop a robot into an unseen home and have it do useful work for days with minimal human intervention.

Once they hit that, the game is over. Because if the robot can generalize to any home, it can generalize to any environment. Factories. Warehouses. Hospitals. Senior care facilities. Mining operations. Space stations.

The hard part isn’t building a robot that can do one thing well. The hard part is building a robot that can do everything a human can do.

And Figure is closer than anyone else on the planet.

The Timeline: When Will You Have One?

Everyone wants to know: when can I buy a Figure robot for my home?

Brett's answer: "Not yet. And I'm not going to ship slop."

Here’s his roadmap:

2026: Alpha testing in homes. A small number of robots doing long-horizon work (cleaning, organizing, laundry, dishes) in real households. The goal is to measure human interventions – how often does someone need to step in and help?

Right now, industrial deployments see occasional errors. The target for home deployment is an error rate orders of magnitude lower.

2027-2028: Scaled home pilots. Tens, then hundreds, then thousands of units. Iterative design based on real-world feedback. Safety validation. Privacy validation. Reliability validation.

2028-2029: Mass production and broad availability.

Why so cautious?

Because Brett learned this lesson at Archer (his eVTOL company): you don’t ship safety-critical systems until they’re ready.

A humanoid robot in your home is around your kids, your pets, your elderly parents. If it drops a pot of boiling water, that’s catastrophic. If it steps on your cat, that’s catastrophic. If it gets hacked and streams your private conversations to the internet, that’s catastrophic.

So Figure is taking the time to get it right.

And honestly? I respect the hell out of that.

Because when Figure does ship, they’ll have a decade head start on everyone else in terms of safety track record, reliability data, and customer trust.

That’s a brand moat you can’t buy.

The Business Model: Leasing Humanoids Like Humans

Here’s the beautiful part of Figure’s go-to-market strategy:

They’re leasing robots the same way you lease humans — by the hour.

Think about it. You don’t “buy” an employee. You pay them a salary. You lease their time and capability.

Figure is doing the same thing. You don’t buy a $20,000 robot. You pay ~$300/month to lease one. That’s $10/day. Forty cents an hour.

Compare that to minimum wage ($15-20/hour in many US states). A Figure robot is roughly 50x cheaper and works 24/7 with no breaks, no benefits, no turnover.
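
As a quick back-of-the-envelope check of those numbers (taking the ~$300/month lease and the $15-20/hour wage range as the stated assumptions):

```python
# Back-of-the-envelope check of the lease economics quoted above.
monthly_lease = 300            # dollars per month (stated assumption)
hours_per_month = 24 * 30      # robot works around the clock

robot_hourly = monthly_lease / hours_per_month
print(f"Robot: ${monthly_lease / 30:.2f}/day, ${robot_hourly:.2f}/hour")  # ~$10/day, ~$0.42/hour

for wage in (15, 20):          # human hourly wage range quoted above
    print(f"vs ${wage}/hour human: ~{wage / robot_hourly:.0f}x cheaper")
# Prints roughly 36x at $15/hour and 48x at $20/hour, i.e. on the order of 50x.
```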

And because it’s a lease, Figure retains ownership. They can remotely update the software. They can recall and upgrade units. They can monitor performance and safety in real-time.

It’s the Apple model applied to robotics. And it’s going to print money.

Brett estimates the market for humanoid robots is half of global GDP: roughly $50 trillion annually. Because half of all economic activity is human labor.

And here’s the thing: the demand is already there.

Figure has signed multiple commercial clients. They’re deploying robots into factories, warehouses, and logistics operations this year. These aren’t pilots. These are revenue-generating contracts.

The bottleneck isn’t demand. It’s supply. It’s solving general robotics and scaling manufacturing fast enough to meet the orders.

When that happens? This becomes the largest economy in the world.

What Comes Next: The Age of Abundance

Let me paint the picture of where this is going.

2030: Every household in the developed world has access to a humanoid robot. You lease it for $300/month. It does your laundry, cleans your house, organizes your kitchen, runs errands.

You come home from work and your robot has already meal-prepped dinner based on your biometric data from your wearable. It knows you’re low on magnesium, so it adjusted the recipe.

2035: There are 10 billion humanoid robots on the planet. Five billion in homes. Five billion in commercial and industrial settings.

The cost of goods and services collapses. Why? Because labor is no longer a constraint. Robots mine the materials, manufacture the products, transport them, and deliver them to your door.

You want a custom piece of furniture? A robot designs it, fabricates it, and assembles it in your garage overnight. Cost: materials + energy. Labor: free.

2040: Robots are building robots. Robots are designing robots. Robots are optimizing supply chains, managing logistics, exploring asteroids, constructing orbital habitats.

Humans are freed from drudgery. We do what we’re best at: creating, exploring, connecting, imagining.

This is the age of Abundance.

And it starts in 2026.

Brett Adcock and his team at Figure are building it. Right now. In San Jose.

I’ve seen it. I’ve walked the floors. I’ve watched the robots work.

And I’m telling you: this is real.

The future doesn’t care if you believe in it. It’s coming anyway.

The only question is: are you ready?

To an Abundant future,

Peter
