
Fully autonomous robots are much closer than you think – Sergey Levine

The conversation about robots has shifted dramatically. For decades, we imagined them as specialized tools—machines built for single tasks like welding or assembly. But Sergey Levine sees something different emerging: a general purpose robot that could eventually do what any human does in a home.

Levine is a co-founder of Physical Intelligence and a professor at UC Berkeley, widely considered one of the world's leading researchers in robotics, reinforcement learning, and AI. His company aims to build robotic foundation models—general purpose systems that could control any robot to perform any task. This isn't just incremental improvement. It's a fundamentally different vision for what robots could become.


The goal isn't just to fold laundry or clean a kitchen. The actual target is far more ambitious: a robot that understands complex commands like "I wake up at 7:00 a.m., I need dinner made at 6 p.m., make sure my laundry is ready by Saturday, and check in with me every Monday to see what I want done next." That prompt isn't a single action. It's months of continuous execution. The robot needs to understand the physical world, possess common sense, pull in additional information when needed, handle edge cases intelligently, improve continuously, and fix its own mistakes.

That's the real challenge. And according to Levine, we're much closer than most people think.

Where robotics actually stands today

The basic building blocks are already in place. Robots can fold laundry, clean kitchens, make coffee, and handle other dexterous tasks that require sophisticated grip control. These capabilities sound simple, but they're remarkable: folding a box is genuinely difficult even for a human using both hands, yet the robots manage it with simple grippers.

But here's what matters: these aren't the end goal. They're confirmation that the basics work. The real test comes next—when we start giving robots increasingly complex commands and broader responsibilities.

The trajectory follows something we've already seen with AI coding assistants. Initially, they could only complete small pieces of code, perhaps finishing half a function correctly. As they improved, users became comfortable giving them more agency. Eventually, they'll run entire workflows.

Levine estimates we'll see single-digit years before robots reach this level of capability. His median estimate is around five years.

Why the timeline might be shorter than expected

There's something unusual about this prediction compared to other AI breakthroughs. With large language models, we saw a remarkable system emerge—something that feels like it passes the Turing test, seemingly capable of doing knowledge work across the economy. Yet the revenue numbers tell a more modest story: around $20 to $30 billion per year, far less than the $30 to $40 trillion in knowledge work the economy actually does.

Robotics might follow a different path. The key difference is feedback loops.

When an AI assistant answers a question wrong, the user often doesn't even know it happened. There's no obvious signal that something went wrong. But when a robot folds laundry and makes a mistake, it's immediately apparent. The person can reflect on what happened, figure out why it failed, and do better next time.

This creates natural supervision signals. People interacting with robots have strong incentives to assist in ways that make the system succeed. The physical world provides constant, clear feedback about whether tasks are being completed correctly. That makes continuous improvement far more practical than in pure software systems.

The flywheel starts when robots reach basic competence—able to do something people actually want done. Once deployed in the real world, they'll collect experience and leverage that experience to get better. The timeline isn't about some dramatic moment where suddenly all robots are everywhere. It's about when the flywheel begins spinning.

Critics might note that this vision assumes we solve fundamental challenges around reliability, safety, and common sense reasoning—problems that have stumped AI researchers for years. The five-year timeline is optimistic, and integrating these capabilities will demand as much hard research as any prior breakthrough.

What to watch

The interesting question isn't whether robots will eventually become capable. It's about scope: which tasks we'll give them permission to do as they improve. Initially, the scope might be narrow—a particular task like making coffee or folding laundry. But as their capability grows and they develop more common sense, we'll hand them greater responsibilities.

The economic impact in five years could be profound if this works. Unlike LLMs, which have seen massive capability improvements but modest revenue growth, physical robots deployed in the real world might finally close that gap between capability and actual value delivered.

"What you really want from a robot is not to tell it, 'Hey, please fold my t-shirt.' What you want from a robot is to tell it, 'Hey, robot, you're now doing all sorts of home tasks for me.'"

This vision represents the most ambitious attempt to create general purpose machines that can operate in the messy, unpredictable physical world rather than just digital spaces. The five-year timeline might sound aggressive, but the reasoning behind it—specifically how continuous feedback enables faster improvement than pure software—suggests it's not unreasonable.

The biggest vulnerability is execution risk: synthesizing these capabilities requires solving hard engineering problems that may take longer than expected. But if Levine and his team are right about the flywheel effect, five years might actually be conservative.

What happens when robots can run a house as well as a human housekeeper? That's what to watch for in the coming years—and whether they truly unlock the productivity gains that have eluded other AI breakthroughs.


Sources

Fully autonomous robots are much closer than you think – Sergey Levine

by Dwarkesh Patel (video)

Dwarkesh Patel: Today I'm chatting with Sergey Levine, who is a co-founder of Physical Intelligence, which is a robotics foundation model company, and also a professor at UC Berkeley, and just generally one of the world's leading researchers in robotics, RL, and AI. Sergey, thank you for coming on the podcast.

Sergey Levine: Thank you, and thank you for the kind introduction.

Dwarkesh Patel: Let's talk about robotics. Before I pepper you with questions, I'm wondering if you can give the audience a summary of where Physical Intelligence is at right now. You guys started a year ago. What does the progress look like? What are you guys working on?

Sergey Levine: Yeah. So, Physical Intelligence aims to build robotic foundation models. That basically means general purpose models that could, in principle, control any robot to perform any task. We care about this because we see it as a very fundamental aspect of the AI problem. The robot essentially encompasses all AI technology. If you can get a robot that's truly general, then you can do, hopefully, a large chunk of what people can do. Where we're at right now is, I think, we've gotten to the point where we've built out a lot of the basics. And those basics actually are pretty cool. They work pretty well. We can get a robot that will fold laundry and that will go into a new home and try to clean up the kitchen. But in my mind, what we're doing at Physical Intelligence right now is really the very early beginning. It's just putting in place the basic building blocks on top of which we can then tackle all these really tough problems.

Dwarkesh Patel: And what's the year-by-year vision? One year in now, I got a chance to watch some of the robots, and they can do pretty dexterous tasks, like folding a box using grippers, and it's pretty hard to fold a box even with my hands. If you had to go year by year until we get to the full robotics explosion, what is happening every single year? What is the thing that needs to be unlocked, etc.?

Sergey Levine: So there are a few things that we need to get right. Dexterity obviously is one of them, and in the ...