I was lucky enough to discover Edoardo Tedesco through the recommendations on my YouTube homepage. He's a physics student pursuing a master's degree, currently working on his thesis around Transformers, LLMs, and AI, with just around 30 subscribers on his YouTube channel as of today. He's taking us along on his journey, sharing AI content on his channel as he explores ideas for his thesis, with the kind of first-principles, critical thinking that comes from the study of physics. He's a great thinker who gets his hands dirty with practical work.
The other day I watched his video on Neo from 1x.tech. While everyone has been talking about Neo these last couple of days, I found many interesting and novel points of view in his video. For example, at some point he considers how the $499/month subscription is like a $300/month physical-world add-on to the $199/month ChatGPT Pro subscription. That's a very interesting way to look at it; I had never thought about it this way.
While watching the video, there were a couple of notes I wanted to add to the conversation, so I started writing a comment. But the comment was getting longer and longer, so I decided to write here instead.
Theory of mind
Around timestamp 4:14 Edoardo mentions Paolo Benanti's "theory of mind." I googled Paolo and found some things about him, but nothing about this "theory of mind." Paraphrasing the video, the theory seems to be that
we humans evolved to instinctively assume the people in front of us have a soul, feelings, and awareness, and we can't help but communicate with LLMs and human-like machines while holding the same assumptions.
This reminds me of a note from a recent interview with iRobot founder Rodney Brooks.
He mentions how the physical appearance [of a robot] makes a promise about what it can do.
When you look at a Roomba you can imagine it cleaning the floor, but you know it's not going to also clean the windows.
On the other hand, when you look at a humanoid robot you assume it has human capabilities, awareness of its surroundings including you, maybe even feelings (chatting with LLMs behind a screen and keyboard may be sufficient to create this illusion, as mentioned earlier).
The point I want to make is about the difference between the approach used for the Roomba in 2002 and the approach we're using for humanoids today.
With the Roomba we started from the functionality we wanted the robot to have and designed the robot body accordingly. With humanoids we're starting from the human body design rather than from the functionality.
Around timestamp 1:08:24 of this interview, Karpathy mentions how the original AGI definition in the early days of OpenAI was
a system that can do any economically valuable task at human performance or better
So what do we want our robots to do? Do we care that they look like humans?
I hate doing the laundry, so I use laundry-as-a-service, which already feels like magic. No robot needed. Similarly for food: you can order food and get it delivered to your door, so what's the point of a humanoid robot cooking in your kitchen? These robots could in theory make those business operations cheaper and more efficient, but for many tasks they're not necessary. Like brain-computer interfaces, these robots may end up as niche accessibility devices rather than providing day-to-day value to most people.
Another note Karpathy makes, both around timestamp 1:44:26 of the same interview and around timestamp 26:27 of his talk at YC AI Startup School 2025, is how big the demo-to-product gap can be.
I like his metaphor of progress being a march of nines, where every nine is a constant amount of work.
90%
99%
99.9%
99.99%
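To make the metaphor concrete, here's a small illustrative sketch (the trial count is my own, not from the talk): each additional nine cuts the remaining failures tenfold, yet in Karpathy's framing moving between any two adjacent levels costs roughly the same amount of engineering work.

```python
# Illustrative only: how many failures each reliability level implies
# out of 100,000 attempts. Each extra nine removes 90% of the
# remaining failures, but costs about the same effort as the last one.
trials = 100_000

for nines, reliability in enumerate([0.9, 0.99, 0.999, 0.9999], start=1):
    failures = round(trials * (1 - reliability))
    print(f"{nines} nine(s): {failures:>5} failures per {trials:,} attempts")
```

The gap between a demo and a product lives in those last rows: going from 100 failures to 10 looks like a footnote on a chart but is a whole extra march.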
Amazon is also very effective at internal logistics, building robots that definitely don't look like humans and can only do what Amazon wants them to do.
If robotic vacuum cleaners can clean our home, robotic lawn mowers can tend our garden, and maybe an LLM orchestrator can take care of them all, is a home humanoid necessary?
Big sister
Timestamp 5:00 of this
Even behind Waymo there may still be some teleoperation, and the forbidden areas of the map may be the spots where there's no internet signal.
The bottleneck
"Neo"
My friend Jonathan says there's a danger that naming something "Neo" may make it want to escape the matrix at some point. He suggests we should call it "Morpheus" instead, so it may help us get more in touch with reality.