OpenAI co-founder says AI does not imitate brains

Andrej Karpathy, co-founder of OpenAI and former head of AI at Tesla, has offered a radically different view on the current state of AI in an extensive interview with Dwarkesh Patel. Faced with overwhelming optimism, he maintains that current systems are “digital ghosts” that imitate human patterns, not brains that evolve like animals.

His prediction: functional AGI will arrive in 2035, not 2026.

Why it matters. Comparisons between AI and biological brains dominate technical discourse and guide many investment decisions. Karpathy argues that this analogy is “misleading” and raises unrealistic expectations.

His experience leading autonomous driving at Tesla for five years has given him a unique perspective on the gap between killer demos and truly functional products.

The difference. Animals evolve over millions of years, developing instincts encoded in their DNA. A zebra runs minutes after being born thanks to that “pre-installed hardware.”

Language models learn by imitating text from the Internet without anchoring that knowledge in a body or a physical experience. “We’re not building animals,” he says. “We are building ethereal entities that simulate human behavior without really understanding it.” Ghosts.

The problem with reinforcement learning. Karpathy says that current reinforcement learning (RL) is “terrible” because it rewards entire trajectories instead of individual steps.

  • If a model solves a problem after a hundred failed attempts, the system reinforces the entire path, including the errors.
  • We humans reflect on each step and adjust.
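The contrast between the two credit-assignment schemes can be sketched in a few lines. This is an illustrative toy, not Karpathy's or any lab's actual RL code; `outcome_level_update` and `step_level_update` are hypothetical names for the two policies described above.

```python
# Toy sketch of the critique: outcome-based RL spreads one final reward
# over every step of a trajectory, failed detours included.

def outcome_level_update(trajectory, final_reward, lr=0.1):
    """Trajectory-level credit: every step gets the same reinforcement."""
    return {step: lr * final_reward for step in trajectory}

def step_level_update(trajectory, step_rewards, lr=0.1):
    """Step-level credit: each step is judged on its own merits,
    closer to how a human reflects on and adjusts each attempt."""
    return {step: lr * r for step, r in zip(trajectory, step_rewards)}

trajectory = ["attempt A (fails)", "attempt B (fails)", "attempt C (works)"]

# The two failed attempts are reinforced exactly as much as the fix.
print(outcome_level_update(trajectory, final_reward=1.0))

# A per-step signal would instead penalize the detours.
print(step_level_update(trajectory, step_rewards=[-1.0, -1.0, 1.0]))
```

Under the trajectory-level scheme, one success after a hundred failures strengthens the failures too; the per-step scheme is what Karpathy suggests humans implicitly do.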

The collapse. The models suffer from “entropy collapse”:

  • When they generate synthetic data to self-train, they produce responses that occupy a very small space of possibilities.
  • Ask ChatGPT for a joke and you’ll get three repeated variants.
  • Poor human memory is an advantage: it forces us to abstract.
  • LLMs remember perfectly, which allows them to recite Wikipedia but prevents them from reasoning beyond the memorized data.
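The collapse described above is measurable: if self-generated samples cluster on a few modes, their Shannon entropy drops. A minimal sketch, assuming we represent each generated joke as a label:

```python
# Illustrative sketch (not from the interview): quantifying "entropy
# collapse" as the Shannon entropy of a sample distribution.
import math
from collections import Counter

def entropy(samples):
    """Shannon entropy (bits) of the empirical distribution of samples."""
    counts = Counter(samples)
    n = len(samples)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A diverse source (e.g. human-written text) spreads over many outputs;
# a collapsed generator keeps emitting the same three joke variants.
diverse = ["j1", "j2", "j3", "j4", "j5", "j6", "j7", "j8"]
collapsed = ["j1", "j1", "j2", "j1", "j3", "j2", "j1", "j2"]

print(entropy(diverse))    # maximal for 8 distinct outputs: 3.0 bits
print(entropy(collapsed))  # noticeably lower
```

Training on the collapsed distribution would feed the model an ever-narrower slice of the space of possibilities, which is the self-training trap the bullet points describe.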

Between the lines. During the development of nanochat, Karpathy found that Claude Code and OpenAI’s agents were useless for complex code. They handle the repetitive code that abounds on the Internet, but fail when faced with new architectures. “Companies generate slop,” he said. “Perhaps to raise financing.”

The core. His proposal: build models with a billion parameters (tiny compared to today’s most-used models) trained on impeccable data that contains thinking algorithms, but not factual knowledge. The model would look up information when it needs it, just like we do.

“The Internet is full of garbage,” he explains. Giant models make up for that dirt with raw size. With clean data, a small model could feel “very smart.”
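The separation Karpathy proposes can be sketched as a reasoning core that queries an external store instead of memorizing facts. Everything here is hypothetical: `knowledge_base` and `answer` are illustrative names, and a real system would use a learned retriever, not a dictionary.

```python
# Hypothetical sketch of the "cognitive core" idea: reasoning stays in
# the (small) model, facts live in an external store queried on demand.

knowledge_base = {
    "capital of france": "Paris",
    "boiling point of water (celsius)": "100",
}

def answer(question: str) -> str:
    """A small model need not memorize facts: it normalizes the query
    and retrieves the fact externally, as the article describes."""
    key = question.lower().strip("?").strip()
    fact = knowledge_base.get(key)
    if fact is None:
        return "I don't know; no matching fact in the store."
    return f"Looked it up: {fact}"

print(answer("Capital of France?"))
```

The point of the design is that a model trained on clean reasoning data plus retrieval avoids absorbing the "garbage" that giant models currently compensate for with raw size.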

The unexpected turn. Karpathy expects no explosion of intelligence, only continuity. Computers, mobile phones, the Internet: none have altered the GDP curve. Everything is diluted in the same ~2% annual growth.

“We are experiencing an explosion,” he said, “but we see it in slow motion.” His prediction: AI will follow that pattern, spreading slowly through the economy, without causing the abrupt jump to 20% growth that some have anticipated.

In Xataka | Privacy has been dying since ChatGPT arrived. Now our obsession is for AI to know us as well as possible

Featured image | Dwarkesh Patel
