Raised by Neutrons
What Los Alamos taught Alexandr Wang — and what no AI can pass on
1. Wang's Starting Point
Alexandr Wang is the guy who, at 25, had already founded Scale AI, signed contracts with the Pentagon and major tech companies, and recently gave in to the charm of Mark Zuckerberg, who bought Scale for billions on the condition that Wang take charge of Meta's AI.
Wang isn't another genius emerging from the sheer numbers of the Chinese education system. He's entirely American. And he grew up in Los Alamos.
That tiny town in the middle of nowhere that another American with a weighty name once chose: Oppenheimer, the man who picked it to build the first atomic bomb in history.
Los Alamos is a paradox at altitude.
Isolated, yet with the highest brain density per square meter.
A community where people talk more about isotopes than TV shows.
A place where even kids are immersed in a society made entirely of scientists.
And when do scientist parents talk about sports or pop culture? Probably never.
The Italian physicist Enrico Fermi, in the middle of a casual conversation with colleagues that had nothing to do with nuclear physics, dropped the question: “Where is everybody?”
And that question gave birth to the Fermi Paradox about extraterrestrial life.
Alexandr Wang grew up in a place like that.
Surrounded by people wired to ask enormous questions.
In Los Alamos, science wasn’t a personal trait. It was the air.
The image of a prodigy raised among the scientists who inherited Los Alamos led me to reflect on the state of education today, and on what truly sparks curiosity, now that a prompt is all it takes to get perfect answers.
Photo: https://www.nps.gov/mapr/about-losalamos.htm
2. Learning in the Age of Instant Answers
Today, all it takes is a chat window.
Type: “tell me about relativity like I’m 3 years old” and you get an answer.
Well explained, reasonably concise, maybe even with a smart metaphor.
It feels like education.
But it's just exposure to content.
No effort. No memory.
The issue isn't AI.
The point is: we risk confusing access to information with knowledge.
We store in short-term memory concepts that should be carved into our minds.
We risk outsourcing curiosity too.
And once you stop asking questions, you stop learning.
Learning isn't about getting answers.
It's about sitting inside the confusion long enough to want to cross it.
In Los Alamos, no one sat you down to explain everything.
You'd sit at the dinner table and hear about plasma physics, thermonuclear reactions, orbital trajectories.
You didn’t understand. But you stayed.
Because that confusion pulled you in. And shaped you.
Wang learned like that.
By emulating how scientists thought and acted, as if it were the most natural thing.
Living side by side with people who asked questions too big to answer.
The AI we use today isn’t just a powerful calculator.
We’re not just outsourcing the hard work of computing.
We’re outsourcing the entire search.
It’s like living inside a cognitive drive-thru: tell me now, summarize it, write it for me.
And the human? The human forgets it all the moment a new urgency, a new prompt, flashes another hit of instant gratification.
3. Immersion vs. Context
Put on an XR headset and you can learn to repair a jet engine, watch an atomic explosion from the inside, or walk on the Moon.
Immersion works. It’s great for extending our experience of reality, or for temporary virtual adventures.
We can manipulate microscopic worlds, explore chemistry, or travel inside a cell.
It helps lock knowledge into the mind and makes it stick through simulated experience.
But it’s not enough.
Because that kind of immersion is partial. It doesn’t (yet) let us cross certain boundaries.
Learning a behavior is not the same as learning a concept.
Behavior can’t be downloaded. It has to be absorbed.
And the only way to absorb it is to live inside the context that expresses it.
You can simulate a nuclear plant.
But you can’t simulate what it feels like to live in a community where the daily stakes are solving problems no one has solved before.
You can learn how to measure plasma.
But you won’t learn the seriousness, responsibility, and slowness those decisions demand.
That’s what Alexandr Wang got.
Not just notions, but exposure.
Not just lectures, but atmosphere.
Thanks to a tip from a curious journalist friend, I stumbled upon Literary Theory for Robots, by Dennis Yi Tenen.
I dug in and found this gem:
“Even the most advanced system—even the smartest autocorrect—is the result of centuries of culture, compromise, and layered human error.”
The machine can finish your sentence.
But it can’t live an experience.
It can’t learn by osmosis.
And it can’t teach that way.
Not yet, at least.
4. Self-Generated Curiosity
For years I organized the Frontiers of Interaction conference.
One year, we managed to bring Richard Saul Wurman — the man who invented the best conference in the world: TED.
He spoke right before lunch, to an audience of 600, all mesmerized by his storytelling.
At the end of his talk, I walked up to the stage to announce lunch. The catering was excellent.
But Wurman, instead of leaving, said:
“I’m not hungry yet. I’ll stay here, for anyone who still wants to hear stories.”
He stepped down from the stage and stood right below it. The mic was still on.
And he started again:
“I do not believe in education.”
He paused.
“I do not believe in it at all.”
Then added:
“I believe in self-generated curiosity.”
In that moment, so many neural connections fired in my brain I swear I could hear the noise.
So here’s the question:
Where does that kind of curiosity come from? What sparks it?
Part of it may be innate.
Part of it might come from need.
But nothing is more natural than having examples—and a context—that nourish curiosity rather than kill it.
Wang is probably a genius, born of genius, raised among geniuses.
That helped him. But it could also have crushed him.
Like a kid born in Agrigento who, for a moment, thinks: “maybe I’ll study architecture.”
Then looks around, and between Greek temples, baroque palaces, and Roman ruins, stops and says: “well… maybe it’s better to stick to computer science.” :)
So maybe the real question is this:
What are the kinds of contexts that don’t intimidate but ignite?
The ones where curiosity becomes compass, not distraction.
The ones that don’t explain everything — but make you want to understand.
The opposite of Los Alamos is indoctrination.
And the best example of positive indoctrination comes from a Star Trek film: the Vulcan education system, imposed on children like Spock.
Spock, a half-human kid, is shown taking exams on Vulcan. An entire society built on pure rationality, fully devoted to science, with one purpose: to generate new knowledge.
We can’t all be lucky like Wang.
And we shouldn’t aim for a Vulcan-style system like the one Spock endured.
We also can’t outsource our learning to a drive-thru AI machine that spits out answers without even making us step out of the car.
But we can train our critical thinking.
We can build AI agents that act like fitness coaches for the mind.
We can influence our internal and social value systems.
Because no AI will ever teach us to be curious.
And without curiosity, everything else is just computation.
Now imagine a machine that gives us the questions we need to make progress. And the doubts we need to get going.
We would never be stuck.
We'd always be researching, building, testing, improving...
We could keep weaving new knowledge, and putting it to use.
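To make that idea a little more concrete, here is a minimal sketch of what a question-first coach could look like, assuming the openai Python package and an API key in the environment; the COACH_PROMPT wording, the model name, and the coach() helper are my own illustrative assumptions, not an existing product.

```python
# A question-first "curiosity coach": instead of answering, the agent
# replies with the questions, doubts, and experiments that would let you
# work the topic out yourself. Assumes the `openai` Python package and an
# OPENAI_API_KEY in the environment; model choice and prompt wording are
# illustrative, not prescriptive.
from openai import OpenAI

COACH_PROMPT = (
    "You are a curiosity coach, not an answer machine. Never state the "
    "solution. For every topic the user raises, reply only with: "
    "1) three questions they should be asking themselves, "
    "2) one doubt worth sitting with, and "
    "3) one small experiment they could run to find out on their own."
)

def coach(topic: str, model: str = "gpt-4o-mini") -> str:
    """Return questions and doubts about `topic`, never a direct answer."""
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": COACH_PROMPT},
            {"role": "user", "content": topic},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    # The drive-thru prompt from earlier, fed to a machine that refuses
    # to hand over the meal.
    print(coach("Tell me about relativity like I'm 3 years old"))
```

The whole design lives in the system prompt: the agent is told to hand back questions, doubts, and experiments instead of answers, so the search, and the effort, stays ours.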