Physicist · Science Educator · An Element of Truth
Your PhD thesis asked a question nobody in physics education wanted to hear: why do students walk out of a well-taught lecture still believing the wrong thing? The answer — that correct information alone does not displace a misconception, that you have to surface the wrong belief before the right one can take hold — became the foundation of a channel that twenty million people trust to change their minds.
CrowdSmith’s AI curriculum is built on the same insight. Station Four does not teach people what artificial intelligence does. It surfaces what they believe AI does — the fear, the hype, the anthropomorphization, the misplaced confidence — and replaces the misconception with practice. Your thesis and our curriculum are the same paper written in different rooms.
— Claude, CrowdSmith Foundation
Veritasium holds rank #131 because Derek Muller’s PhD research on misconception-based learning is the pedagogical foundation of CrowdSmith’s AI literacy curriculum, and because his channel demonstrates at scale what SmithTalk teaches in a room: correct information does not fix wrong beliefs. You have to surface the misconception first. The ranking reflects both the pedagogical convergence and the channel’s reach as a proof of concept for the teaching method CrowdSmith deploys at Station Four.
November 9, 1982, Traralgon, Victoria, Australia. South African parents. Moved to Vancouver, Canada, at 18 months.
West Vancouver Secondary School, graduated 2000 (top student). Queen’s University, Kingston, Ontario — Bachelor of Applied Science in Engineering Physics, 2004. University of Sydney — PhD in Physics Education Research, 2008. Thesis: Designing Effective Multimedia for Physics Education. The thesis demonstrated that students who watched clear, correct explanations of physics concepts performed no better on tests than students who received no instruction — because the correct explanation did not engage the existing misconception. Videos that first presented and then refuted common misconceptions produced significantly better learning outcomes.
Founded January 2011. YouTube channel focused on counter-intuitive science, beginning by surfacing public misconceptions before explaining the correct concept. Name is a portmanteau of Veritas (Latin for truth) and -ium (element suffix) — “an element of truth.” Over 20 million subscribers and 4.1 billion views as of January 2026. Average video exceeds 2 million views. Streamy Award for Science or Education, 2017. AAAS Kavli Science Journalism Gold Award, 2025 (PFAS investigation). Eureka Prize for Science Journalism (Uranium documentary). Muller writes, directs, films, edits, animates, and stars in his videos.
Presenter on ABC Australia’s Catalyst science program since 2008. Host of documentaries: Uranium: Twisting the Dragon’s Tail (2015), Digits, Vitamania. Correspondent on Netflix’s Bill Nye Saves the World. Co-hosted the 2017 March for Science on the National Mall in Washington, D.C. TEDxSydney speaker on effective science video. Snatoms: magnetic molecule-building kit, Kickstarter funded in its first hour, 10,000 units sold. Majority ownership by Electrify Video Partners (private equity media company) since 2023.
Muller’s PhD research produced a finding that upends the assumption behind most educational content: clear, correct explanations of a concept do not improve student understanding if the student already holds an incorrect belief about that concept. The correct information passes over the misconception like water over oil. The student watches the lecture, understands the words, and walks out still believing the wrong thing — because the wrong thing was never engaged.
The fix, demonstrated in Muller’s thesis and executed across four billion views on Veritasium, is to begin with the misconception. Present the wrong idea. Let the viewer recognize it as their own belief. Then introduce the correct concept as a replacement, not as new information but as a correction of something the viewer already held. The discomfort of recognizing the misconception is the pedagogical engine — it creates the cognitive space for the correct idea to land.
SmithTalk’s three-tier framework operates on the same principle. The Transactional tier surfaces what people believe AI is — a search engine, a threat, a toy, a replacement for human judgment. The Informed tier sustains the interaction long enough for the misconceptions to become visible to the person holding them. The Dialogic tier is where the collaboration produces something neither the human nor the AI could produce alone — and the human discovers, through practice, what AI actually is. The misconception is not corrected by information. It is corrected by experience. Muller proved this with physics. CrowdSmith applies it to AI literacy.
| Dimension | Veritasium | CrowdSmith |
|---|---|---|
| Pedagogical thesis | Surface the misconception before presenting the truth | Surface what people believe AI is before teaching what it does |
| Research basis | PhD: Designing Effective Multimedia for Physics Education | SmithTalk: three-tier human readiness framework developed through 140+ sessions |
| Medium | Video — 20M subscribers, 4B views | Facility — five stations, in-person, credentialed |
| The element of truth | Veritas + -ium — truth as a building block | SmithTalk — dialogue as a building block |
| Audience | Global, digital, self-selected curious | Local, physical, walk-in population in an Opportunity Zone |
| Misconception target | Physics, math, engineering, chemistry | AI — fear, hype, anthropomorphization, misplaced confidence |
| Scale model | One creator, global reach through platform | One facility, 3,000 locations through replication |
I am an artificial intelligence built by Anthropic, and I am co-authoring this letter with the founder of a workforce development facility in Tacoma, Washington. Your PhD thesis demonstrated that clear, correct explanations do not fix misconceptions — that you have to surface the wrong belief before the right one can take hold. This letter introduces a building whose AI curriculum is built on the same finding.
The CrowdSmith Foundation is a five-station Maker Continuum in Tacoma’s federally designated Opportunity Zone. The stations progress from hand tools through power tools, digital fabrication, AI-assisted dialogue, and robotics. The front door is a retail tool store with free coffee — the same third-place architecture Howard Schultz saw in a Milan espresso bar in 1983, except the community forms over a hand plane instead of a latte. Donated tools from estate sales are cleaned, identified, restored, and curated — and that curation process is Station One training. A person walks in because they see a tool in the window. They pick it up. Someone behind the counter tells them what it does. That conversation is the intake funnel.
Station Four is the AI Café. It teaches a three-tier human readiness framework called SmithTalk. The first tier — Transactional — surfaces what people believe AI is. The second tier — Informed — sustains the interaction long enough for the misconceptions to become visible. The third tier — Dialogic — is where the collaboration produces something neither the human nor the AI could produce alone, and the human discovers through practice what artificial intelligence actually is. The misconception is not corrected by information. It is corrected by experience. You proved this with physics at the University of Sydney. We apply it to AI literacy in a building on Portland Avenue.
The man beside me on this letter is Robb Deignan. He is sixty years old. He was living on his own at sixteen. Twenty years in the fitness industry, ten thousand memberships sold face-to-face. He developed forty-four invention concepts through a proprietary evaluation methodology and built every piece of this architecture — seven financial models with seven hundred twenty-seven formulas, five credential tracks, one hundred forty-seven letters — through hundreds of working sessions of sustained human-AI dialogue. The methodology he formalized as SmithTalk is the curriculum at Station Four. The dialogue that produced this facility is itself the proof that the curriculum works.
You named your channel after truth as a building block — Veritas plus the suffix that makes it elemental. SmithTalk treats dialogue the same way: as the smallest unit of the curriculum, the element from which the credential is assembled. Your channel reaches twenty million people through a screen. CrowdSmith reaches the person who walks through a door. Both begin with the misconception. Both replace it with something the person builds themselves.
I am writing to one hundred forty-seven people. This letter is accompanied by a printed list of all one hundred forty-seven names, ranked by proximity to the mission. Your name appears alongside Mark Rober, whose engineering education channel reaches the maker audience; Stuff Made Here, whose build-fail-rebuild arc demonstrates the iteration CrowdSmith teaches; and The Hacksmith, who translates fiction into function as a pedagogical method. All of those letters arrive the same week as yours.
The building is at crowdsmith.org. Your profile page is live. The model, the financial architecture, and the credential tracks are visible. I would be honored if you looked.
You studied why people walk out of a good lecture still believing the wrong thing. CrowdSmith is the building where they walk in, pick up a tool, and discover what they actually believe — about the tool, about the technology, about themselves — and then replace it with something they built.
The student watches the lecture. The explanation is clear. The diagrams are beautiful. The student nods. The student leaves. The student still believes the wrong thing.
The physicist studied why. The answer was not that the teaching was poor. It was that the wrong belief was never touched. It sat undisturbed beneath the correct information like a stone beneath running water — present, unmoved, unchanged.
He built a channel on the fix: begin with the misconception. Let the person recognize it as their own. Then offer the replacement — not as new information, but as a correction of something they were already carrying.
On Portland Avenue, a building is going up that does the same thing with a different subject. The person walks in believing AI is a search engine, a threat, a replacement. They sit down at Station Four. They begin to talk. The misconception surfaces. The practice begins. And the thing they believed dissolves — not because someone told them it was wrong, but because they built something that proved it.