Steven Sloman on Causal Reasoning, Community Knowledge, and Why We Think We Understand More Than We Do
- Moksh Vashisht

Steven Sloman’s writing and research revolve around a strikingly simple idea: we don’t think alone. And yet, he emphasizes, that realization came surprisingly late in his career. “My early mentors were all cognitive psychologists,” he says. “They shaped my interest in how people think”—not how communities think. As an undergraduate, he worked in Endel Tulving’s famed memory lab. In graduate school, he trained with Gordon Bower, David Rumelhart, Lance Rips, and Amos Tversky—giants of human reasoning research.
But it was only after years of studying individual cognition that Sloman confronted a sobering truth: “There were real limitations on people’s ability to understand how things work—objects, the social world, the political world.” That realization led him to the central insight behind The Knowledge Illusion: most of what we think we know resides not in our heads but in our communities.
The communal mind, Sloman argues, is the engine behind human achievement. “Humans are incredible in our ability to innovate,” he says. “We’ve gone to the moon and built iPhones… all of cultural success is a product of the community of knowledge.” Specialization allows each of us to contribute a small piece to a much larger puzzle.
But that same communal reliance also leads us astray. When a community forms a theory with no factual basis—whether a conspiracy movement, a misguided belief system, or even a football team with bad strategy—its members can become trapped inside a reinforcing bubble. “Whenever a group’s thinking is wrongheaded… the community of knowledge is leading them astray.”
A core problem, Sloman says, is that people rarely pause to check whether they actually understand something. “Humans often decide and then think,” he explains. “We make the decision and then rationalize it.” The antidote is to pause and verify: slow down and examine whether your explanation holds up. But this works only when people have the time, the resources, and enough humility to question their own certainty.
Causal reasoning—Sloman’s specialty—is particularly vulnerable to illusion. The biggest failure mode? “People fill in details they can’t remember,” he says. When a toaster breaks, for example, most of us don’t really know what’s going on. Good reasoning requires recognizing those gaps. “To find out if you have adequate information, it really helps to try to explain how things work,” Sloman says. Attempting a genuine explanation reveals both what you know and what you don’t.
Another bias is our obsession with single-cause theories. “We love single-factor explanations,” he says. Complex events—elections, economic shifts, political realignments—almost always have multiple causes, but people gravitate to one. “I can’t tell you how many people have told me their theory of why Trump won the last election,” he notes. Each person has one answer. “I’d put all my life savings down on the hypothesis that there are many reasons he won.”
So how do we teach deeper thinking? Sloman is surprisingly skeptical of the classroom as the main solution. “Classrooms are great places to learn theories and frameworks,” he says. “But when it comes to getting people to make better decisions at the moment of decision… the best thing is an environment that provides the right information and nudges at that moment.”
He argues that the real leverage lies in reshaping everyday discourse. Conversations should normalize questions like: “What else might be going on?” or “What would this cause have to interact with for the result to happen?” Creating norms of collaborative reasoning—what Sloman calls “changing the nature of discourse”—is more powerful than any critical thinking class.
Lately, Sloman has been studying “adversarial cooperation,” the idea that groups with diverse viewpoints outperform homogeneous ones. “That way you test each other and get rid of bad ideas early,” he explains. Diversity isn’t a moral ideal here—it’s a functional one.
The political implications of his work are profound. In his 2013 research, Sloman found that asking people to explain policy mechanisms reduced extremism. But later work revealed a caveat: issues rooted in sacred values don’t budge. “Abortion is a good example,” he says. People don’t think in terms of consequences but in terms of fundamental moral commitments. Explanation can’t disrupt sacred beliefs because explanation is a consequentialist exercise—something sacred-value thinkers refuse to engage in.
Sloman is also watching how AI reshapes the knowledge illusion. Large language models can generate fluent explanations that sound authoritative. That fluency, he warns, may actually inflate people’s sense of understanding: “By virtue of hearing an explanation, you believe you understand it.” But when asked to provide truly novel causal reasoning, LLMs still struggle: “Generating causal structure is not their strength… though they’re getting better.”
As the interview ends, Sloman offers a surprisingly personal definition of success. “Success is when I get the feeling of satisfaction that makes me glow a little bit,” he says. It might come from a student, a child, a joke, or a long-sought accomplishment—but it’s always tied to the small, human moments that make struggle worthwhile.
The 6Degrees team extends its heartfelt thanks to Steven Sloman for his generosity, clarity, and decades of work illuminating how—and why—we think the way we do. His research reminds us that understanding is a collective achievement, and that wisdom begins with recognizing the limits of what any one mind can know.