Mental Models for Smart Decision Making
Charles Munger is more than just a string of zeros in a bank account (an estimated $1.42 billion). As Warren Buffett’s right-hand man, he’s also a brilliant thinker. Given his credentials, it’s easy to picture Munger poring over endless reports, filtering information, and making complex decisions.
How does he, at 93 years old, think better and more rationally than the rest of us in our 25- (or 30-) year-old crises? Munger condenses a massive amount of information into a finite number of fundamental knowledge clusters. These clusters apply to countless real-world scenarios, simplifying decision-making.
Charles Munger uses mental models.
How to Utilize Knowledge
In a 1994 speech, Munger detailed his concept of a ‘latticework of mental models.’ In his words, you’ll never truly know something if you only remember isolated facts. If facts don’t hang together on a framework of theory, they’re not in a usable form.
In other words, if the dots don’t connect, you can’t extract any information. Anyone who works with graphs knows what I’m talking about. Those who merely memorize facts and regurgitate them are the ones who fail in school, and in life. When a teacher asks why something works, that student lists memorized features instead of connecting them into a coherent answer.
To avoid accumulating useless data and isolated facts, you need mental models (plural). Having just one model is like having only a hammer in your toolbox: every problem begins to look like a nail. These models should also come from different disciplines, because no single discipline encompasses all knowledge.
It sounds like a lot of work, but Munger suggests that 80 or 90 mental models can solve 90% of your problems.
Why Compress Knowledge into Models?
What’s in your toolbox? Mine has 30 precision screwdrivers, 4 pliers, 7 other tools not worth naming, 10 regular screwdrivers, 5 wrenches, and a tube with at least 200 assorted screws. Your mental models are your toolbox. So yes, you need dozens of well-founded, applicable ideas: principles.
But why go to all this trouble? When faced with a scenario, our first instinct is to seek information that confirms what we already think; when the facts don’t fit, we mold them until they do. This is confirmation bias. If you only have a hammer, everything looks like a nail.
To reduce the chance of reasoning errors and biases, we need a broader view. If you need to tighten a screw, you need a screwdriver in your toolbox.
But will something that worked in the past keep working in the future? No. And assuming so is extremely narrow-minded. Some models will be used more than others, but none are infallible. As Benjamin Graham, Warren Buffett’s mentor, said:
“You can get into much more trouble with a good idea than a bad idea because you forget that good ideas have limitations.”
To counter this tendency to treat one model as universal, Munger recommends adopting more models and expanding your repertoire of ideas.
Where to Seek This Knowledge?
By studying and deeply learning the principles behind what you already know: the laws of physics, the guiding concepts of biology, the most useful theories in psychology… Even law, with its thousands of statutes, is governed by a few principles; understand them, and the rest becomes simple.
If you take 100 best-selling books with “new” ideas, you’ll likely find a single ancient principle behind them. It’s just wearing a different mask. Like the Pokémon Ditto — it changes shape but remains a Ditto.
A latticework of mental models is about understanding these principles and building a knowledge tree. Weaving this tapestry of models is a long task, but with each new model, and each opportunity to apply one, your understanding deepens and your decisions become smarter.
1. Inversion
“I just want to know where I’m going to die so I can never go there” — Charles Munger
The mathematician Carl Gustav Jacob Jacobi liked to say, “invert, always invert.” He applied the idea to mathematics, but it transfers to our daily problems: complex problems are often better approached backward than head-on.
For example, Marcus Aurelius said, “what stands in the way becomes the way.” When faced with an obstacle, what do we do? We complain and try to confront it. Rarely do we think that the obstacle can be seen inversely: as an opportunity, not a difficulty.
Want to know how to live a happy life? Instead of thinking about what makes a good life, think about how to avoid a sad one. Want to be more innovative? Don’t try to force brilliant ideas; ask what might be hindering innovation. Want to be healthier? Think about what you should stop doing.
Aim to avoid stupidity because it’s easier than having a brilliant idea.
2. Black Swan
When something unexpected happens, we act as if it were impossible. “X couldn’t have happened because I didn’t think X could happen!” But just because we didn’t think something could happen doesn’t mean it couldn’t have been predicted. We act as if we’ve encountered a black swan.
As Nassim Taleb puts it: first, the black swan is an outlier, lying outside the realm of regular expectations, because nothing in the past points convincingly to its possibility. Second, it carries an extreme impact. Third, despite being an outlier, human nature makes us concoct explanations for its occurrence after the fact, making it seem explainable and predictable.
Empirical observation long held that all swans were white. Then Europeans reached Australia, home of the black swan. At first, the discovery was shocking: a black swan was not possible. Then it was accepted, and an explanation for its existence was created.
Knowing a system’s patterns allows you to extrapolate knowledge in many cases, but not all. There will be unpredictable outcomes. But just because you couldn’t predict the result doesn’t make it a black swan. It’s more likely you ignored some information than that an impossibility arose.
Treating everything as a black swan closes your mind to possibilities and ignores feedback (“ah, okay, so it wasn’t like that”), preventing you from learning how things actually work. Just because an outcome wasn’t what you expected doesn’t mean it wasn’t predictable.
3. Hanlon’s Razor
If you think the world is against you, you’re not alone. When something goes wrong, we blame some grand universal conspiracy against our lives (because the universe loves toying with us poor mortals).
The car won’t start? The universe wants you to be late. Your upstairs neighbor won’t stop making noise? They must be trying to drive you crazy. The espresso machine isn’t working? The coffee is sabotaging your life and addiction.
When something happens, we invent an explanation for the event, usually one involving a grand malicious conspiracy. And most of the time (almost always, let’s be honest), it isn’t the truth.
Hanlon’s Razor states: “Do not attribute to malice what can be adequately explained by negligence.” It’s a tool for quick decisions, less drama, and rational thinking. According to Murphy’s Law, things will go wrong. The initial reaction is to blame someone and create an explanation that justifies the scenario. Hanlon’s Razor is the antidote to this kind of thinking.
As Socrates said, no one does evil believing they are doing evil. People think they are helping — even if just themselves. You won’t know the intention behind an act, and assuming malice motivated it will only create undesirable situations for both parties.
Hanlon’s Razor tempers your reaction: instead of reacting with anger, respond with courtesy. Or, as the Stoics would say, always respond with kindness.
On the flip side, this mental model can lead us to act naively. So, don’t trust mafia members, and always consider context, experience, logic, and evidence when possible.
4. Occam’s Razor
“If you eliminate the impossible, whatever remains, no matter how improbable, must be the truth” — Sherlock Holmes
My advisor was a true adherent of Occam’s Razor: among all the models we developed, he always chose the one with the fewest parameters. Occam’s Razor proposes that, among competing explanations, the simplest one (the one with the fewest assumptions) is usually the best bet.
We should stop looking for complex solutions to simple problems and focus on what works given the circumstances. It’s extremely useful when we need to make quick decisions (like Hanlon’s Razor), judge what’s true without much evidence, or simplify something.
My dissertation was practically an application of Occam: a new variable generated purely from subtractions turned out to be the best solution, beating more complex models.
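To make “fewest parameters” concrete, here’s a minimal sketch in Python (with made-up data, nothing to do with my actual dissertation): the Akaike Information Criterion (AIC) charges a model for every parameter it uses, so a more complex fit only wins if its extra flexibility genuinely pays for itself.

```python
import numpy as np

rng = np.random.default_rng(42)
x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(scale=1.0, size=x.size)  # genuinely linear data

def aic(y_true, y_hat, k):
    # Least-squares form of AIC: n * ln(RSS / n) + 2k.
    # Each extra parameter costs the model 2 points, so added complexity
    # must buy a real reduction in error to be worth it.
    n = y_true.size
    rss = np.sum((y_true - y_hat) ** 2)
    return n * np.log(rss / n) + 2 * k

for degree in (1, 5):  # a straight line vs. a 5th-degree polynomial
    coeffs = np.polyfit(x, y, degree)
    y_hat = np.polyval(coeffs, x)
    print(f"degree {degree}: AIC = {aic(y, y_hat, degree + 1):.1f}")
```

On data that really is linear, the two-parameter line typically scores a lower (better) AIC than the six-parameter polynomial: Occam’s Razor, in numbers.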
Of course, like every mental model, Occam’s Razor is not infallible, and it’s not recommended for dangerous or high-stakes decisions.
Deciding to eliminate three to-do list apps and replace them with a paper planner is one thing; deciding whether to invest in company X or Y, or to quit your current job for another, is entirely different.
If there’s a possibility of using logic, personal insights, or scientific methods, use them. Decisions should be supported by evidence, not simplicity. In Einstein’s words, “things should be as simple as possible, but not simpler.”
These four mental models are the beginning of your tapestry. Now, go seek other models and start applying them.