Monocausal explanations for everything bad

Sum is a book that provides 40 mutually incompatible explanations of what happens after you die. Derek Sivers adapted this format into 27 explanations for how to live. Here are 8 monocausal explanations for why things are bad. None are true, and all are true.

Principal-agent problem

Power naturally centralizes, but people can only have a linear direct impact. The world is complex, and so each person can only deal with a slice of it. Leaders make large decisions, but must delegate most decision-making to underlings. And by most, I mean "nearly all". The failure mode for a micromanaging leader of a large organization is not that they stay involved in everything, it's that the organization eventually grinds to a halt.

Life as an underling isn't too bad, except that your incentives and the incentives of your boss aren't exactly the same. You may agree on the same values, but what's optimal for you often isn't optimal for your boss. And this goes up the chain: your boss's optimal isn't optimal for their boss. The CEO's optimal isn't optimal for the board, and the board's optimal isn't optimal for the shareholders.

This is true literally everywhere. If everyone had aligned incentives, many problems would evaporate. Since what's good for you isn't what's good for others, many organizations choose explicit metrics to optimize in a way that approximates their true objectives. They pick metrics that coax responsibility down the stack while keeping it aligned with the central goals.

This scales far beyond personal agents. Companies have incentives and rewards. It's common in tech to transfer value from a "legacy" industry, rather than create any new value for people. And companies will, nearly exclusively, do what's best for them, not their users or society.

We shouldn't be shocked by Mark Zuckerberg, above. He's following the natural incentives that exist: his personal wealth is tied to Facebook's network strength, and Facebook desires a monopoly on the social graph. And we're no better.

Goodhart's Law

Reality has no objective function, so whenever we choose metrics to optimize, we're explicitly not optimizing for something else that actually matters. This is Goodhart's Law, but it goes deeper than most people imagine. There are two flaws: objectives can be gamed more cheaply than actually doing the best thing, and objectives can be steered towards what's best for the people choosing them.

I had a friend at a company that measured software engineering productivity using "lines of code committed". For those outside of tech, this is akin to measuring a house builder's productivity by "number of 2x4s placed in a home". Creating this incentive destroys whatever benefit the original metric ever measured: when programmers are compared by lines of code, they find very creative ways to produce a lot of lines of code, without any particular care for functionality. Software engineering productivity got worse, so they switched to measuring the number of tickets closed. History repeats itself.
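To make the gaming concrete, here's a toy sketch (my own invented example, not my friend's company's actual code): two functions with identical behavior, one of which scores several times higher on a lines-of-code metric.

```python
# Two ways to sum even numbers. Both behave identically; one "scores"
# roughly five times higher on a lines-of-code metric.

def sum_evens_concise(numbers):
    return sum(n for n in numbers if n % 2 == 0)

def sum_evens_padded(numbers):
    # Same behavior, stretched out to inflate the line count.
    result = 0
    index = 0
    while index < len(numbers):
        current = numbers[index]
        remainder = current % 2
        if remainder == 0:
            result = result + current
        else:
            pass  # odd number, skip it
        index = index + 1
    return result

assert sum_evens_concise([1, 2, 3, 4]) == sum_evens_padded([1, 2, 3, 4]) == 6
```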

These metrics usually make sense, and are not bad. However, they can't possibly capture the nebulous thing we actually, collectively want. This is core to the AI alignment problem, but it extends far beyond AI and explicit objective functions.

When I worked at a large tech company, I was directly responsible for well over 20x my own (quite high) annual compensation in resources — salaries of my team, operational expenses, etc. And I did my best, from what I can tell. But my own incentives were to show impact, to make my team and boss look good, and to meet objectives that I had a hand in shaping. Meeting these objectives demonstrated that I was a good and effective leader, and should be promoted and compensated accordingly. Or at least, those were my incentives.

If you've never had the pleasure of setting OKRs for a large organization, here's how it often goes. Everyone comes up with "objective" performance indicators that they can be measured against. These are often informed by levels above them ("Expand use of AI" may be a mandate — okay, how can we do that?), but are scoped to each layer in an organization. As such, everyone wants something that nominally rewards hard work and effectiveness, but also rewards themselves. They want to look good! So whether goals are sandbagged or not, they're naturally planned to be locally obtainable so everyone looks good.

Even if you end up with good objectives, optimizing for any set of metrics means everything not included in them becomes more important, precisely because it's what gets neglected.

We've seen the issues with measuring teacher performance by test scores, or companies that optimize for short-term growth at their own long-term expense. Pick a metric, and it ceases to be a good metric.

Reality tunnels and dualistic fixation

Assuming you're middle aged, your life experience represents roughly 0.00000002% of the total lived experience of the people alive today. If that total experience were the size of an Olympic swimming pool, you'd have direct access to less than a tenth of a teaspoon — a few drops, nearly nothing. Much of this experience overlaps, but a significant portion is unique to each person or group. You don't really know what it's like to be someone else near you, much less someone in a very different life circumstance. It doesn't matter how much empathy you have. You're helplessly limited.
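If you want the back-of-the-envelope arithmetic behind those numbers, here's a rough sketch. The inputs (about 8 billion people alive, a global average age around 30, a 45-year-old reader, a 2.5 million liter pool) are my assumptions, not precise figures.

```python
# Back-of-the-envelope: your share of the lived experience of people alive today.
# All inputs are rough assumptions, not measured values.

people_alive = 8_000_000_000      # ~8 billion people
average_age_years = 30            # rough global average age
your_age_years = 45               # "middle aged"

total_experience = people_alive * average_age_years   # person-years
your_share = your_age_years / total_experience        # ~1.9e-10

olympic_pool_liters = 2_500_000   # 50m x 25m x 2m pool
teaspoon_liters = 0.005

your_volume = your_share * olympic_pool_liters
print(f"your share: {your_share:.1e} ({your_share * 100:.8f}%)")
print(f"pool analogy: {your_volume * 1000:.2f} mL, "
      f"about {your_volume / teaspoon_liters:.2f} teaspoons")
```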

And yet, in democracies, each person votes as if they somewhat understand society's needs. In any commerce-oriented society, people make purchasing decisions with limited information. It's easy to pollute when you're not the one seeing the pollution.

I call this limitation a "reality tunnel". Reality is vast, wide, and expansive, yet we can only gaze at it with blinders on — looking through a tunnel and inferring what the rest is probably like.

It's actually worse than that: we often have no idea we're looking through a tunnel. Our perception is one of "me" observing "the world" directly, when actually our experience resides inside a fabrication of dualism, one that denies connection and fixes hard boundaries between people and things.

It's easy to optimize for ourselves, individually, when we have such a limited and distorted perspective. We can't possibly take everything into account, so our decisions can't possibly reach a global optimum — or at least something globally better. So we resort to one-weird-tricks that somewhat align incentives in a lossy manner, such as capitalism, where everyone optimizes for themselves in a way that directionally benefits others.

Embedded growth obligations

Imagine a world with no growth. Populations fluctuate based on resource availability. This was the experience of humanity for most of our history. We didn't really innovate, and things didn't change much. What little innovation happened is only observable over hundreds of generations — each generation had almost no impact.

In this world, tribes were small. There was accountability for your actions based on relationships, and incentives were for survival. This matters because exponential systems rely on continued growth to survive, and thus necessarily collapse: once growth becomes required for a system's survival, any slowdown threatens the whole thing.

Yes, there was minimal innovation, but also minimal self-caused existential risk. It's hard to shoot yourself in the foot if you never had a gun.

Is growth bad, then? Well, it certainly gives us a lot. We can spend money today that we (or other people) will earn in the future. Things get better over time!

Let's look at one example to see where it falls apart. Home ownership is good for society: it aligns owners' incentives to care for their homes and communities, and it serves as a way to slowly build wealth. So, to optimize for home ownership, we ended up with many policies that both resulted in and required growth. Home equity is a great wealth-building tool only if it doesn't just store value, but grows it. We can decrease risk with 30-year mortgages (a very weird product), subsidized both financially and in risk by the government. And thus, prices go up. Real estate becomes an asset class (read: exponential returns over time), meaning it will structurally outgrow wages.
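Here's a toy illustration of that divergence. The starting price, wage, and growth rates are made-up but plausible numbers, not historical data.

```python
# Toy compounding: an asset growing a couple of points faster than wages
# pulls away dramatically over a 30-year mortgage term.

home_price = 300_000.0    # assumed starting price
annual_wage = 60_000.0    # assumed starting wage
home_growth = 0.05        # assumed 5% annual appreciation
wage_growth = 0.03        # assumed 3% annual wage growth

for year in (0, 10, 20, 30):
    price = home_price * (1 + home_growth) ** year
    wage = annual_wage * (1 + wage_growth) ** year
    print(f"year {year:2d}: price/wage ratio = {price / wage:.1f}x")
# The ratio climbs from 5.0x toward roughly 8.9x: the asset structurally
# outgrows the income meant to pay for it.
```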

When we talk about bad systemic incentives, most are tied to something that needs to grow to survive.

We're now surrounded by institutions that must grow to continue to exist. This includes graduate programs producing PhD candidates who will never get professorships, and cities that must grow in order to survive. This is massively problematic, because requiring growth means more growth is needed over time — we're dealing with an exponential system. And exponential systems blow up.

From the great philosopher Visakan Veerasamy, the big lesson of survivor bias is that you should optimize for being a survivor.

Local optima gradient descent

We want what's better, for ourselves and, by extension, others. This isn't just a moral platitude; there's good evidence that increasing wealth truly makes the world better for everyone.

But exploring better options is exceedingly expensive. We can't deconstruct the US government to try liquid democracy for a few years, with the intention of returning to our current systems and institutions if it doesn't work out well.

In fact, it's possible to perceive exploration as a random process that occasionally gets lucky, and the lucky + better systems survive and out-compete inferior systems. Most of us would agree that this is a slow and haphazard process.

Exploration is hard for several reasons. It requires taking on risk, potentially implementing systems that are significantly worse, even if they're on the way to a better optimum. It also has change friction: systems aren't built and torn down every day, and that process is expensive by itself. We saw this with COVID: when many jobs were temporarily paused due to lockdowns, exploration became temporarily cheaper. Many workers found other options, and many companies learned to operate with different processes. This coalesced into a new local optimum, and when the world was ready to return, things couldn't easily revert. We're seeing this resistance actively with remote workers not wanting to return to the office once they've experienced a different way of working.

So, naturally, we end up in local optima that work decently enough for enough people. But, if we're honest, we all know that things can't possibly be optimal. Things could be so much better. Our governments are exceedingly inefficient, and even some of the most profitable companies are filled with people who are hardly working.
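The metaphor in this section's title, as a toy sketch: a greedy hill climb (the maximization cousin of gradient descent) on a made-up landscape settles on whichever peak is closest to where it started, not the best one available.

```python
import math

def quality(x):
    # A landscape with a decent local peak (height ~1 near x = -1)
    # and a better global peak (height ~2 near x = 3).
    return 2 * math.exp(-(x - 3) ** 2) + math.exp(-(x + 1) ** 2)

def hill_climb(x, step=0.01, iterations=2_000):
    # Greedy exploration: only accept a move that improves things right now.
    for _ in range(iterations):
        for candidate in (x - step, x + step):
            if quality(candidate) > quality(x):
                x = candidate
    return x

# Two starting points, two different "good enough" outcomes.
for start in (0.0, 2.0):
    found = hill_climb(start)
    print(f"start at {start}: settled at x={found:.2f}, quality={quality(found):.2f}")
```

Starting near the small hill, the search stops there and never sees the taller one; getting out would require accepting worse positions for a while, which is exactly the expensive exploration described above.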

This is actually a huge political problem: we want value creation, but we also must incentivize value capture, which directs too much value to some places, so we tax in order to reverse this and direct money to effective commons. Yet we can't trust centralized institutions to create effective tax policy, collect the money, and distribute it effectively. Some tax programs have good ROI, but in general they're terribly inefficient; they're just marginally more effective at achieving the outcomes we want, particularly in the commons, than the alternatives.

But because exploring alternatives is so hard, we're stuck. Each local optimum has local winners who accumulate power and most certainly don't want change. Our institutions are crufty and aging. Many of our laws governing the internet were written before the internet had exploded in popularity. Many of our politicians are luddites.

Ideas exploring alternatives, such as The Network State, are relegated to the fringes of society and not broadly taken seriously. It's just too hard to fix anything.

Prisoner's dilemma and Moloch Beauty Wars

A lot about the world is zero-sum. Each of us has a fixed amount of attention, so total attention is fairly fixed. We can increase wealth in general, but in a relative sense, wealth still has to be allocated across the same group of people.

In nearly every situation, what's optimal for me (in isolation) isn't optimal for you. There is often a course of action that benefits us both, yet neither of us is incentivized to cooperate.
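This is the standard prisoner's dilemma structure. A minimal sketch with the conventional payoff numbers: whatever you do, my best response is to defect, and the same logic applies to you, so we land on mutual defection even though mutual cooperation is better for both of us.

```python
# Classic prisoner's dilemma payoffs (higher is better for each player).
# The specific numbers are conventional, not from any real situation.
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def best_response(their_move):
    # My payoff is the first element when I'm listed as the first player.
    return max(("cooperate", "defect"),
               key=lambda my_move: PAYOFFS[(my_move, their_move)][0])

for their_move in ("cooperate", "defect"):
    print(f"if they {their_move}, my best response is to {best_response(their_move)}")
# Both lines print "defect", so we end up at (1, 1) instead of (3, 3).
```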

It's cheap to pollute the epistemic commons because I don't have to deal with it. News has become entertainment because that's the best economic model. In many markets, we act within beauty wars that most of us agree are bad, but can't escape.

We've come up with a few hacks that serve as band-aids. We have social and moral structures, embedded deep in our firmware, that promote cooperation. We publicly exalt people who make cooperative decisions. Altruism is good!

But all the other incentives are for defection, and these other incentives often win. I know someone—a good friend and moral person—who works for a company that I find fairly immoral due to its impact on society. The friend even acknowledges that the company is indeed a net negative for society — it has a regulatory moat, and lobbies aggressively to keep it so it can rent-seek. "It's bad, but hey, it's a good job."

It's easy to point fingers, so I'll point at myself. I worked at a large tech company whose revenue is nearly exclusively from targeted ads. Ads are more effective when people are invasively tracked, and coupled with an "effective" ad platform comes immense unchecked power. There was an idea that good ads are actually a service to consumers, because they help people find better products more effectively. But, if we're honest, that's mostly internal moral justification. I didn't work on ads, but in many ways I was incentivized to be indifferent to the wider impacts their policies had on culture and technology. It paid my paycheck.

Algorithms that power social feeds are optimized for engagement over any other social cost. Making people too depressed would likely hurt long-term engagement, so there are sometimes forces pushing to engage people with "good" content. But it's the same game with the same rules. For years, Netflix famously optimized for "watch minutes". Is this good for us all, or for their subscription rates?
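As a sketch of what "optimized for engagement" means in practice (the post fields and weights here are invented for illustration, not any real platform's ranking code): anything that isn't a term in the score, like the effect on someone's mood, has zero weight by construction.

```python
# A toy feed ranker: sort posts by predicted engagement and nothing else.
# Post fields and weights are invented for illustration.

posts = [
    {"id": "calm-essay",   "p_click": 0.02, "p_comment": 0.001, "p_reshare": 0.002},
    {"id": "outrage-bait", "p_click": 0.09, "p_comment": 0.030, "p_reshare": 0.040},
    {"id": "family-photo", "p_click": 0.05, "p_comment": 0.010, "p_reshare": 0.005},
]

def engagement_score(post):
    # Whatever isn't a term here (well-being, accuracy, long-term trust)
    # has zero weight by construction.
    return 1.0 * post["p_click"] + 4.0 * post["p_comment"] + 6.0 * post["p_reshare"]

feed = sorted(posts, key=engagement_score, reverse=True)
print([post["id"] for post in feed])  # outrage-bait ranks first
```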

Institutions with power care about their own survival, and thus become corrupted over time as they optimize for themselves. Altruism, rooted in these social "hacks", doesn't scale indefinitely.

It makes me wonder to what extent people involved, even loosely, in truly horrific historical events felt this way. Fighting against systems that benefit us is really hard. And, for many systems, we probably won't have that much of an impact, and our contribution barely helps… right?

All of these misaligned incentives against the common good are a form of pollution that prevents us from actually innovating and solving problems. Governments are inefficient because they're filled with people who can micro-defect again and again. But it reaches everywhere: cooperation has never been solved effectively, so we all defect in minor ways in all situations.

Broken causal chains

Causality is a hard problem. We see this throughout the sciences, which attempt to isolate independent variables in order to understand causes. We have a replication crisis in the social sciences because it's far easier to do what looks like science than to actually establish causality.

We are inside the graph, so we can't look at it objectively. We get a snapshot of a dynamic system that is constantly adapting. We can only locally observe what's happening, so we don't have a direct link between cause and effect.

For example, a policy may promote higher education by creating subsidized lending facilities. This policy has the immediate impact of increasing access and attendance at universities. The higher attendance expands the schools, and because capital is easier to access, it incentivizes colleges to optimize for the present experience of students (not their long-term financial health). Huge new classrooms, fancy dorms, and expensive food halls are built. Layers of administrative staff are created to provide a concierge educational experience. This increases the cost of tuition, which places a heavier long-term burden on less wealthy students. It used to be possible to work your way through college, and in many places, it isn't anymore. Over time, the education itself degrades, since less-qualified people are entering college. So we end up with policy proposals like debt forgiveness, which put no downward pressure on prices at all.

I've looked at many systems that produced bad outcomes, and often, each step along the way made sense. Each was optimized for its local point in time, instead of the eventual outcome. Want to know why Google has had so many chat apps that it can't keep alive? Local incentives that made perfect sense zoomed in, but lacked overall strategy.

It's near impossible to solve for a systemic outcome when each link is so nebulously attached to others. "Unintended consequences" are the natural outcome of being inside a causal chain, without the ability to observe from the outside.

Complexity hides reality from us

Reality is complex, so there are two natural attractors: living in the present naturalistically, or developing psychotechnologies that model reality in a compressed and abstracted way. For example, coordinating with people you don't know is hard, but if we both believe in and support a certain currency, we can economically transact without needing to trust each other too much.

So, we continue complexifying the world. Most people have no idea how to change the oil in their car, and if we actually had to survive in the wilderness, most of us wouldn't make it. Milton Friedman famously lectured about how no one knows how to build a pencil.

We specialize, and then specialize more. Each person knows enough about their domain, and then relies on abstractions for everything else. These abstractions are addressable and serve as interfaces to engage with reality. I know if I have car issues, I can take it to my car mechanic. My car mechanic has probably never heard of HTML and JavaScript, but that's okay — he can still use websites.

On its surface, this seems like a happy system, but embedded in these abstraction layers is a lot of power. When you can no longer do things for yourself, you're dependent on others and on a system that you had no design authority over. We see this especially in marginalized groups: they must interface with a system that wasn't designed for them, where most of the power is out of their hands. When the world is exceedingly complex, corrupt systems evolve to maximize their own benefit while appearing benign. It's easy to get poor people to take on subprime debt or pay finance fees they don't really understand. Society has become complex enough that people are being left behind.

This problem impacts everyone, because we can't directly optimize our lives. A leader delegates responsibility, but also knowledge. They no longer have to know how the system actually works. It's no surprise that the US President who ran on a platform of "draining the swamp" was arguably more reliant on insiders, lobbyists, and the rest of the swamp than other recent Presidents. It's simply not possible for even a competent leader to actually understand the political complexities of their domain and make the best decisions.

As the world gets more complex, more and more complexity gets abstracted away, both to simplify the world, and to hide power.


One nice part of this writing exercise is that it encourages a metarational way of thinking: each cause can be considered separately and exclusively, even though they cannot all be the single true cause at once. Each is partially true, and many are actually different sides of the same issue.

I passed this around to a couple friends, and one prompted me to add a kind disclaimer: I'm actually exceedingly optimistic about the world and humanity, and think about these problems primarily as a way to approach solving them.

Warm thanks to Jake Orthwein for the idea, which was a tweet that lodged in my brain for over a year.
