Reasons to dislike reasons

🎉 I am writing these notes in Brick, a magical mystery no-bullshit publishing platform. Turns out writing goes much faster when I don't have to hit “Publish” or do git commit.

You can use it too — check it out at

i hate reasons

i hate having to give reasons, i hate the whole framework of reasons and having reasons or needing them

i think it's too easy to delude yourself (i think i just did a reason ew ew and double ew)

i'm good at coming up with reasons

— @SimianSunny

There are good things about reasons, and there are bad things about reasons.

To some people, the good things are obvious, and the bad things are very much not obvious. Hence this post.

Very briefly: many correct and good things are not legible. They either can't be explained easily at all, or at least can't be explained in the framework your interlocutor uses. The demand to be legible will reduce both productivity and happiness—but especially happiness.

To understand everything written here, you need to find a feeling in yourself. This feeling is: "what you're doing to me is bad, and I can't even explain why, not in a way you would understand, and perhaps I myself don't know why, either". Or you can try from the other side—"this is good and I don't know why". Or both.

The end goal of learning about legibility is to figure out a healthy attitude towards wanting things you can't explain.

But first, a warning

A bunch of people saw this on Reddit and went "he says reasons are bad, but reasons are good, so he is wrong". This is not what I intended, and it is a failure to apply synthesis.

Reasons are both bad and good. They are good in obvious ways, and bad in less obvious ways. Once you know that they are bad and good, you can start figuring out

  • how to resist when somebody tries to use reasons on you and you're in danger of being hurt by them,
  • and how not to hurt everyone around you when using them.

I.e. I am not claiming reasons should go away, I am saying "here are things about reasons that are bad" to an audience that is (likely) convinced reasons are good. The result should be a better understanding of reasons' strengths and weaknesses, not a rejection of reasons altogether.

Okay, let's go.

Actually no, let's not go. I ended up producing a summary of everything. Here it is.

  • Seeing Like a State: Now that you are aware of how whole cities can be ruined by the use of "superior reasons", I want you to keep this in mind whenever you have subordinates or people who otherwise depend on your decisions. I also want you to keep this in mind whenever you analyze the actions of someone higher than you in the chain of command.
  • The Secret Of Our Success: Just a bunch of examples of 'local knowledge', because more examples are better.
  • The Meridian of Her Greatness: You know how top-down decisions can be suboptimal, but now I also want you to know how you might miss that they are suboptimal (and damaging as well).
  • The Control Group Is Out Of Control: You probably think this whole thing doesn't apply to science and science-like things, but actually it kinda does, so watch out.
  • Epistemic Learned Helplessness: If you want me to do a thing, your stated reasons appear valid, and my gut feeling says it's dangerous and unwise but I can't put my finger on why, sometimes the best option is to tell you to go to hell.
  • The Structure Of Scientific Revolutions: If you want a more nuanced view of how you should operate, at least when doing science, please read this. Maybe you'll get inspired to apply a similar approach in other areas of your life, too.

Okay, now definitely let's go.

Seeing Like A State

First, read Scott's review of Seeing Like A State, a book about how rational and reasonable ideas turn out to be disasters time and again.

Scott starts with the story of “scientific forestry” in 18th century Prussia. Enlightenment rationalists noticed that peasants were just cutting down whatever trees happened to grow in the forests, like a chump. They came up with a better idea: clear all the forests and replace them by planting identical copies of Norway spruce (the highest-lumber-yield-per-unit-time tree) in an evenly-spaced rectangular grid. Then you could just walk in with an axe one day and chop down like a zillion trees an hour and have more timber than you could possibly ever want.

This went poorly. The impoverished ecosystem couldn’t support the game animals and medicinal herbs that sustained the surrounding peasant villages, and they suffered an economic collapse. The endless rows of identical trees were a perfect breeding ground for plant diseases and forest fires. And the complex ecological processes that sustained the soil stopped working, so after a generation the Norway spruces grew stunted and malnourished. Yet for some reason, everyone involved got promoted, and “scientific forestry” spread across Europe and the world.

And this pattern repeats with suspicious regularity across history, not just in biological systems but also in social ones.

What is the alternative to rational and reasonable ideas? One of them is metis, "practical wisdom", knowledge that can't be explained in any way that would sound reasonable to you, and yet it works. But since you don't see how exactly it works, you still think that you can do much better by applying logic and thinking, and the result is shit.

Or it's the other way round. Somebody else thinks you should do something because it seems logical to them. And then your life is shit.

The Secret Of Our Success

To get a bit more appreciation for tradition in general, read Scott's review of The Secret Of Our Success.

Henrich wants to debunk (or at least clarify) a popular view where humans succeeded because of our raw intelligence. In this view, we are smart enough to invent neat tools that help us survive and adapt to unfamiliar environments.

Against such theories: we cannot actually do this. Henrich walks the reader through many stories about European explorers marooned in unfamiliar environments. These explorers usually starved to death. They starved to death in the middle of endless plenty. Some of them were in Arctic lands that the Inuit considered among their richest hunting grounds. Others were in jungles, surrounded by edible plants and animals. One particularly unfortunate group was in Alabama, and would have perished entirely if they hadn’t been captured and enslaved by local Indians first.

[...] Nor is it surprising that they failed. Hunting and gathering is actually really hard. Here’s Henrich’s description of how the Inuit hunt seals:

"You first have to find their breathing holes in the ice. It’s important that the area around the hole be snow-covered—otherwise the seals will hear you and vanish. You then open the hole, smell it to verify it’s still in use (what do seals smell like?), and then assess the shape of the hole using a special curved piece of caribou antler. The hole is then covered with snow, save for a small gap at the top that is capped with a down indicator. If the seal enters the hole, the indicator moves, and you must blindly plunge your harpoon into the hole using all your weight. Your harpoon should be about 1.5 meters (5ft) long, with a detachable tip that is tethered with a heavy braid of sinew line. You can get the antler from the previously noted caribou, which you brought down with your driftwood bow.

The rear spike of the harpoon is made of extra-hard polar bear bone (yes, you also need to know how to kill polar bears; best to catch them napping in their dens). Once you’ve plunged your harpoon’s head into the seal, you’re then in a wrestling match as you reel him in, onto the ice, where you can finish him off with the aforementioned bear-bone spike.

Now you have a seal, but you have to cook it. However, there are no trees at this latitude for wood, and driftwood is too sparse and valuable to use routinely for fires. To have a reliable fire, you’ll need to carve a lamp from soapstone (you know what soapstone looks like, right?), render some oil for the lamp from blubber, and make a wick out of a particular species of moss. You will also need water. The pack ice is frozen salt water, so using it for drinking will just make you dehydrate faster. However, old sea ice has lost most of its salt, so it can be melted to make potable water. Of course, you need to be able to locate and identify old sea ice by color and texture. To melt it, make sure you have enough oil for your soapstone lamp."

No surprise that stranded explorers couldn’t figure all this out. It’s more surprising that the Inuit did.

I mean, tradition sucks. It makes people do pointless things. Right? But you also somehow have to cope with the fact that apparently, tradition is a result of a very long process of figuring things out and seeing what works better and what doesn't. If you ignore that just because people haven't bothered to record why exactly it all works—or because they were wrong about why it works—well, you're missing out on a thing that works. 

To repeat: the fact that somebody is completely wrong about why something works doesn't mean it doesn't work. Does religion make people happier? Does it help coordinate society? Maybe it does, maybe it doesn't. But you can't figure out whether it does just by correctly observing that religions are full of lies.

The Meridian of Her Greatness

Then you might read Lou Keep's The Meridian of Her Greatness, which is about how legible ways of measuring quality of life can be completely and utterly wrong about quality of life.

Lou is fun to read, but also hard to read. Don't worry if it's not your taste.

Every so often, a thinker trips onto the global stage and says something like: “Sure, people say that they’re unhappy, and they say that it’s the economy, but GDP is steadily growing and a lot of those people are rich. So they’re wrong.” Then Donald Trump gets elected or some country ‘exits, and the slightly clammier thinker regurgitates their argument, but this time they punctuate it with: “You dicks.”

[A quote from a book on Industrial Revolution:] "Nothing in the nature of a sudden deterioration of standards, according to these writers, ever overwhelmed the common people. They were, on average, substantially better off after than before […] and, as to numbers, nobody can deny their rapid increase. By the accepted yardsticks of economic welfare – real wages and population figures – the Inferno [of capitalism], they maintained, never existed; the working classes, far from being exploited, were economically the gainers and to argue the need for social protection against a system that benefited all was obviously impossible."

Whatever you want to say about capitalism, the Industrial Revolution was a terrible time to be poor. Anecdotally, I’ve yet to meet a single person who dismisses Marx without qualification (that isn’t a contest, comment section), and the qualification is always, always, “Well, yes, that made sense if you lived in England then. But today it’s wrong.”

[...] About 100 of [the pages of The Great Transformation] detail the near-constant immiseration and degradation of (what would become) the British working class. From all ends of that island they were drawn into cities, or factory towns, and, piece-by-piece, reduced. “Reduced how?” Simply reduced. Diminished in every way one can imagine, qualification is unnecessary.

[...] The economists are right. Wages went up. Everything went up (except people – consider the height graph foreshadowing).

The horrors also happened, of course. That part isn’t a lie.

Humans are social animals [citation needed], and we really don’t like being removed from other people we like. We also really hate when the social laws around us are changed to facilitate that. The argument pro capitalism is efficiency and wealth. That wealth is, theoretically, to give people the lives they want. But what if the lives that people really want have much more to do with social ties than they do with economic ties? And, indeed, that the process of creating a labor market destroys precisely those.

The Control Group Is Out Of Control

Scott again, not a review this time but a post of his own: The Control Group Is Out Of Control. Not about society any more, but about science. The idea is: without relying on illegible heuristics and rules of thumb, you can be led to believe pretty much anything.

Let's say there was something definitely wrong, but we tried to prove it was right—using normal scientific methods. Would we succeed? Oh. Looks like we would. Hmmmm.

Trying to set up placebo science would be a logistical nightmare. You’d have to find a phenomenon that definitely doesn’t exist, somehow convince a whole community of scientists across the world that it does, and fund them to study it for a couple of decades without them figuring it out.

Luckily we have a natural experiment in terms of parapsychology – the study of psychic phenomena – which most reasonable people believe don’t exist, but which a community of practicing scientists believes in and publishes papers on all the time.

The results are pretty dismal. Parapsychologists are able to produce experimental evidence for psychic phenomena about as easily as normal scientists are able to produce such evidence for normal, non-psychic phenomena. This suggests the existence of a very large “placebo effect” in science – ie with enough energy focused on a subject, you can always produce “experimental evidence” for it that meets the usual scientific standards.

Then the post goes on to investigate and debunk a particularly good meta-analysis that concludes that a psychic phenomenon is real.

The problem is: most normal meta-analyses are not as good as that one. So most of science is shit. How does science even work then?

The highest level of the Pyramid of Scientific Evidence is meta-analysis. But a lot of meta-analyses are crap. [The meta-analysis discussed in this post] got p < 1.2 * 10^-10 for a conclusion I'm pretty sure is false, and it isn’t even one of the crap ones. Crap meta-analyses look more like this, or even worse.

How do I know it’s crap? Well, I use my personal judgment. How do I know my personal judgment is right? Well, a smart well-credentialed person like James Coyne agrees with me. How do I know James Coyne is smart? I can think of lots of cases where he’s been right before. How do I know those count? Well, John Ioannidis has published a lot of studies analyzing the problems with science, and confirmed that cases like the ones Coyne talks about are pretty common. Why can I believe Ioannidis’ studies? Well, there have been good meta-analyses of them. But how do I know if those meta-analyses are crap or not? Well…

There is no answer. There isn't supposed to be an answer; the point is to get you thinking. You can either be objective and unbiased—and have no defense against people who will use good science to make you believe in bullshit—or you can be subjective, and then who knows how well it's gonna work out. Maybe not well. Maybe not well at all. But at least you can now consider being subjective as a possibility.

Epistemic Learned Helplessness

A lite, readable version of the same thing is Scott's Epistemic Learned Helplessness. If you have learned that you can be persuaded by completely wrong arguments, you just don't let yourself be persuaded by arguments anymore:

A friend recently complained about how many people lack the basic skill of believing arguments. That is, if you have a valid argument for something, then you should accept the conclusion. Even if the conclusion is unpopular, or inconvenient, or you don’t like it. He envisioned an art of rationality that would make people believe something after it had been proven to them.

And I nodded my head, because it sounded reasonable enough, and it wasn’t until a few hours later that I thought about it again and went “Wait, no, that would be a terrible idea.”

I don’t think I’m overselling myself too much to expect that I could argue circles around the average uneducated person. Like I mean that on most topics, I could demolish their position and make them look like an idiot. Reduce them to some form of “Look, everything you say fits together and I can’t explain why you’re wrong, I just know you are!” Or, more plausibly, “Shut up I don’t want to talk about this!”

[...] You could consider this a form of epistemic learned helplessness, where I know any attempt to evaluate the arguments is just going to be a bad idea so I don’t even try. If you have a good argument that the Early Bronze Age worked completely differently from the way mainstream historians believe, I just don’t want to hear about it. If you insist on telling me anyway, I will nod, say that your argument makes complete sense, and then totally refuse to change my mind or admit even the slightest possibility that you might be right.

[...] I consider myself lucky in that my epistemic learned helplessness is circumscribed; there are still cases where I’ll trust the evidence of my own reason. In fact, I trust it in most cases other than infamously deceptive arguments in fields I know little about. But I think the average uneducated person doesn’t and shouldn’t. Anyone anywhere – politicians, scammy businessmen, smooth-talking romantic partners – would be able to argue them into anything. And so they take the obvious and correct defensive maneuver – they will never let anyone convince them of any belief that sounds “weird”.

The Structure Of Scientific Revolutions

Thomas Kuhn's The Structure Of Scientific Revolutions (download from LibGen) is a constructive take on the question I asked above: "Assume trying to be completely rational and unbiased is a bad idea. However, 'anything goes' is a bad idea too. What is a useful way of being biased?"

Kuhn proposes that a good way to go about it is through paradigms. A paradigm is a set of implicit beliefs about what is worth studying, and what isn't. What counts as an explanation, and what doesn't. What is important to take into account, and what isn't. What scientific foundations we have collectively decided not to challenge for a while, so that we can milk as many useful observations as possible out of them before discarding them.

Simplest example: "p<0.05 is good enough" is one bit of a paradigm. It helped us get a lot done. If we had much stricter standards from the start, we wouldn't have gotten anywhere. But now it's starting to become less useful, so we might discard it in favor of something else, perhaps not "p"-based at all. In other words: the standard of what counts as knowledge is changing.
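To make the arbitrariness of the cutoff concrete, here is a minimal Python sketch (the function name and the numbers are mine, purely illustrative) of how a hard threshold turns a continuous measure of evidence into a binary verdict:

```python
# A hard significance threshold turns a continuous measure of evidence
# into a binary verdict about what "counts" as knowledge.
ALPHA = 0.05  # the paradigm's conventional cutoff

def counts_as_knowledge(p_value: float) -> bool:
    """The paradigm's rule: a result 'counts' iff p < ALPHA."""
    return p_value < ALPHA

print(counts_as_knowledge(0.049))  # True: publishable "knowledge"
print(counts_as_knowledge(0.051))  # False: not "knowledge", despite nearly identical evidence
```

The two p-values describe almost exactly the same strength of evidence, yet the rule sorts them into opposite bins. That is what makes it a paradigm convention rather than a fact about nature.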

The role of knowledge

This is a prelude to the question: "Why does it matter what counts as knowledge and what doesn't? It's stupid that p=0.05 is knowledge and p=0.06 isn't."

What is the role of knowledge? Why arbitrarily decide that something is knowledge and something isn't, based on a +0.01 difference?

My answer is: because deciding whether something "counts" or "doesn't count" as knowledge has consequences. If it's knowledge, it will be published. Other people will get a tiny extra bit of credibility if they use your "knowledge" in their own papers. The decision whether to treat something as knowledge or not is a small part of the world-wide algorithm, "Science", aimed at improving the world.

The algorithm can change if a better one is found, and then the conditions under which something becomes "established knowledge" might change, or the whole label might even get replaced with something different. The label itself is not what matters.

Scaling back from science to arguments

After reading The Structure of Scientific Revolutions, you should have an idea of how even true scientific claims can be incomprehensible, or seem unjustified, just because they belong to a different paradigm. Well, not "seem unjustified". They are unjustified, within your paradigm.

The same thing applies to arguments.

The thing I have internalized after reading Kuhn is that my whole reasoning is based on a bunch of axioms that are... not obviously sane. Heuristics. "Why do I trust Zvi Mowshowitz? Well, he sounds smart. But many people sound smart. In what way is he different?"

It's not that it's impossible to explain why I trust one person and not another; it just... feels so hard that I can't be bothered.

The end result is that nowadays I am going around, aware that I can't prove anything to anyone if they have led a different life from the one I've led. This has changed my style of arguing, primarily in the direction of the magic phrase "This sounds reasonable, but it was tried and bad things happened so let's not / good things happened so I'll try anyway".

As well as an occasional "this won't go anywhere, let's stop". If somebody describes an idea to me and asks me "Okay, tell me why this won't work", and I know that they won't be persuaded merely by giving them a couple of examples they haven't taken into account, there's nothing else I can do. "It won't work because {a description of my whole life thrown at you like a ball}." Not a thing you can do easily, although some writers are trying. Hence "this won't go anywhere, let's stop".

"So what should we do instead?"

"How to operate in a world where reasons have significant weaknesses" would be the topic of a pretty thick book. I am not writing this book. Meaningness is supposed to be this book, but it does not provide an easy-to-understand answer and is a work in progress.

I do not advocate conservatism. Blind conservatism is somewhat safer than following everything to its logical conclusion—you can rely on the rest of society to keep you alive—but still dumb.

The ideal outcome of reading this post is that you will

  • realize reasons can be bad (not everyone has realized this; some are trying to fix the failure cases of legibility by piling more and more legible fixes on top of it),
  • integrate "reasons are good" and "reasons are bad" internally,
  • start noticing both the good and the bad sides of reasons as you live your life,
  • and use this extra source of observations to develop better heuristics than you would have otherwise.

"Can you rephrase this whole thing very very briefly, please"

I can't, but someone on Reddit managed to:

I would re-state the thesis (which I may have not understood!) as something like, "acting, or insisting that others act, only in response to the most clearly-articulated reason available in the moment of action, does not always lead to preferred or even to predicted outcomes."

Which, stated that way, seems trivial in the sense that the only question I have for the author of this piece is, "yes--and?"

— /u/naraburns, Sep 16, 2020

My answer is: yeah. I genuinely think many people are terrible at spotting when they are gently nudged or even forced to "respond to the most clearly-articulated reason available in the moment of action", have no way to defend themselves, and even internally feel like they should submit to this demand. Hence this post.

Further reading

If you want to go deeper into failures of explicit reasoning—or cases where explicit reasoning is just a rationalization for how we actually make decisions—or generally anything that points in a different direction from "how do we make explicit reasoning even more awesome":

I've also been recommended:

Robin Hogarth's book "Educating Intuition", and Arthur Reber's experiments on artificial grammars and the idea of confabulation.


Surprised none of Chesterton's work shows up here. Heretics and Orthodoxy would fit in well.

But when are reasons useful?


In addition to [everything you can think of], here's one more, non-obvious case where reasons are useful, detailed in a great comment from Reddit:

It's often suggested that reasoning will allow you to solve complicated, unfamiliar problems. This is plainly incorrect, as the articles in the link illustrate. In complicated, unfamiliar problems, even if you can make an argument for a specific strategy that sounds compelling, this is nowhere near a guarantee that it will actually work.

Second, for problems that are complicated and familiar, people often develop strategies that are effective, despite their not being able to explain how or why they work.

I think what's missing here is an account of 1) when reasoning is useful, and 2) how you actually solve complicated and unfamiliar problems, given the fact that reasoning about it doesn't work.

My answer is that complicated unfamiliar problems are nearly impossible to solve until they are made familiar. And I think that reasoning-- even when it's total bullshit-- plays an important role in the familiarization process, ESPECIALLY when familiarization is a team activity. When you have multiple people hacking at different parts of a problem, it helps to have a legible story that everybody on the team can understand about what strategies you're attempting, why they're being attempted, and how to interpret their (likely) failure.

— /u/Artimaeus332, Sep 16, 2020