Good idea, I reposted the article itself here: https://forum.effectivealtruism.org/posts/GyenLpfzRKK3wBPyA/the-simple-case-for-ai-catastrophe-in-four-steps
I’ve been trying to keep the “meta” posts and the main posts mostly separate, so hopefully the discussions of the metas and of the main posts don’t get tangled together.
I like Scott’s Mistake Theory vs Conflict Theory framing, but I don’t think this is a complete model of disagreements about policy, nor do I think the complete models of disagreement will look like more advanced versions of Mistake Theory + Conflict Theory.
To recap, here are my short summaries of the two theories:
Mistake Theory: I disagree with you because one or both of us are wrong about what we want, or about how to achieve what we want.
Conflict Theory: I disagree with you because ultimately I want different things from you. The Marxists, whom Scott was originally arguing against, will natively see this as being about individual or class material interests, but this can be smoothly updated to include values and ideological conflict as well.
I polled several people about alternative models of political disagreement at the same level of abstraction as Conflict vs Mistake, and people usually got to “some combination of mistakes and conflicts.” To that obvious model, I want to add two other theories (this list is incomplete).
Consider the opening of Thomas Schelling’s 1960 The Strategy of Conflict:
I claim that this “rudimentary/obvious idea,” that the conflictual and cooperative elements of many human disagreements are structurally inseparable, is central to a secret third thing distinct from Conflict vs Mistake. If you grok the “obvious idea,” we can derive something like:
Negotiation Theory(?): I have my desires. You have yours. I sometimes want to cooperate with you, and sometimes not. I take actions maximally good for my goals and respect you well enough to assume that you will do the same; however, in practice a “hot war” is unlikely to be in either of our best interests.
In the Negotiation Theory framing, disagreement/conflict arises from dividing the goods in non-zero-sum games. I think the economists’/game theorists’ “standard models” of negotiation are natively closer to “conflict theory” than “mistake theory” (e.g., their models often assume rationality, which means the “can’t agree to disagree” theorems apply). So disagreements are due to different interests, rather than different knowledge. But unlike Marxist/naive conflict theory, we see that conflicts are far from desired or inevitable, and there are usually trade deals that are better by both parties’ lights than failing to coordinate, or war.
(Failures from Negotiation Theory’s perspective often centrally look like coordination failures, though the theory is broader than that and includes not being able to make peace with adversaries.)
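To make the “trade deals beat war” point concrete, here’s a toy sketch with invented numbers (all values are my own illustrative assumptions, not from any standard model): two parties divide a surplus of 10, fighting burns most of it, and every split that leaves both sides better off than war forms the bargaining range.

```python
# Toy illustration with made-up numbers: two parties dividing a surplus of 10.
# "War" destroys value, so a wide range of negotiated splits beats fighting
# for BOTH sides -- this is the space Negotiation Theory says the real
# disagreement is over.

WAR_PAYOFF = (2, 2)   # each side's payoff if they fight (most value destroyed)
SURPLUS = 10          # total value available if the parties coordinate

def deal_payoff(share_a):
    """Payoffs if A takes `share_a` of the surplus and B takes the rest."""
    return (share_a, SURPLUS - share_a)

# The bargaining range: every split where both sides do strictly better
# than their war payoff is a mutually preferred deal.
bargaining_range = [
    s for s in range(SURPLUS + 1)
    if deal_payoff(s)[0] > WAR_PAYOFF[0] and deal_payoff(s)[1] > WAR_PAYOFF[1]
]
print(bargaining_range)  # -> [3, 4, 5, 6, 7]
```

The point of the sketch: even when interests genuinely conflict (every extra unit for A is one less for B), there is a whole range of deals both sides prefer to fighting, so “conflict” and “cooperation” are entangled in the same game.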
Another framing, in some ways a synthesis and in some ways a different view altogether that can sit inside each of the previous theories, is something many LW people talk about, though not exactly in this context:
Motivated Cognition: People disagree because their interests shape their beliefs. Political disagreements happen because one or both parties are mistaken about the facts, and those mistakes are downstream of material or ideological interests shading one’s biases. Upton Sinclair: “It is difficult to get a man to understand something, when his salary depends on his not understanding it.”
Note the word “difficult,” not impossible. This is Sinclair’s view and I think it’s correct. Getting people to believe (true) things that it’s against their material interests to believe is possible, but the skill level required is higher than for a neutral presentation of the facts to a neutral third party.
Interestingly, the Motivated Cognition framing suggests that there might not be a complete fact of the matter about whether “Mistake Theory” vs “Conflict Theory” vs “Negotiation Theory” is more correct for a given political disagreement. Instead, your preferred framing has a viewpoint-dependent and normative element to it.
Suppose your objective is just to get a specific policy passed (no meta-level preferences like altruism), you believe this policy is in your interests and those of many others, and the people who oppose you are factually wrong.
Someone who’s suited to explanations, like Scott (or like me?), might naturally fall into a Mistake Theory framing and write clear-headed blogposts about why the people who disagree with you are wrong. If the Motivated Cognition theory is correct, most people are at least somewhat sincere, and at a sufficiently high level of simplicity, people can update to agree with you even if it’s not immediately in their interests (smart people in democracies usually don’t believe 2+2=5 even in situations where it’d be advantageous for them to do so).
Someone who’s good at negotiations and cooperative politics might more naturally adopt a Negotiation Theory framing, and come up with a deal that gets everybody (or enough people) what they want while having their preferred policy passed.
Finally, someone who’s good at (or temperamentally suited to) non-cooperative politics and the more Machiavellian side of politics might identify the people who are most likely to oppose their preferred policies, and destroy their political influence enough that the preferred policy gets passed.
Anyway, here are my four models of political disagreement (Mistake, Conflict, Negotiation, Motivated Cognition). I definitely don’t think these four models (or linear combinations of them) explain all disagreements, or are the only good frames for thinking of disagreement. Excited to hear alternatives [1]!
[1] In particular, I’m wondering if there is a distinct case for ideological/memetic theories that operate at a similar level of abstraction to the existing theories, as opposed to thinking of ideologies as primarily giving us different goals (which would make them slot in well with all the existing theories except maybe Mistake Theory).