It’s frustrating to have people who agree with you bat for the other team.
I don’t like “bat for the other team” here; it reminds me of “arguments are soldiers” and the idea that people on your “side” should agree your ideas are great, while the people who criticize your ideas are the enemy.
Criticism is good! Having accurate models of tractability (including political tractability) is good!
What I would say is:
Some “criticisms” are actually self-fulfilling prophecies, rather than being objective descriptions of reality. EAs aren’t wary enough of these, and don’t have strong enough norms against meta/PR becoming overrepresented or leaking into object-level discussions. This is especially bad in early-stage brainstorming and discussion.
On Doing the Improbable + Status Regulation and Anxious Underconfidence: EAs are far too inclined to abandon high-EV ideas that are <50% likely to succeed. There should be a far larger number of failures, weird experiments, and risky bets in EA. If you’re too willing to give up at the smallest problem, then “seeking out criticism” can turn into “seeking out rationalizations for inaction” (or “seeking out rationalizations for only doing normal/simple/predictable things”).
Using a general reference class when you have a better, more specific class available
I agree this is one of the biggest things EAs currently tend to get wrong. I’d distinguish two kinds of mistake here, both of which I think EAs tend to make:
Over-relying on outside views over inside views. Inside views (making predictions based on details and causal mechanisms) and outside views (making predictions based on high-level similarities) are both important, but EA currently puts too much thought into outside views and not enough into inside views. If you’re NASA, your outside views help you predict budget and time overruns and build in good safety/robustness margins, while your inside views let you build a rocket at all.
Picking the wrong outside view / reference class, or not even considering the different reference classes on offer. Picking a good reference class can be extremely difficult; in some cases, many years of accumulated domain expertise may be the only thing that allows you to spot the right surface similarities to put your weight down on.
It’s not that any criticism is bad; it’s that people who agree with an idea (once political considerations are set aside) are snuffing it out based on questionable predictions of political feasibility. I just don’t think people are good at predicting political feasibility. How many people said Trump would never be president (despite FiveThirtyEight warning there was a 30 percent chance)?
Rather than the only disagreement being political feasibility, I would actually prefer someone to be against a policy and criticise it based on something more substantive (like pointing out an error in my reasoning). Criticism needs to come in the right order for any policy discussion to be productive. Maybe this:
1. Given the assumptions of the argument, does the policy satisfy its specified goals?
2. Are the goals of the policy good? Is there a better set of goals we could satisfy?
3. Is the policy technically feasible to implement? (Are the assumptions of the initial argument reasonable? Can we steelman the argument to reach a similar conclusion from better assumptions?)
4. Is the policy politically feasible to implement?
I think talking about political feasibility should never ever be the first thing we bring up when debating new ideas. And if someone does give a prediction on political feasibility, they should either show that they do produce good predictions on such things, or significantly lower their confidence in their political feasibility claims.
I think talking about political feasibility should never ever be the first thing we bring up when debating new ideas.
I think this is much closer to the core problem. If we don’t evaluate the object-level at all, our assessment of the political feasibility winds up being wrong.
When I hear people say “politically feasible,” what they mean at the object level is “will the current officeholders vote for it, and also not get punished in their next election as a result.” This ruins the political analysis, because it artificially constrains the time horizon. In turn, it rules out political strategy questions entirely: messaging (you have to shoehorn the policy into whatever the current messaging is), salience (you’re stuck with whatever the current priorities are in public opinion), and tradeoffs among different policy priorities. And all of this leaves aside longer-term fundamentals like persuading the public, which can’t be done in a single election season and is usually abandoned for shorter-term gains.
Strong upvote for the points above about self-fulfilling criticisms and about outside views vs. reference classes.