saying that it’s unfeasible will tend to make it more unfeasible
Thank you for saying this. It’s frustrating to have people who agree with you bat for the other team. I’d like to see how accurate people’s infeasibility predictions actually are: take a list of policies that passed, a list that failed to pass, mix them together, and see how much better than random chance people can unscramble them. Your “I’m not going to talk about political feasibility in this post” idea is a good one that I’ll use in future.
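For concreteness, here is a minimal sketch of how that unscrambling test could be scored. Everything in it is hypothetical (the policy names, the feasibility_test helper, and the random “forecaster” are stand-ins for illustration); the point is just that the baseline to beat is the best blind guess, which isn’t necessarily 50%.
```python
import random

# Hypothetical data: policy descriptions labelled by whether they actually passed.
# The names are invented for illustration.
passed = ["policy A", "policy B", "policy C"]
failed = ["policy D", "policy E", "policy F"]

def feasibility_test(predict_passed, passed, failed, seed=0):
    """Score a forecaster at unscrambling passed vs. failed policies.

    predict_passed: any callable that takes a policy description and
    returns True if the forecaster thinks it passed.
    Returns (forecaster accuracy, blind-guess baseline).
    """
    labelled = [(p, True) for p in passed] + [(p, False) for p in failed]
    random.Random(seed).shuffle(labelled)  # mix the two lists together

    correct = sum(predict_passed(policy) == did_pass for policy, did_pass in labelled)
    accuracy = correct / len(labelled)
    baseline = max(len(passed), len(failed)) / len(labelled)  # always guess the larger class
    return accuracy, baseline

# A forecaster who guesses at random should hover around the baseline.
acc, baseline = feasibility_test(lambda policy: random.random() < 0.5, passed, failed)
print(f"forecaster accuracy: {acc:.2f}, chance baseline: {baseline:.2f}")
```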
Poor meta-arguments I’ve noticed on the Forum:
Using a general reference class when you have a better, more specific class available (e.g. taking an IQ test, having the results in your hand, and refusing to look at them because “I probably got 100 points, because that’s the average.”)
Bringing up common knowledge, i.e. things that are true but that everyone in the conversation already knows and applies. (E.g. “Logical arguments can be wrong in subtle ways, so just because your argument looks airtight doesn’t mean it is.” A much better contribution is to actually point out the weaknesses in the specific argument in front of you.)
And, as you say, predictions of infeasibility.
It’s frustrating to have people who agree with you bat for the other team.
I don’t like “bat for the other team” here; it reminds me of “arguments are soldiers” and the idea that people on your “side” should agree your ideas are great, while the people who criticize your ideas are the enemy.
Criticism is good! Having accurate models of tractability (including political tractability) is good!
What I would say is:
Some “criticisms” are actually self-fulfilling prophecies, rather than being objective descriptions of reality. EAs aren’t wary enough of these, and don’t have strong enough norms against meta/PR becoming overrepresented or leaking into object-level discussions. This is especially bad in early-stage brainstorming and discussion.
On Doing the Improbable + Status Regulation and Anxious Underconfidence: EAs are far too inclined to abandon high-EV ideas that are <50% likely to succeed. There should be a far larger number of failures, weird experiments, and risky bets in EA (the toy expected-value comparison below illustrates why). If you’re too willing to give up at the smallest problem, then “seeking out criticism” can turn into “seeking out rationalizations for inaction” (or “seeking out rationalizations for only doing normal/simple/predictable things”).
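To put numbers on that (purely invented ones), here is a toy expected-value comparison between a “safe” project and a long shot that is less than 50% likely to succeed:
```python
# Toy expected-value comparison; the numbers are invented for illustration only.
safe_value, safe_prob = 100, 0.9           # near-certain, modest impact
longshot_value, longshot_prob = 1000, 0.2  # fails 80% of the time

safe_ev = safe_value * safe_prob              # 100 * 0.9 = 90
longshot_ev = longshot_value * longshot_prob  # 1000 * 0.2 = 200

print(f"safe EV: {safe_ev}, long-shot EV: {longshot_ev}")
# The long shot has more than twice the expected value despite usually failing,
# so a portfolio that takes such bets should expect (and tolerate) many failures.
```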
Using a general reference class when you have a better, more specific class available
I agree this is one of the biggest things EAs currently tend to get wrong. I’d distinguish two kinds of mistake here, both of which I think EAs tend to make:
Over-relying on outside views over inside views. Inside views (making predictions based on details and causal mechanisms) and outside views (making predictions based on high-level similarities) are both important, but EA currently puts too much thought into outside views and not enough into inside views. If you’re NASA, your outside views help you predict budget and time overruns and build in good safety/robustness margins, while your inside views let you build a rocket at all.
Picking the wrong outside view / reference class, or not even considering the different reference classes on offer. Picking a good reference class can be extremely difficult; in some cases, many years of accumulated domain expertise may be the only thing that allows you to spot the right surface similarities to put your weight down on.
Strong upvote for these.
It’s not that criticism in general is bad; it’s that people who agree with an idea (setting political considerations aside) are snuffing it out based on questionable predictions of political feasibility. I just don’t think people are good at predicting political feasibility. How many people said Trump would never be president (despite FiveThirtyEight warning there was a 30 percent chance)?
Rather than political feasibility being the only point of disagreement, I would actually prefer someone who is against a policy to criticise it on more substantive grounds (like pointing out an error in my reasoning). Criticism needs to come in the right order for any policy discussion to be productive. Maybe this:
Given the assumptions of the argument, does the policy satisfy its specified goals?
Are the goals of the policy good? Is there a better set of goals we could satisfy?
Is the policy technically feasible to implement? (Are the assumptions of the initial argument reasonable? Can we steelman the argument to make a similar conclusion from better assumptions?)
Is the policy politically feasible to implement?
I think talking about political feasibility should never ever be the first thing we bring up when debating new ideas. And if someone does give a prediction on political feasibility, they should either show that they do produce good predictions on such things, or significantly lower their confidence in their political feasibility claims.
I think talking about political feasibility should never ever be the first thing we bring up when debating new ideas.
I think this is much closer to the core problem. If we don’t evaluate the object-level at all, our assessment of the political feasibility winds up being wrong.
When I hear people say “politically feasible”, what they mean at the object level is “will the current officeholders vote for it, and also not get punished in their next election as a result.” This ruins the political analysis, because it artificially constrains the time horizon. In turn, it rules out political strategy questions entirely: messaging (you have to shoehorn the policy into whatever the current messaging is), salience (you’re stuck with whatever the current priorities are in public opinion), and tradeoffs among different policy priorities. It also leaves aside longer-term, fundamental work like persuading the public, which can’t be done over a single election season and is usually abandoned for shorter-term gains.