What do you think people in the EA community get wrong (or fail to sufficiently consider) when it comes to cause prioritisation?
Great (and difficult!) question, Jordan. I (Bob) am responding to this one for myself and not for the team; others can chime in as they see fit. The biggest issue I see in EA cause prioritization is overconfidence. It’s easy to think that because there are some prominent arguments for expected value maximization, we don’t need to run the numbers to see what happens if we have a modest level of risk aversion. It’s easy to think that because the future could be long and positive, the EV calculation is going to favor x-risk work. Etc. I’m not anti-EV; I’m not anti-x-risk. However, I think these are clear areas where people have been too quick to assume that they don’t need to run the numbers because it’s obvious how they’ll come out.
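To make that first point a bit more concrete, here is a deliberately crude toy sketch (entirely made-up numbers and a simplistic concave-value model of risk aversion, not anyone's actual estimates or the models we use in practice), just to show how even a modest amount of risk aversion can flip the conclusion of an EV comparison:

```python
# Toy illustration with hypothetical numbers: compare a "safe" intervention
# with a speculative one under plain expected value and under a crude
# risk-averse criterion (concave value in lives saved).

def expected_value(prob_success, lives_saved):
    """Plain expected value: probability of success times payoff."""
    return prob_success * lives_saved

def risk_averse_value(prob_success, lives_saved, alpha=0.5):
    """Crude risk aversion: the payoff enters with diminishing returns (alpha < 1)."""
    return prob_success * lives_saved ** alpha

# Hypothetical interventions, purely for illustration.
safe = {"prob_success": 0.90, "lives_saved": 10}               # a well-studied program
speculative = {"prob_success": 0.001, "lives_saved": 100_000}  # a long-shot catastrophe-prevention bet

for name, x in [("safe", safe), ("speculative", speculative)]:
    print(name,
          "EV =", round(expected_value(**x), 2),
          "| risk-averse value =", round(risk_averse_value(**x), 2))

# The speculative option dominates on plain EV (100 vs 9), but a modest
# degree of risk aversion (alpha = 0.5) reverses the ranking (~0.32 vs ~2.85).
```

A concave value function is just the simplest way to encode "a near-certain moderate win can beat a long-shot huge win"; more careful risk-averse decision theories differ in the details, but the fact that the ranking is this sensitive to the modelling choice is exactly why I think you have to actually run the numbers rather than treat the conclusion as obvious.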
Is there any writing from RP or anywhere else that describes these flaws in more depth, or actually runs the numbers on EV calculations and x-risk?
Yes! I recommend starting with this and this.
Thank you!
I think another common pitfall is not working through things from first principles. I appreciate that it’s challenging and that any model is unrealistic. Still, BOTECs (back-of-the-envelope calculations), pre-established boundaries between cause areas/worldviews, and our first instincts more broadly are likely to (and often do) lead us astray. Separately, I’m glad EA is so self-aware and worried about healthier epistemics, but I think we could do more to guard against echo-chamber thinking.
I agree that thinking from first principles can be great, but, as I’m sure you’re aware, it’s super difficult! Do you have any thoughts on encouraging and/or facilitating more of this kind of thinking in the community?
That’s fair. The main thought that came to mind, which might not be useful, is developing patience (eagerness to get to conclusions is often incompatible with the work required) and choosing your battles early. As you say, it can be hard and time-consuming, so people in the community asking narrower questions and focusing on just one or two of them is probably the way to go.