Other thoughts:
The bar also seems very different depending on which organizations or individuals within the community you're trying to convince, and what you're trying to convince them of (e.g., the global health side of the community seems unresponsive to careful, intuitive arguments unaccompanied by RCTs, while the AI people are very interested in such arguments).
I think most of the difficulty comes from the generically high burden of proof for arguing that any given cause area or intervention is the most effective use of resources, not from techno-pessimism-specific disagreements (though those don't help).
Re: "If you feel this is a bad framework, please let me know": yes, this framework seems to overlook or obscure many other potential reactions to (1).