This isn’t directly answering your question, but one thing I can easily imagine happening is: some techno-pessimists spend lots of effort pulling together evidence and arguments for one premise of techno-pessimism, rush (or leave unmade) the arguments for other crucial premises, and then find it frustrating that this doesn’t change many people’s minds. My guess is there are, say, 5+ substantive disagreements that are all upstream of typical disagreement with techno-pessimism, so I expect the bigger challenge for techno-pessimists in convincing this community will be not necessarily having watertight arguments for any one background premise, but doing a thorough enough job of addressing the wide range of background disagreements.
I don’t know about the community as a whole, so speaking just for myself, I think I’d require at least intuitive arguments (no RCTs needed) for each of the following premises to consider working on this—I’ll leave it to you to judge whether that’s a reasonable burden of proof:
That a technologically sophisticated future would be bad overall in expectation (i.e. (1)), even under ethical views that are most common in this community
That permanently, globally halting technological progress is tractable (in a way that isn’t highly objectionable on other grounds)
That sufficiently improving the trajectory of AI is relatively intractable (or that AI in particular shouldn’t be a dominant consideration)
Bonus points: directly addressing heuristics that make techno-pessimism very unintuitive (especially the heuristic that “ban all new technologies” is so un-nuanced/general that it’s very suspect)
Other thoughts:

The bar also seems very different depending on precisely what organizations/individuals within the community you’re trying to convince, and what you’re trying to convince them of (e.g. the global health side of the community seems to be unresponsive to careful, intuitive arguments that are unaccompanied by RCTs, while the AI people are very interested in such arguments).
I think most of the difficulty comes from the generically very high burden of proof for arguing that any given cause area / intervention is the most effective use of resources, not from techno-pessimism-specific disagreements (although those don’t help).
Re: “If you feel this is a bad framework, please let me know.”—yup, it seems like this framework overlooks/obscures many other potential reactions to (1).