[Question] Is EA compatible with technopessimism?

Lots of EA discourse focuses explicitly on the exponential capabilities of technology, far more so than most other social movements: longtermism, x-risk reduction, progress studies, and so forth. It also focuses on opportunities that have been systematically neglected by existing social institutions—be it global aid or, again, x-risks. Yet EA as a whole seems to me to be very techno-optimist, believing that new technology can be guided for good, that extinction can be averted, and so on. This belief seems to me more ideological than scientific, and perhaps will always remain so. Hence I’m curious.

Question: Is EA compatible with technopessimism?

But for that let me try defining technopessimism first. I can broadly see it as encompassing two beliefs:

  1. New technology in our society causes more harm than good (including expected future harm and good).

  2. The more effective immediate action on learning this is to limit the creation of new technology itself, rather than to improve policy and institutions to control it.

It is possible to believe neither 1 nor 2; 1 but not 2; or both 1 and 2. For the scope of this post, let’s assume technopessimists are people who believe both 1 and 2. I’m sure there’s also a doomerist set of beliefs here, namely that 1 is true but no form of change is worth pursuing; I’ll also ignore that for the scope of this post. (If you feel this is a bad framework, please let me know.)

Regarding the first component, I’m sure some EAs would say that the community is open to it if there’s stronger proof that tech progress is a bad thing, and that evidence-based reasoning is required. But I’m not sure what kind of evidence would suffice. It’s hard to use RCTs to scientifically prove that technological progress is a bad thing; at best you’ll get some policy frameworks that people will view through their usual mental filters. There’s already plenty of work on various schools of economics (neoclassical, socialist, etc.), various types of social structures and governments, and so on—it’s unlikely (albeit not impossible) that we quickly develop a slam-dunk case for one set of ideas being superior to another.

Regarding the second component, there is significant uncertainty, shared by people both inside and outside the EA community, as to how easy any form of regulation, control, or guiding is, whether by influencing existing institutions or building new ones, and whether via incremental changes or revolutionary ones. Yet both technopessimists (as I’ve defined them) and most EAs share the belief that change is possible. The question is how to evaluate which forms of change are more tractable. There are, for instance, those in the AI alignment community who believe 1 but not 2, because they believe AGI cannot be regulated. But again you end up with the same problem: it’s much harder to definitively prove what works.

Hence an elaboration of the original question:

Is there a reasonable burden of proof that the EA community as a whole is willing to accept in favour of technopessimism?

P.S. Please try to avoid arguments in favour of techno-optimism or technopessimism in the comment section if you can. I’m more interested in understanding what the burden of proof is, and what current EA stances are, than in debating for or against them. I totally understand that this still requires expressing these arguments; I’d just prefer that the intent be descriptive rather than persuasive.