The fact that risk from advanced AI is one of the top cause areas is, to me, an example of at least part of EA being technopessimist about a concrete technology. So I don't think there is any fundamental incompatibility, nor that the burden of proof is particularly high, as long as we are talking about specific classes of technology.
If technopessimism requires believing that most new technology is net harmful, that's a very different question, and probably does not even have a well-defined answer.
"risk from advanced AI is one of the top cause areas is to me an example of at least part of EA being technopessimist" …assuming that particular example is a concern about impacts primarily on humans, could that be articulated as anthropocentric technopessimism?
On a broader sidebar, there is discussion around technology (particularly computing) with regard to ecological and other limits—e.g. https://computingwithinlimits.org
…assuming that particular example is a concern about impacts primarily on humans, could that be articulated as anthropocentric technopessimism?
Why would you want to describe it that way?
On reflection, I don't think it can be called anthropocentric, no. There are four big groups of beings involved here: humanity, animals, transhumanist post-humanity (hopefully without value drift), and unaligned AI. Three of those groups are non-human, and those concerned with AI alignment tend to be fighting in favor of more of those non-human groups than they are fighting against.
(It's a bit hard to tell whether we would actually like animals once they could speak, wield guns, occupy vast portions of the accessible universe, etc. It might turn out there are fundamental, irreconcilable conflicts. None are apparent yet, though.)