I think that many EAs’ ideas about how the “status quo could be changed to make things better” run through radically different pathways than yours -- status-quo-shattering positive effects of artificial general superintelligence that doesn’t kill or enslave us, space colonization (through the power of said AGI), brain uploading, etc. Not all of that sounds like my idea of a good time, to be honest, but it’s definitely present within EA.
I think the focus right now is on “how the status quo might become much worse” because existential risk from AI is believed to be close at hand (e.g., within a few decades), while the positive results are seen as likely if only we can get past the existential-risk segment of our relationship with AI. And much of the badness of an AI catastrophe is attributed to the loss of that future world that is much better than the status quo.