What do you perceive as “types of evidence at the top of the EA hierarchy of evidence”? I sense that the use of evidence has at least somewhat changed since early critiques of this sort.
I was mostly thinking of high-quality empirical work (RCTs and the like) and fields that study, and mostly operate within, the status quo (orthodox economics, psychology).
Don’t get me wrong, I definitely acknowledge that EAs engage in abstract philosophical discussions, but aren’t these generally about how the status quo might become much worse (AI, XR) rather than about how the status quo could be changed to make things better?
It might very well be true that even the status quo is so incredibly tough to study that it will take most of our efforts. But that seems like quite a biased way to study truth, no?
I think that many EAs’ ideas about how the “status quo could be changed to make things better” run through radically different pathways than yours -- status-quo-shattering positive effects of artificial general superintelligence that doesn’t kill or enslave us, space colonization (through the power of said AGI), brain uploading, etc. Not all of that sounds like my idea of a good time, to be honest, but it’s definitely present within EA.
I think the focus right now is on “how the status quo might become much worse” because existential AI risk is believed to be close at hand (e.g., within a few decades), while the positive results are seen as likely if only we can get through the existential-risk segment of our relationship with AI. And much of the badness of an AI catastrophe is attributed to the loss of that future world that is much better than the status quo.