That’s fair pushback. My personal guess is that it’s actually pretty tractable to decrease it to, e.g., 0.9x of the original risk, with the collective effort and resources of the movement? To me it feels quite different to think about reducing something where the total risk is (prob = 10^-10) x (magnitude = 10^big), vs having (prob of risk = 10^-3) x (prob of each marginal person making a decrease = 10^-6) x (total number of people working on it = 10^4) x (magnitude = 10^10)
(Where obviously all of those numbers are pulled out of my ass)
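Just to make the arithmetic of that second decomposition explicit, here’s a quick sketch using the same made-up placeholder numbers from above (none of these are real estimates):

```python
# Rough expected-value sketch of the decomposition above.
# All numbers are the illustrative placeholders from the comment,
# not actual estimates of anything.
p_risk = 1e-3        # prob the risk materialises at all
p_marginal = 1e-6    # prob each marginal person achieves a decrease
n_people = 1e4       # total number of people working on it
magnitude = 1e10     # badness if it happens (arbitrary units)

expected_value = p_risk * p_marginal * n_people * magnitude
print(expected_value)  # roughly 1e5 in those arbitrary units
```

The point being that the product comes out large but not Pascalian, unlike a bare 10^-10 x 10^big term where everything rides on one tiny probability.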
To be clear, I’m not saying that the EA movement working on AI is a Pascal’s Mugging (I think it should be a top priority); I was just pointing out that saying the chance of x-risk is non-negligible isn’t enough.