Good feedback—I see the logic of your points and don't find fault with any of them.
On AIXR as valid and what the response would be, you're right; I emphasize the practical nature of the policy recommendation because otherwise the argument can veer into the metaphysical. To use an analogy: if I claim there's a 10% chance another planet could collide with Earth and destroy it in the next decade, you might begrudgingly accept the premise just to move the conversation on to the practical aspect of my forecast. Even if the claim were true, what would my policy intervention look like? Build interstellar lifeboats? Is that feasible in the absence of concrete evidence?
Agree—armchair psychoanalysis isn’t really useful. What is useful is understanding how heuristics and biases work on a population level. If we know that, in general, projects run over budget and take longer than expected, we can adjust our estimates. If we know experts mis-forecast x-risk, we can adjust for that too. That’s far from psychoanalysis.
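To make that concrete, here's a toy sketch (the numbers and the function are mine, made up purely for illustration) of adjusting an inside-view estimate by the average overrun observed in a reference class of similar projects:

```python
# Toy sketch of reference-class debiasing. The 1.4 overrun factor is
# hypothetical, standing in for whatever the historical data shows.

def debias(naive_estimate: float, avg_overrun: float) -> float:
    """Scale an inside-view estimate by the average overrun observed
    in a reference class of comparable projects."""
    return naive_estimate * avg_overrun

naive_budget = 1_000_000  # planner's inside-view budget estimate, in dollars
adjusted = debias(naive_budget, avg_overrun=1.4)  # assume ~40% typical overrun
print(f"Adjusted estimate: ${adjusted:,.0f}")  # Adjusted estimate: $1,400,000
```

The same move applies to expert x-risk forecasts: if the track record shows a systematic bias, scale accordingly.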
I don't really know what the median view on AIXR within EA communities truly is. One thing's for certain: the public narrative around the issue tilts heavily towards the "pause AI" camp and the Yudkowskys out there.
On the common sense of X-risk—one of the neat offices at the State Department that few people know of is the Nuclear Risk Reduction Center, or NRRC. It's staffed 24/7 and has foreign-language-designated positions, meaning at least someone in the room speaks Russian, etc. The office is tasked with staying in touch with other nations to reduce the odds of a miscalculation and nuclear war. That makes tons of sense. Thinking about big problems that could end the world makes sense in general—disease, asteroids, etc.
What I find troubling is the propensity to assign odds to distant right-tail events, and then to take the second step of recommending costly and questionable policies. I don't think these are EA consensus positions, but they certainly receive outsized attention.
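To spell out why this troubles me (with made-up numbers, not anyone's actual estimates): once even a small probability is assigned to an extinction-scale loss, the expected-value arithmetic can be read as justifying almost any intervention, which is exactly why the probability assignment itself deserves scrutiny.

```python
# Toy expected-value arithmetic with hypothetical numbers, showing how a
# small right-tail probability can dominate a policy calculation.
p_catastrophe = 0.10            # the 10% from my planet-collision analogy
lives_at_stake = 8_000_000_000  # roughly everyone alive today
expected_loss = p_catastrophe * lives_at_stake
print(f"Expected loss: {expected_loss:,.0f} lives")  # 800,000,000 lives
```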
I'm glad you found my comment useful. I think then, with respect, you should consider retracting some of your previous comments, or at least reframing them to be more circumspect and make clear that you're taking issue with a particular framing/subset of the AIXR community, as opposed to EA as a whole.
As for the points in your comment, there's a lot of good stuff here. I think a post about the NRRC, or even an insider's view into how the US administration thinks about and handles nuclear risk, would be really useful content on the Forum, and also incredibly interesting! Similarly, I think a post on how a community handles making 'right-tail recommendations' when those recommendations may erode its collective and institutional legitimacy[1] would be really valuable. (Not saying that you should write these posts; they're just examples off the top of my head. In general, I think you have a professional perspective a lot of EAs could benefit from.)
I think one thing we agree on is that there's a need to ask and answer a lot more questions, some of which you mention here (beyond 'is AIXR valid'):
- What policy options do we have to counteract AIXR, if it's real?
- How does the effectiveness of these policy options change as our estimate of the risk changes?
- What is the median view on risk in the AIXR community, the broader EA community, and the broader AI community?

And so on.
[1] Some people in EA might write this off as 'optics', but I think that's wrong.
These are all great suggestions! As for my objections to EA as a whole versus a subset: it reminds me a bit of a defense that folks employ whenever a larger organization is criticised, the kind one hears from Republicans in the US, for example: "It's not all of us, just a vocal subset!" That might be true, but I think it misses the point. It's hard to soul-search and introspect as an organization or a movement if we collectively say "not all-EA" when someone points to the enthusiasm around SBF and ideas like buying up coal mines.

Love this thoughtful response!