I think this is a great post and I’m glad you took the time to summarize your longer post!
In my experience, the longtermist/x-risk community has an implicit attitude of "we can do it better" and "we're the only ones really thinking about this, so we'll forge our own institutions and interventions." I respect this attitude a great deal, but I think it causes us to underestimate how powerful the political and economic currents around us are (and how reliant we are on their stability).
It just doesn’t seem that unlikely to me that we come up with some hard-won biosecurity policy or AI governance intervention, and then geopolitical turmoil negates all the intervention’s impact. Technical interventions are a bit more robust, but I’d claim a solid subset of those also require a type of coordination and trust that systemic cascading risks threaten.
I love your thoughts on this. I need to do more thinking on whether this point is correct, but a lot of what you're saying about forging our own institutions reminds me of Abraham Rowe's forum post on EA critiques:
EA is neglecting trying to influence non-EA organizations, and this is becoming more detrimental to impact over time.
I’m assuming that EA is generally not missing huge opportunities for impact. As time goes on, theoretically many grants / decisions in the EA space ought to be becoming more effective, and closer to what the peak level of impact possible might be.
Despite this, it seems like relatively little effort is put into changing the minds of non-EA funders, and pushing them toward EA donation opportunities, and a lot more effort is put into shaping the prioritization work of a small number of EA thinkers.