Thanks for the comment! I first want to register strong agreement with many of your points, e.g. that the root of the problem isn't technology itself so much as our inability to do things like coordinate well and think long-term. I also think that focusing too much on individual risks while missing the larger picture is a failure mode some in the community fall into, and Ord's book might have done well to spend some time on this perspective. (He does talk about risk factors, which goes part of the way towards a more systemic view, but he doesn't really address the fundamental drivers of many of these risks, which I agree seems like a missed opportunity.)
That being said, I think I have a few main disagreements here:
Lack of good opportunities for more general longtermist interventions. If there were really promising avenues for advancing along the frontiers you suggest (e.g. trying to encourage cultural/philosophical perspective shifts, if I'm understanding you correctly), then I'd probably change my mind here. But imo these kinds of interventions still don't look as promising as direct work on individual risks, which remains super neglected in cases like bio/AI.
Work on individual risks does (at least partially) generalise. For specific future risks like bio and AI, it doesn't seem like we can learn which strategies actually work (e.g. regulation/slowing research, better public materials and education about the risks, integrating more with the academic community) unless we try those strategies out in practice.
Addressing some risks might directly reduce others. For instance, getting AI alignment right would probably be a massive boon for our ability to handle other risks, natural ones included. This is pretty speculative though, because we don't really know what a future where we get AI right looks like.
Great! (-: