Oh, just realised that I mentioned I hope to follow up on those directions for future work at some point, and that it’d also be great for others to do so, but I didn’t mention a third option: if anyone’s interested in collaborating with me on work along those lines—which could be as simple as making a single, more fleshed-out diagram for a specific risk—I might be keen on that.
In particular, if you have expertise relevant to a particular risk (e.g., AI safety, machine learning more generally, epidemiology, nanotech), collaborating on a fleshed-out diagram for that risk could be really interesting. Likewise if you know a lot about one step of the causal path or one intervention type—e.g., civilizational collapse and recovery, or ways there could be a continuous progression directly from a harmful event to an existential catastrophe.