I think this list of 8 goals in 3 categories could actually be adapted into something like a template/framework applicable to a wide range of areas longtermism-inclined people might want to work on, especially areas other than AI and biorisk (where it seems likely that the key goal will usually simply be 1a, perhaps along with 1b).
E.g., nanotechnology, cybersecurity, space governance.
One could then think about how much sense each of these goals makes for that specific area.
I personally, tentatively feel that something along these lines should be done for each area before significant investment is made in it.
(But if this is done, it might be better to first come up with a somewhat better and cleaner framework, perhaps trying to make it MECE-like, i.e., mutually exclusive and collectively exhaustive.)
If a very initial exploration, at a level similar to that done in this post, still makes the area look like it warrants some attention, it would then probably be good to get more detailed and area-specific than this post gets for nuclear risk.