Very interesting post and discussion in the comments.
> I said at the start that it's non-obvious what follows, for the purposes of action, from outside-view longtermism. The most obvious course of action that might seem comparatively more promising is investment, such as saving in a long-term foundation, or movement-building, with the aim of increasing the amount of resources longtermist altruists have at a future, more hingey time.
Throughout a lot of this post, I was wondering whether the sort of reasoning given in that quote would generalise to an update in favour of building "flexible" rather than "targeted" career capital. It seems to me that flexible career capital could be seen as a form of investment that at least allows you to "punt" to your future self, which could be valuable if a later time within your lifetime is "hingier", or at least provides a clearer view of which investment strategies would be best.
For example, instead of focusing specifically on becoming influential in AI policy in the next two decades, one could focus on developing generic prestige/credentials/connections that will be useful in the decades after that, that would retain value if later insights suggest work on other x-risks has higher leverage in this lifetime, or that could support future movement-building activities informed by new insights (e.g., regarding population ethics or metaethics).
So I'm wondering whether that's a sensible generalisation of that reasoning, and, if so, whether it would suggest Will would push somewhat against 80k's move towards prioritising targeted career capital (as shown, for example, in the update on this page).