Conflict of Interest re AGI risk between current and future generations:
Given the potentially transformative capabilities of AGI, it seems plausible that it could be the key to drastically extending longevity...
Thus, consider a generation that has the power to unleash AGI now or to wait for later generations, when it might be safer. Selfishly, it may make sense for the current generation to gamble. Suppose the odds are 50% doom and 50% a 100x life extension: it could make sense for someone to risk ending their own life for a 50% chance to live 10,000 years, for instance.
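For concreteness, here is a rough expected-value comparison (the numbers are purely illustrative assumptions, not estimates): if waiting leaves someone roughly 50 expected years of remaining life, while gambling means 50% death and 50% a 10,000-year lifespan, then E[wait] ≈ 50 years versus E[gamble] = 0.5 × 0 + 0.5 × 10,000 = 5,000 years. Under these assumed numbers, the selfish expected value of gambling dominates by two orders of magnitude, even though the species-level calculus may strongly favour waiting.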
From the species’ perspective, deferring the choice for several generations, to minimize the likelihood of doom, makes sense.
Has much consideration been given to this possible conflict of interest, arising because the risk of doom from AI is borne across all future generations, while the benefit of accelerated immortality is enjoyed only by the generations within the safety-time delta?