Related to that: your figure says "Longtermist regress IS EITHER Contraction OR Average wellbeing decrease",
but consider a certain baseline trajectory A on which
longterm population = 3 gazillion person life years for sure
average wellbeing = 3 utils per person per life year for sure,
so that their expected product equals 9 gazillion utils, and an uncertain alternative trajectory B on which
if nature’s coin lands heads, longterm population = 7 gazillion person life years but average wellbeing = 1 util per person per life year
if nature’s coin lands tails, longterm population = 1 gazillion person life years but average wellbeing = 7 utils per person per life year,
so that their expected product equals (7 x 1 + 1 x 7) / 2 = 7 gazillion utils.
Then an event that changes the trajectory from A to B is a longtermist regress since it reduces the expected utility.
But it is NEITHER a contraction NOR an average wellbeing decrease. In fact, it is BOTH an Expansion, since the expected longterm population increases from 3 to 4 gazillion person life years, AND an average wellbeing increase, since the expected average wellbeing increases from 3 to 4 utils per person per life year.
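The arithmetic above can be sanity-checked in a few lines. This is a minimal sketch (the variable names are mine, not from the post); it also makes explicit that the gap between the expected product and the product of the expectations is exactly the covariance between population and average wellbeing:

```python
# Trajectory A: certain outcome.
ev_a = 3 * 3  # expected utility = 9 gazillion utils

# Trajectory B: fair coin over (population, average wellbeing) pairs,
# in gazillion person life years and utils per person per life year.
outcomes_b = [(7, 1), (1, 7)]

ev_b = sum(p * w for p, w in outcomes_b) / 2   # E[P*W] = 7
exp_pop_b = sum(p for p, _ in outcomes_b) / 2  # E[P] = 4
exp_wb_b = sum(w for _, w in outcomes_b) / 2   # E[W] = 4

# E[P*W] = E[P]*E[W] + Cov(P, W); here the covariance is 7 - 16 = -9,
# which is why the naive product of expectations (16) overshoots.
cov_b = ev_b - exp_pop_b * exp_wb_b

print(ev_a, ev_b, exp_pop_b, exp_wb_b, cov_b)
```

The strongly negative covariance in trajectory B is the whole point of the example: both factors rise in expectation while the expected product falls.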
Ah, good point. In which case I don’t think there’s any clean way to dissolve expected utility into simple factors without making strong assumptions. Does that sound right?
Thinking about changes to expected population holding average wellbeing constant, and to expected average wellbeing holding population constant, still seems like a useful approach (the former being what I'm doing in the rest of the series), but that would make them heuristics as well, albeit higher-fidelity ones than 'existential risk' vs 'not existential risk'.
I think you are right: the distinction still makes sense, but mainly as a theoretical device to disentangle things in thought experiments, and perhaps less so in practice, unless one can argue that the correlations between population and average wellbeing are weak.