I think that ignoring all the value in futures where we don’t safely reach technological maturity stacks the deck against GPR, which my intuition says is better than your model suggests. This seems especially true if we hold a suffering-focused ethics (by which I mean: there is an asymmetry between suffering and happiness, such that decreasing suffering by x is better than increasing happiness by x).
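(As a rough sketch of the asymmetry I have in mind, with $w$ just an illustrative weight and not something from your model: value could be scored as $V = H - w \cdot S$ with $w > 1$, so reducing suffering $S$ by $x$ improves $V$ by $w x$, which is more than the $x$ gained from the same-sized increase in happiness $H$.)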
Including ‘bad futures’ would, I suspect, change how easy you think it is to increase the value of the future by 1⁄4 (or the equivalent). There are many different ways the future could be really bad, with enormous numbers of moral patients who suffer a lot, and averting any one of these sources of suffering seems to me more tractable than making the ‘good future’ even better (especially by some large fraction like 1⁄4). Improving the value of these ‘bad futures’ looks easier still on a suffering-focused ethics than on a symmetrical view.
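To make the comparison concrete with made-up numbers (purely illustrative, not estimates of anything): suppose the ‘good future’ is worth $+V$ and some bad future contains one source of suffering worth $-0.25V$. On a symmetric view, removing that source and boosting the good future by a quarter each change the relevant outcome by $0.25V$; on a suffering-focused view with weight $w > 1$, the removal is worth $0.25\,w\,V$. And since the bad future’s disvalue is spread across many separate sources, each individual source looks like a smaller, plausibly more tractable target than a 25% improvement to the whole good future.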
(Note: I wrote this comment with one meaning of ‘technological maturity’ in mind, but I’m now not sure that’s what you meant by it, so perhaps you would already be including the kinds of futures I have in mind. In that case, we probably just differ on how easy we think it would be to affect those futures.)