Thanks for writing this up! I agree that this is a relevant argument, even though many steps of the argument are (as you say yourself) not airtight. For example, consciousness or suffering may be related to learning, in which case point 3) is much less clear.
Also, the future may contain vastly larger populations (e.g. because of space colonization), which, all else being equal, may imply (vastly) more suffering. Even if your argument is valid and the fraction of suffering decreases, it’s not clear whether the absolute amount will be higher or lower (as you claim in 7.). For instance, if the total population grows a thousandfold while the fraction of beings who suffer falls only tenfold, absolute suffering still increases a hundredfold.
Finally, I would argue we should focus on the bad scenarios anyway – given sufficient uncertainty – because there’s not much to do if the future will “automatically” be good. If s-risks are likely, my actions matter much more.
(This is from a suffering-focused perspective. Other value systems may arrive at different conclusions.)
Thanks for the response!
It would be surprising to me if learning required suffering, but I agree that if it does then point (3) is less clear.
Good point! I rewrote it to clarify that there is less net suffering.
Where I disagree with you the most is your statement “there’s not much to do if the future will ‘automatically’ be good.” Most obviously, we have the difficult (and perhaps impossible) task of ensuring the future exists at all (maxipok).
The Foundational Research Institute site in the links above seems to have a wealth of writing about the far future!