I think his use of “ceiling” may be somewhat confusing: he’s not saying that survival is near 100% (in the article he uses 80% as his example, and my sense is that this is near his actual belief). I interpret him as just meaning that we are notably higher on the vertical axis than on the horizontal one.
Man, was that unclear?
Sorry for sucking at basic communication, lol.
I see! Thanks for the clarification. It’s a fascinating argument if I’m understanding it correctly now: it could be worth substantially increasing our risk of extinction if doing so even more substantially increased our odds of capturing more of the potential value in our light cone.
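To make that concrete with purely illustrative numbers (not figures from the essay), think of it as an expected-value calculation:

$$\mathbb{E}[\text{value}] \approx P(\text{survival}) \times \mathbb{E}[\text{fraction of potential value captured} \mid \text{survival}]$$

A path with a 90% chance of survival that captures an expected 20% of the light cone’s potential value gives 0.9 × 0.2 = 0.18, while a path with only an 80% chance of survival but an expected 40% captured gives 0.8 × 0.4 = 0.32, so the riskier path still comes out ahead in expectation.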
I’m not a dedicated utilitarian, so I tend to value futures with some human flourishing and little suffering vastly higher than futures with no sentient beings. But I am actually convinced that we should tilt a little toward futures with more flourishing.
Aligning AGI seems like the crux for both survival and flourishing (as does aligning society, in the likely case that “aligned” AGI means intent-aligned AGI that takes orders from individuals). But there will be small strategic choices that emphasize flourishing rather than mere-survival futures, and I’ll lean toward those based on this discussion, because outside of myself and my loved ones, my preferences become largely utilitarian.
It should also be borne in mind that creating misaligned AGI runs a pretty big risk of wiping out not just us but any other sentient species in the light cone.
Thanks—sorry my initial post was unclear.
“I’m not a dedicated utilitarian, so I tend to value futures with some human flourishing and little suffering vastly higher than futures with no sentient beings. But I am actually convinced that we should tilt a little toward futures with more flourishing.”
See the next essay, “No Easy Eutopia,” for more on this!