I think I agree with the Moral Power Laws hypothesis, but it might be irrelevant to the question of whether to try to improve the value of the future or work on extinction risk.
My thought is this: the best future is probably a convergence of many things going well, such as people being happy on average, there being many people, the future lasting a long time, and perhaps some empirical/moral uncertainty considerations. Each of these plausibly has many components, creating a long tail. But reaching that tail would require expansive, simultaneous efforts on many fronts. In practice, even a moderately sized group of people will only make a small-to-moderate push on a single front, or very small pushes on many fronts. So the value we could plausibly affect, speaking quite loosely, does not follow a power law.
Thanks, @zdgroff! I think MPL matters most if you think that (i) there are going to be some agents shaping things, (ii) those agents' motivations are decisive for what outcomes are achieved, and (iii) you might (today) be able to align those agents with tail-valuable outcomes. In that case, aligning those agents with your moral values is wildly important, while merely marginal improvements to their motivations are, by contrast, relatively unimportant.
You’re right that if you don’t have any chance of optimizing any part of the universe, then MPL doesn’t matter as much. Do you think that there won’t be agents (even groups of them) with decisive control over what outcomes are achieved in (even parts of) the world?
It seems to me that, in the worst case, we could at least ask Dustin to try to buy one star and eventually turn it into computronium.