Thanks, @zdgroff! I think MPL matters most if you think that there are going to be some agents shaping things, that these agents' motivations are decisive for what outcomes are achieved, and that you might (today) be able to align these agents with tail-valuable outcomes. In that case, aligning these agents with your moral values is wildly important, and by contrast marginal improvements to the agents' motivations are relatively unimportant.
You’re right that if you don’t have any chance of optimizing any part of the universe, then MPL doesn’t matter as much. Do you think that there won’t be agents (even groups of them) with decisive control over what outcomes are achieved in (even parts of) the world?
It seems to me that in the worst case we could at least ask Dustin to try to buy one star and then eventually turn it into computronium.