Could you say a bit more about the power law point?
A related thing I’ve been thinking about is that some kinds of deep democracy and some kinds of better futures-style reasoning (for sufficiently risk-neutral, utilitarian, confident in their moral views, etc etc etc kinds of agents, assume all the necessary qualifiers here) will end up being in tension — after all, why compromise between lots of moral views when this means you miss out on a bunch of feasible moral value? (More precisely, why choose the compromise it’s-just-ok future when you could optimise really hard according to the moral view you favour and have some small chance of getting almost all feasible value?)
I think that some versions of the power law point might make moral compromise look more appealing, which is why I’m interested. (I’m personally on team compromise!)
Yes! That is very close to the kind of idea that drove me from utilitarian supervillain towards deep democracy enjoyer. I do think it’s worth reading the whole post (it’s short), but in brief:
On a bunch of moral worldviews there are some futures that are astronomically more valuable than others, and they are not valued to nearly that extent in the world today, leading to the possibility of losing out on almost all value
For example, maybe you’re a hedonium fan and you think hedonium is 10^20 times more valuable than whatever matter is turned into by default; if you can’t get any hedonium, then the future you expect is like 10^20 times worse than it could be… ~all value has been lost
One way to hedge against this possibility is essentially a kind of worldview diversification, where you give power to a bunch of different moral views that then each maximize their own goals
Then if you’re someone with a minority viewpoint you at least get some of the world to maximize according to your values, so maybe you capture 1/1000th of the feasible value instead of 1/10^20
This only works if the democratic procedure is genuinely sensitive to minority views, and not one that, say, maximizes whatever 51% of people want (which leads to zero hedonium)
So if you have extremely scope-sensitive, fragile values that society doesn’t share (which… probably all of us do?) then you do much better with Deep Democracy than with Normal Democracy and, arguably, than you do with a coup from an arbitrary dictator.
If you have a lot of power, then gambling on a chance at dictatorship (say, proportional to your power) could be worth it and incentivized, and I think it’s important to be realistic about that to understand how the world could unfold. But there are a lot of other downsides to it, like wasteful races and the chance of being cut out of the pie as punishment for your societal defection, which do favour more democratic approaches.
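To make that trade-off concrete, here's a minimal back-of-the-envelope sketch; all the specific numbers (the 1/1000 share, the win probabilities, the 10^-20 default) are illustrative assumptions for this comparison, not figures from the post:

```python
# Illustrative expected-value comparison: a guaranteed compromise share
# vs. a winner-takes-all gamble on dictatorship. Numbers are made up.

FEASIBLE_VALUE = 1.0  # normalize "almost all feasible value" to 1


def ev_compromise(share: float) -> float:
    """Deep-democracy style: you reliably get `share` of the future."""
    return share * FEASIBLE_VALUE


def ev_gamble(win_prob: float, default_value: float = 1e-20) -> float:
    """Dictatorship gamble: win everything with probability `win_prob`,
    otherwise end up with the near-worthless default future."""
    return win_prob * FEASIBLE_VALUE + (1 - win_prob) * default_value


# A minority view guaranteed 1/1000th of the compromise pie...
compromise = ev_compromise(1 / 1000)

# ...versus gambling, with a win chance proportional to its power:
gamble_weak = ev_gamble(1 / 10_000)   # win chance below pie share
gamble_strong = ev_gamble(1 / 100)    # win chance above pie share

print(compromise, gamble_weak, gamble_strong)
```

For a risk-neutral maximizer the gamble beats the compromise exactly when the win probability exceeds the compromise share, which is why actors with lots of power face the temptation to defect, and why actors with little power should prefer the compromise.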