Thanks for your comment. It makes me realize I failed to properly communicate some of my ideas. Hopefully this comment can elucidate them.
> Better democracy won’t help much with EA causes if people generally don’t care about them
More democracy could even make things worse (see 10% Less Democracy). But a much better democracy wouldn’t, because it would do things like:
- Disentangling values from expertise (e.g., predicting which global catastrophes are most likely shouldn’t be done democratically, but rather with expert systems such as prediction markets)
- Representing the unrepresented (e.g., having a group represent the interests of non-human animals during elections)
> we choose EA causes in part based on their neglectedness
I was claiming that with the best system, all causes would be equally (not) neglected, although, as I conceded in the previous comment, this wouldn’t be entirely true because people have different fundamental values.
> Causes have to be made salient to people, and that’s a role for advocacy to play,
I think most causes wouldn’t have to be made salient to people if we had a great System. You could have something like this (with many details still to be worked out): 1) a prediction market to predict which values existing people would vote for in the future, and 2) a prediction market to predict which interventions would fulfill those values the most. Psychological research and education that help people introspect are a common good that such a System would likely finance. Also, if ‘advocacy’ just means a way of enforcing cooperative social norms, then it would be made unnecessary by solving coordination problems.
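To make the two-market idea more concrete, here is a minimal sketch in Python. This is only a toy model: all the names and numbers are hypothetical placeholders, and a real System would read these quantities off actual prediction markets rather than hard-coding them.

```python
# Market 1: predicted weight each value would receive in a future vote.
predicted_value_weights = {
    "reduce_catastrophic_risk": 0.5,
    "reduce_animal_suffering": 0.3,
    "reduce_poverty": 0.2,
}

# Market 2: predicted degree (0..1) to which each intervention fulfills each value.
predicted_fulfillment = {
    "ai_safety_research":  {"reduce_catastrophic_risk": 0.8, "reduce_animal_suffering": 0.1, "reduce_poverty": 0.1},
    "cage_free_campaigns": {"reduce_catastrophic_risk": 0.0, "reduce_animal_suffering": 0.9, "reduce_poverty": 0.0},
    "cash_transfers":      {"reduce_catastrophic_risk": 0.0, "reduce_animal_suffering": 0.0, "reduce_poverty": 0.9},
}

def score(intervention: str) -> float:
    """Expected value-fulfillment of an intervention under the predicted weights."""
    return sum(predicted_value_weights[value] * degree
               for value, degree in predicted_fulfillment[intervention].items())

# Fund each intervention in proportion to its predicted value-fulfillment,
# so no cause stays neglected relative to how much people actually value it.
total_score = sum(score(i) for i in predicted_fulfillment)
budget_shares = {i: round(score(i) / total_score, 3) for i in predicted_fulfillment}
print(budget_shares)
# -> {'ai_safety_research': 0.5, 'cage_free_campaigns': 0.3, 'cash_transfers': 0.2}
```

The point is just that once both markets exist, allocating funding becomes mechanical, and no cause needs an advocate to make it salient.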
But maybe you want to declare ideological war and aim to overwrite people’s terminal values with yours, partly killing their identity in the process. If that’s what you mean by ‘advocacy’, then you’re right that it wouldn’t be captured by the System, and ‘philanthropy’ would still be needed. But protection against such ideological attacks is a social good: it’s good for everyone individually to be protected, and I think it’s better for everyone (or at least a supermajority) that everyone have this protection rather than no one. If we let ideological wars go on, an evolutionary process will likely select for ideologies adapted to their environment, which would be worse from most currently existing people’s moral standpoint than ideological peace. Robin Hanson has written a lot about such multipolar outcomes.
Maybe pushing for altruism right now is a good patch to fund social goods under the current System. And maybe current ideological wars against weaker ideologies are rational. But I don’t think that’s the best solution in the long run.
Also relevant: Against moral advocacy.
> I’m not sure you can or should try to capture this all without philanthropy
I proposed arguments for and against capturing philanthropy in the article. If you have more considerations to add, I’m interested.
> Also, I don’t think inequality will ever be fixed, since there’s no well-defined target. People will always argue about what’s fair, because of differing values.
I don’t know. Maybe we settle on the Schelling point of splitting the Universe among all political actors (or in some other way), and this gets locked in through apparatuses like Windfall clauses (for example), so that even if some people disagree with them, they can’t change them. They could still decide to redistribute their own wealth in a way that’s more fair according to their values, though, so in that sense you’re right that there would still be a place for philanthropy.
> Some issues may remain extremely expensive to address [...] so people as a group may be unwilling to fund them, and that’s where advocates and philanthropists should come in.
I guess it comes down to inequality. Maybe someone thinks it’s particularly unfair that some people have a rare disease, and so is willing to spend more resources on it than the collective wants; they would then inject more resources into a market for this value.
Another example: maybe the Universe is split equally among everyone alive at the point of the intelligence explosion, but some people will want to redistribute some of their wealth to fulfill the preferences of dead people, or to reward those who helped make this happen.
> What is “just the right amount”?
I was thinking of something like: the amount one would spend if everyone else spent the same amount as them, repeating this process for everyone and summing all those quantities. This would only determine the resources spent on a value; how to actually use those resources would be decided by expert systems.
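As a toy illustration of that aggregation rule (this is my reading of it, and the numbers are hypothetical):

```python
# Each person reports what they would spend on a value if everyone else
# spent that same amount; the value's budget is the sum of those reports.
conditional_contributions = {
    "alice": 100.0,  # would give $100 if everyone else also gave $100
    "bob": 40.0,
    "carol": 0.0,    # doesn't care about this value
}

# Total resources allocated to the value. *How* those resources get spent
# would then be decided by expert systems such as prediction markets.
budget = sum(conditional_contributions.values())
print(f"Budget for this value: ${budget:.2f}")  # Budget for this value: $140.00
```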
> And how do you see the UN coming to fund it if they haven’t so far?
The UN would need to have more power. But I don’t know how to make this happen.
> If you got rid of Open Phil and other private foundations, redistributed the money to individuals proportionally, even if earmarked for altruistic purposes, and solved all coordination problems, do you think (longtermist) AI safety would be more or less funded than it is now?
At that point, we would have formed a political singleton, and I think a significant part of the entire world economy would be structured around AI safety. So, more.
> How else would you see (longtermist) AI safety make up for Open Phil’s funding through political mechanisms, given how much people care about it?
As mentioned above, through something like Futarchy (the two-prediction-market mechanism sketched earlier).
-----
Creating a perfect system would be hard, but I’m proposing that we move in that direction. I’ve updated toward thinking that even with a perfect system, some people would still want to redistribute their wealth, though less so than currently.