Thanks for your comment; it helped me clarify my model to myself.
especially politically unempowered moral beings
It proposes a lot of different voting systems to prevent (human) minorities from being oppressed.
I could definitely see them developing systems to include future/past people.
But I agree they don’t seem to tackle beings that aren’t capable (at least in some ways) of representing themselves, like non-human animals and reinforcement learners. Good point. It might be a blind spot for that community.
or many of the EA causes
Such as? Can you see other altruistic uses of philanthropy besides solving coordination problems, politically empowering moral beings, and fixing inequality? Although maybe that assumes preference utilitarianism. With pure positive hedonistic utilitarianism, wanting to create more happy people is not really a coordination problem (to the extent that most people are not positive hedonistic utilitarians), nor about empowering moral beings (i.e., happiness is mandatory), nor about fixing inequalities (nor an egoistic preference).
Maybe it can make solving them easier, but it doesn’t offer full solutions to them all, which seems to be necessary for making philanthropy obsolete.
Oh, I agree solving coordination failures to finance public goods doesn’t solve the AI safety problem, but it does solve the AI safety funding problem. In that world, the UN would arguably finance AI safety at just the right amount, so there would be no need for philanthropists to fund the cause. In that world, $1 at the margin of any public good would be just as effective. And egoistic motivations to work in any of those fields would be sufficient. Although maybe there are market failures that aren’t coordination failures, like information asymmetries, in which case there might still be a use for personal sacrifices.
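To spell out the claim about margins (my formalization, not part of the original comment): if a budget $B$ is allocated as $g_1, \dots, g_m$ across public goods to maximize welfare $W(g_1, \dots, g_m)$ subject to $\sum_j g_j = B$, then at an interior optimum

$$\frac{\partial W}{\partial g_j} = \frac{\partial W}{\partial g_k} \quad \text{for all goods } j, k,$$

so a marginal philanthropic dollar would do equally well wherever it was added.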
Such as? Can you see other altruistic uses of philanthropy besides solving coordination problems, politically empowering moral beings, and fixing inequality?
Better democracy won’t help much with EA causes if people generally don’t care about them, and we choose EA causes in part based on their neglectedness, i.e. the fact that others don’t care enough. Causes have to be made salient to people, and that’s a role for advocacy to play; when they remain neglected after that, that’s where philanthropy should come in. I think people would care more about animal welfare if they had more access to information and were given opportunities to vote on it (based on ballot initiatives and surveys), but you need advocates to drive this, and I’m not sure you can or should try to capture this all without philanthropy. Most people don’t care much about the x-risks EAs are most concerned with, and some x-risks are too difficult for the average person to understand well enough to care about.
Also, I don’t think inequality will ever be fixed, since there’s no well-defined target. People will always argue about what’s fair, because of differing values. Some issues may remain extremely expensive to address, including some medical conditions, and wild animal welfare generally, so people as a group may be unwilling to fund them, and that’s where advocates and philanthropists should come in.
Oh, I agree solving coordination failures to finance public goods doesn’t solve the AI safety problem, but it does solve the AI safety funding problem. In that world, the UN would arguably finance AI safety at just the right amount, so there would be no need for philanthropists to fund the cause. In that world, $1 at the margin of any public good would be just as effective. And egoistic motivations to work in any of those fields would be sufficient. Although maybe there are market failures that aren’t coordination failures, like information asymmetries, in which case there might still be a use for personal sacrifices.
What is “just the right amount”? And how do you see the UN coming to fund it if they haven’t so far?
I don’t think AI safety’s current and past funding levels were significantly lower than otherwise due to coordination failures, but rather information asymmetries, like you say, as well as differences in values, and differences in how people form and combine beliefs (e.g. most people aren’t Bayesian).
If you got rid of Open Phil and other private foundations, redistributed the money to individuals proportionally, even if earmarked for altruistic purposes, and solved all coordination problems, do you think (longtermist) AI safety would be more or less funded than it is now?
How else would you see (longtermist) AI safety make up for Open Phil’s funding through political mechanisms, given how much people care about it?
Thanks for your comment. It makes me realize I failed to properly communicate some of my ideas. Hopefully this comment can elucidate them.
Better democracy won’t help much with EA causes if people generally don’t care about them
More democracy could even make things worse (see 10% Less Democracy). But much better democracy wouldn’t, because it would do things like:
Disentangling values from expertise (e.g., predicting which global catastrophes are most likely shouldn’t be done democratically, but rather with expert systems such as prediction markets)
Representing the unrepresented (e.g., having a group represent the interests of non-human animals during elections)
we choose EA causes in part based on their neglectedness
I was claiming that with the best system, all causes would be equally (not) neglected. Although, as I conceded in the previous comment, this wouldn’t be entirely true, because people have different fundamental values.
Causes have to be made salient to people, and that’s a role for advocacy to play,
I think most causes wouldn’t have to be made salient to people if we had a great System. You could have something like this (with a lot of details still to be worked out): 1) a prediction market to predict which values existing people would vote for in the future, and 2) a prediction market to predict which interventions would fulfill those values the most (a toy sketch follows below). And psychological research and education that help people introspect are common goods that would likely be financed by such a System. Also, if ‘advocacy’ is a way of enforcing cooperative social norms, then it would be covered by solving coordination problems.
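To make the two-market idea concrete, here is a toy sketch in Python. Everything in it (the cause names, the prices, the proportional-funding rule at the end) is a hypothetical illustration of the direction, not a worked-out mechanism:

```python
from dataclasses import dataclass

@dataclass
class Market:
    """A prediction market reduced to its current consensus price."""
    question: str
    price: float  # traders' estimate, in [0, 1]

# Market type 1: what weight will existing people assign to each value
# when they vote in the future? (hypothetical causes and prices)
value_markets = {
    "animal_welfare": Market("Vote share for animal welfare", 0.2),
    "x_risk_reduction": Market("Vote share for x-risk reduction", 0.3),
    "global_health": Market("Vote share for global health", 0.5),
}

# Market type 2: how much would each intervention fulfill each value?
intervention_markets = {
    "ballot_initiative": {"animal_welfare": 0.6, "x_risk_reduction": 0.0, "global_health": 0.1},
    "ai_safety_research": {"animal_welfare": 0.0, "x_risk_reduction": 0.7, "global_health": 0.0},
    "malaria_nets": {"animal_welfare": 0.0, "x_risk_reduction": 0.0, "global_health": 0.8},
}

def score(intervention: str) -> float:
    """Expected value-weighted impact: sum over values of
    (predicted vote weight) * (predicted fulfillment)."""
    return sum(
        value_markets[value].price * fulfillment
        for value, fulfillment in intervention_markets[intervention].items()
    )

# Fund interventions in proportion to their score (one simple rule;
# a real futarchy would fund conditional on market prices instead).
budget = 1_000_000
total = sum(score(i) for i in intervention_markets)
allocation = {i: round(budget * score(i) / total) for i in intervention_markets}
print(allocation)
```

A real System would also need subsidized, conditional markets and a way to settle the value markets against actual future votes; the point is only that “predicted values × predicted fulfillment” can be turned into a mechanical funding decision.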
But maybe you want to declare ideological war, and aim to overwrite people’s terminal values with yours, hence partly killing their identity in the process. If that’s what you mean by ‘advocacy’, then you’re right that this wouldn’t be captured by the System, and ‘philanthropy’ would still be needed. But protecting ourselves against such ideological attacks is a social good: it’s good for everyone individually to be protected. I also think it’s likely better for everyone (or at least a supermajority) to have this protection for everyone rather than for no one. If we let ideological wars go on, there will likely be an evolutionary process that will select for ideologies adapted to their environment, which is likely to be worse from most currently existing people’s moral standpoint than if there had been ideological peace. Robin Hanson has written a lot about such multipolar outcomes.
Maybe pushing for altruism right now is a good patch to fund social goods in the current System. And maybe current ideological wars against weaker ideologies are rational. But I don’t think it’s the best solution in the long run.
Also relevant: Against moral advocacy.
I’m not sure you can or should try to capture this all without philanthropy
I proposed arguments for and against capturing philanthropy in the article. If you have more considerations to add, I’m interested.
Also, I don’t think inequality will ever be fixed, since there’s no well-defined target. People will always argue about what’s fair, because of differing values.
I don’t know. Maybe we settle on the Schelling point of splitting the Universe among all political actors (or in some other way), and this gets locked in through apparatuses like Windfall clauses (for example), and even if some people disagree with them, they can’t change them. Although they could still decide to redistribute their own wealth in a way that’s more fair according to their values, so in that sense you’re right that there would still be a place for philanthropy.
Some issues may remain extremely expensive to address [...] so people as a group may be unwilling to fund them, and that’s where advocates and philanthropists should come in.
I guess it comes down to inequality. Maybe someone thinks it’s particularly unfair that someone else has a rare disease, and so is willing to spend more resources on it than what the collective wants. And so they would inject more resources into a market for this value.
Another example: maybe the Universe is split equally among everyone alive at the point of the intelligence explosion, but some people will want to redistribute some of their wealth to fulfill the preferences of dead people, or will want to reward those who helped make this happen.
What is “just the right amount”?
I was thinking of something like the amount one would spend if everyone else spent the same amount as them, repeating this process for everyone and summing all those quantities. This would just be the resources spent on a value; how to actually use those resources for that value would be decided by some expert systems.
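A minimal formalization, on my reading (the benefit function $u_i$ is my own assumption): each person $i$ reports the per-person amount they would most prefer if everyone contributed it,

$$s_i = \operatorname*{arg\,max}_{s \ge 0} \big[\, u_i(n s) - s \,\big],$$

where $n$ is the number of people and $u_i$ is $i$’s benefit from total spending on the value; the amount spent on that value is then $F = \sum_{i=1}^{n} s_i$.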
And how do you see the UN coming to fund it if they haven’t so far?
The UN would need to have more power. But I don’t know how to make this happen.
If you got rid of Open Phil and other private foundations, redistributed the money to individuals proportionally, even if earmarked for altruistic purposes, and solved all coordination problems, do you think (longtermist) AI safety would be more or less funded than it is now?
At this point we would have formed a political singleton. I think a significant part of our entire world economy would be structured around AI safety. So more.
How else would you see (longtermist) AI safety make up for Open Phil’s funding through political mechanisms, given how much people care about it?
As mentioned above, using something like Futarchy.
-----
Creating a perfect system would be hard, but I’m proposing moving in that direction. I’ve updated toward thinking that even with a perfect system, there would still be some people wanting to redistribute their wealth, but less so than currently.