I think experimentation with new approaches is good, so for that reason I’m a fan of this.
When I evaluate your actual arguments for this particular mechanism design though, they seem quite weak. This makes me worry that, if this mechanism turns out to be good, it will only be by chance, rather than because it was well designed to address a real problem.
To motivate the idea you set up a scenario with three donors, varying dramatically in their level of generosity:
Donors 1 and 3 both think animals matter a lot, but Donor 3 is skeptical of the existing charities. Donor 1 doesn’t have access to the information that makes Donor 3 skeptical. It’s unclear if Donor 3 is right, but aggregating their beliefs might better capture an accurate view of the animal welfare space.
Donor 2 knows a lot about their specific research area, but not other areas, so they just give within GCRs and not outside it. They’d be happy to get the expertise of Donors 1 and 3 to inform their giving.
All three are motivated by making the world better, and believe strongly that other people have good views about the world, access to different information, etc.
I struggle to see how this setup really justifies the introduction of your complicated donation pooling and voting system. The sort of situation you described already occurs in many places in the global economy—and within the EA movement—and we have standard methods of addressing it, for example:
Donor 3 could write an article or an email about their doubts.
Donor 1 could hire Donor 3 as a consultant.
Donor 1 could delegate decisions to Donor 3.
Donor 2 can just give to GCR; this seems fine, since they are a small donor anyway.
They could all give to professionally managed donation funds like the EA funds.
What all of these have in common is they attempt to directly access the information people have, rather than just introducing it in a dilute form into a global average. The traditional approach can take a single expert with very unusual knowledge and give them major influence over large donors; your approach gives this expert no more influence than any other person.
This also comes up in your democracy point:
Equal Hands functions similarly to tax systems in democracies — we don’t expect people who pay more in taxes to have better views about who should be elected to spend that tax money.
The way modern democratic states work is decidedly not that everyone can determine where a fraction of the taxes go if they pay a minimum of tax. Rather, voters elect politicians, who then choose where the money is spent. Ideally voters choose good politicians, and these politicians consult good experts.
One of the reasons for this is that it would be incredibly time consuming for individual voters to make all these determinations. And this seems to be an issue with your proposal also—it simply is not a good use of people’s time to be making donation decisions and filling in donation forms every month for very small amounts of money. Aggregation, whether through large donors (e.g. the donation lottery) or professional delegation (e.g. the EA funds), is the key to efficiency.
The most bizarre thing to me however is this argument (emphasis added):
Donating inherently has huge power differentials — the beliefs of donors who are wealthier inevitably exerts greater force on charities than those with fewer funds. But it seems unlikely that having more money would be correlated with having more accurate views about the world.
Perhaps I am misunderstanding, or you intended to make some weaker argument. But as it stands, your premise here, which seems important to the entire endeavor, looks overwhelmingly likely to be false.
There are many factors which are correlated both with having money and having accurate views about the world, because they help with both: intelligence, education, diligence, emotional control, strong social networks, low levels of chronic stress, low levels of lead poisoning, low levels of childhood disease… And there are direct causal connections between money and accurate views, in both directions: having accurate views about the world directly helps you make money (recognizing good opportunities for income, avoiding unnecessary costs, etc.), and having money helps you gain more accurate views about the world (access to information, a more well-educated social circle, etc.).
Even absent these general considerations, you can see it just by looking at the major donors we have in EA: they are generally not lottery winners or football players, they tend to be people who succeeded in entrepreneurship or investment, two fields which require accurate views about the world.
Nice! This is great pushback! I think that most of my would-be responses are covered by other people, so I will add one thing just on this:
Even absent these general considerations, you can see it just by looking at the major donors we have in EA: they are generally not lottery winners or football players, they tend to be people who succeeded in entrepreneurship or investment, two fields which require accurate views about the world.
My experience doesn’t match this. I have probably engaged with something like ~15 donors giving >$1M in EA or adjacent fields. Doing a brief exercise in my head of thinking through everyone I could, I got to something like:
~33% inherited wealth / family business
~40% seem like they mostly “earned it”, in the sense that they started a business or did a job well, climbed the ranks in a company due to their skills, etc. To be generous, I’m also including people here who were, say, early investors in crypto, where they made a good but highly speculative bet at the right time.
~20% seem like they did a lot of very difficult work, but also seem to have gotten really, really lucky—e.g. grew a pre-existing major family business a lot, were roommates with Mark Zuckerberg, etc.
Obviously we don’t have the counterfactuals on these people’s lucky breaks, so it’s hard to guess what the world looks like where they didn’t have them, but I’d guess they’d be at a much lower giving potential.
~7% I’m not really sure about.
So I’d guess that even trying to do this approach, only something like 50% of major donors would pass this filter. Though it seems possible luck also played a major role for many of those 50% too and I just don’t know about it. I’m surprised you find the overall claim bizarre, because to me it often feels somewhat self-evident from interacting with people at different wealth levels within EA, where the best calibrated people often seem to be mid-level non-executives at organizations, who don’t have the information distortions that come with power but do have deep networks / expertise and a sense of the entire space. I don’t think ultra-wealthy people have worse views, to be clear — just that wealth and having well-calibrated, thoughtful views about the world seem unrelated (or, to the extent they are correlated, those differences stop being meaningful below the wealth of the average EA donor), and certainly a default of “cause prioritization is directly downstream of the views of the wealthiest people” is worse than many alternatives.
I strongly agree about the clunkiness of this approach though, and many of the downsides you highlight. I think in my ideal EA, lots of experiments like this would be tried, the good ones would survive and iterate, and EAs would generally experiment with different models for distributing funding; this is my humble submission to that project.
I think it’s important to separate out the critical design features from the specific instantiation—this is a six-month prototype that can run on $0 with a reasonable amount of a single person’s volunteer labor. Like most no-budget volunteer efforts, it is likely going to be a bit clunky (e.g., “filling in donation forms every month for very small amounts of money”). Having a 501(c)(3) efficiently distribute out the money in a centralized manner would be ideal; it would also take a good bit of time and money to set up. It makes sense to run the clunky prototype first, get some of the bugs out, and then seek commitments of time and money to set up a more efficient infrastructure if the trials are promising enough.
What all of these have in common is they attempt to directly access the information people have, rather than just introducing it in a dilute form into a global average.
How effectively does it succeed in incorporating and weighing all that information, though? As an intuition pump: if the current system did so perfectly, it shouldn’t matter which Donor was the million-dollar donor and which were the small-dollar donors.
The traditional approach can take a single expert with very unusual knowledge and give them major influence over large donors; your approach gives this expert no more influence than any other person.
This, of course, requires the large donor(s) to recognize the expert’s expertise. Likewise, all of your examples rely on Donor 1 picking the right person to be persuaded by, to hire as a consultant, etc.
Rather, voters elect politicians, who then choose where the money is spent. Ideally voters choose good politicians, and these politicians consult good experts.
But they don’t pick generically “good” politicians—they pick ones who line up with their preferences on some big-picture questions (which can be seen as analogous to cause prio within cause areas here, or maybe even more specific than that). In this way, the preferences of the wealthy taxpayer (theoretically) don’t get more weight than those of the pauper in the politician’s decisions, and then the details get worked out by technocrats.
Of course, an outfit like EA Funds could do something like this if desired—monies flowing into a Democratic Allocation Fund could be distributed amongst the existing cause-area funds based on some sort of democratic allocation algorithm.
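To make this concrete, here is a minimal sketch of what such an algorithm might look like; the fund names, the points-based ballot format, and the function itself are hypothetical, invented purely for illustration:

```python
# Hypothetical sketch of a one-person-one-vote "Democratic Allocation Fund".
# Each participant splits 100 points across cause-area funds; every ballot
# carries equal weight regardless of how much money its author contributed.

def democratic_allocation(ballots, pool):
    """ballots: list of dicts mapping fund name -> points.
    pool: total dollars contributed. Returns dollars allocated per fund."""
    totals = {}
    for ballot in ballots:
        scale = 100 / sum(ballot.values())  # normalize so each ballot counts equally
        for fund, points in ballot.items():
            totals[fund] = totals.get(fund, 0) + points * scale
    grand_total = sum(totals.values())
    return {fund: pool * pts / grand_total for fund, pts in totals.items()}

# A $1M donor and two $100 donors get identical influence over the split:
ballots = [
    {"Global Health": 100},             # the $1,000,000 donor
    {"Animal Welfare": 50, "GCR": 50},  # a $100 donor
    {"Animal Welfare": 100},            # another $100 donor
]
print(democratic_allocation(ballots, pool=1_000_200))
# -> {'Global Health': 333400.0, 'Animal Welfare': 500100.0, 'GCR': 166700.0}
```

Obviously a real fund would need eligibility rules, anti-fraud measures, and so on; the sketch only illustrates the equal-weighting step.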
There are many factors which are correlated both with having money and having accurate views about the world, because they help with both [...]
I don’t think zero (or even particularly low) correlation is necessary for this project to make sense.
If one were shown 5,000 people in a crowd and were required to make a personally important decision based on their judgment about the world, while knowing nothing other than their income/wealth, I submit that the optimal decision rule would be neither (a) weighting all 5,000 views evenly, nor (b) giving predominant weight to the very richest people in the bunch and very little to the bottom 80% (or whatever). But: if I knew that a determination had already been made to make the bulk of the decision using rule (b), it would often make sense to use rule (a) on the margin that I could control.
In addition to questioning how strong the (money:good cause prio) correlation is, I am pretty confident it is not remotely linear. Suppose we somehow knew that it made sense to give five times the weight to the views of someone who makes $250K/year as to someone who makes $50K/year (which is already doubtful to me). I would expect a much more modest ideal weighting between $250K and $1.25MM, and an even more modest ideal weighting between $1.25MM and $6.25MM, etc. Yet the current system gives greater prominence to the higher intervals (in absolute terms).
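To put rough numbers on that intuition, here is a small sketch (with invented incomes, and a log weighting chosen only as one example of a sublinear rule) comparing proportional, dollar-weighted influence against a diminishing-weight alternative:

```python
import math

# Invented incomes for four hypothetical donors, each 5x the last.
incomes = {"A": 50_000, "B": 250_000, "C": 1_250_000, "D": 6_250_000}

# Proportional weighting: what dollar-weighted giving implicitly does.
# Each 5x step in income multiplies influence by 5.
total = sum(incomes.values())
proportional = {k: v / total for k, v in incomes.items()}

# A sublinear alternative: weight by log(income), so each 5x step adds
# the same fixed increment of influence instead of multiplying it.
logs = {k: math.log(v) for k, v in incomes.items()}
log_total = sum(logs.values())
sublinear = {k: v / log_total for k, v in logs.items()}

for donor in incomes:
    print(f"{donor}: proportional={proportional[donor]:.1%}, "
          f"log-weighted={sublinear[donor]:.1%}")
# Donor D's share of influence falls from ~80% to ~30%,
# while Donor A's rises from ~0.6% to ~20%.
```

The specific log rule is arbitrary; the point is only that any sublinear weighting compresses the influence gap between the higher intervals far more than dollar-proportional giving does.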
Finally, donation decisions can be significantly driven by donors’ somewhat idiosyncratic preferences—cf. Good Ventures’ recent decision to stop funding various subcauses for reasons unrelated to any determination it made about those subcauses’ effectiveness. Those preferences may well be anti-correlated with effectiveness insofar as highly neglected causes may pose more PR headaches. Not having their own private foundations, smaller donors can donate as they honestly see best without having to face the risk of external PR backlash. Even if idiosyncratic preferences were no more prevalent among the wealthy, it is probably better to dilute them rather than have so much riding on those of the top few people.
Second point within this comment I’m interested in discussing: If I’m summarizing you correctly, you think standard methods of addressing the problem (“cause allocation in EA is controlled by a few rich people who might not make good decisions”) make Equal Hands an unnecessary project.
First: I agree with you that the current donation pooling/voting process is not optimal. Hopefully in the six months of the trial a more streamlined option will be found. A fund seems good; knowing the annoyance of setting up an appropriate 501(c)(3), and considering the international nature of EA, I understand why Abraham didn’t go that route before determining whether there was any interest in the project, but if it succeeds I think creating a fund would be good.
If a fund is created, the main difference between the Equal Hands concept and EA Funds is that typical EA funds don’t address at all the issue of larger donors having more influence. Yes, experts decide where the amounts within the buckets go. But if one billionaire likes GCR and no billionaires like animal welfare, there is no mechanism to democratize the distribution between pools. It may be that you don’t care about that, but assuming you do, do you see EA Funds as addressing that issue in some way that I am missing?
Second: I agree that a certain amount of Donor 1 hiring Donor 3 as a consultant, or being convinced by a persuasive argument, or similar goes on in EA (at least, much more than outside of EA). But the examples you give involve such small amounts of decision-making sharing. If you endorse the general rule that larger groups of decision makers tend to make better decisions than small groups, even when the small groups are composed of experts (which I think there is quite a bit of evidence for?), then a much more robust democratization seems good.
There’s a lot to discuss in this comment so it might be worth unpacking responses into sections. For myself, I’m most interested in your assertion that money is well-correlated with having more accurate views about the world.
I think you’re correct that there is some connection between “accurate views in a domain” and “success in that domain” on average. But I think the main driver of that connection is a correlation at the low end (e.g., people with really faulty pictures of reality are not successful), with little correlation outside of that range.
In the case of wealth, while we might expect that being well-attuned to reality is helpful, “well-attuned to reality” is not a real trait (or if it is, it’s extremely rare): most people are well-attuned to parts of reality and not others. Furthermore, wealth in most societies is highly driven by being lucky enough to be born into a particular family. So at the end of the day, we shouldn’t expect the donors with the most money to generally have the best views on what to do with it.
In particular, I think the dynamics of charity make this lack of correlation even more problematic, because the wealthiest folks have disproportionately more control over what happens in charity than the just-relatively-well-off folks, and we particularly shouldn’t expect that being wildly wealthy is a good predictor of being good at figuring out which charities are most impactful. Being insanely wealthy is probably even more luck-driven than being successful in a normal way, and the more insanely wealthy you are, the more likely you are to have charities trying to sell themselves to you, and the worse your access to information about them will be.
Just to reality-test my mental model against my own experience: you suggest looking at the major donors in EA. By and large, my experience in EA is that there is not really a correlation between wealth and having good ideas about charity. I meet a lot of wealthy people in my job, and they are often shockingly out of touch. Maybe they were better calibrated before they got wealthy, but becoming insanely wealthy reduces how honest people are with you, and makes your life look so different from normal that I expect you forget what normal is. Often, the people in EA who I think make the best calls are mid-tier employees of EA orgs, who are both thoughtful and have great insider info.
Even beyond that, EA major donors are a small selection of rich people in general, who by and large I think make absolutely terrible decisions about charity (and I expect you think that also, since you’re on the EA forum). So even if I wanted to grant you that these rich people might have accurate views within their domain, I wouldn’t grant that that makes them better at choosing charities.
Basically, my overall point is that (1) really wealthy people are probably mostly really wealthy by chance of circumstance; (2) if not chance, and it is domain expertise in the area of their success, that doesn’t largely transfer to success in choosing charities, and (3) based on my experience of EA, wealthy EAs are no more likely to make good decisions than non-wealthy EAs. So I’m comfortable endorsing the idea that having more money is not generally a good predictor of having great ideas about charity.
I don’t really want to get into an argument here about whether extreme wealth is largely luck-driven, or how much success in one domain translates to success in others, since I believe people tend to be firmly entrenched in one view or another on those topics and it could distract from the main topic of the Equal Hands experiment. My intention is merely to illustrate why someone might endorse the original statement.
rich people in general, who by and large I think make absolutely terrible decisions about charity
I think this follows from a more general fact about people. If anything, I would guess that there’s a positive correlation between wealth and EA values: that a higher (though still depressingly low) proportion of wealthy people donate to effective causes than is true of the general population? Would be interesting to see actual data, though.