Thanks Jason—very helpful. I think that to a large extent you are right: concerns about billionaire philanthropists are not unique to the EA movement. In the United States, these concerns go back at least as far as the Rockefeller Foundation, which was met with so much skepticism that the US Congress refused to charter it. Grappling with billionaire philanthropy is something that many of us must do. (How should we feel about the Koch Brothers?) In this respect, I hope that EAs can learn from the reflection on billionaire philanthropy that has taken place in other circles.
But I think one reason why EAs in particular need to think carefully about the role of billionaire philanthropy is that EA was not always funded by billionaires. In the early days of earning to give, EA was rather poorly funded, and members were encouraged to work for high salaries and donate what they could to the movement. Matters changed when the billionaires came. Now there wasn’t much need for earning to give. But billionaire philanthropy also raised some problems.
One problem is the role of donor discretion. In theory, EAs are committed to giving to the highest-impact causes. But in practice, money is guided by the wishes of leaders and high-profile donors. SBF established the FTX Foundation, which had definite views about what should be funded, and what should not, and used its wealth to push those views on the community. Was FTX funding the most valuable causes? Or was there perhaps an outsized influence of SBF’s own beliefs, and the beliefs of those closest to him?
Another challenge is that EA didn’t merely take money from SBF. EA played a large role in the rise of SBF, and helped to shape his public image after he became wealthy. From this perspective, EAs can’t simply step back and say: “well, it’s too bad that there are billionaires, but what are we to do? Turn down their donations?” After all, EAs helped to make this man a billionaire.
There are also some questions about the special obligations that billionaires might incur by virtue of what they did to get wealthy. For example, suppose that many Silicon Valley billionaires (especially cryptocurrency billionaires) made their money in industries that contributed a great deal to global warming. Can they then turn around and refuse to fund climate mitigation efforts on the grounds that other efforts are more important? Or might they have an obligation to first undo the harms they caused on their way up the ladder?
Were there any issues about billionaire philanthropy that you found especially interesting (or uninteresting) as subjects for future discussion?
I think you’re right that influence/control by megadonors is a thing. But I think almost all ways of funding charitable work have funding-source problems, so I would be interested in seeing more about whether (and if so, why) you think the funding-source problems from billionaires are worse for a charitable movement than the alternatives.
At the outset, I should note I am not a longtermist, and so criticisms that apply to all EA cause areas will be more salient/interesting to me as a reader than ones that depend on your assessment of current longtermist initiatives.
Perhaps something like earning to give, from a small army of mid-size donors (e.g., mid-six to low-seven figures a year), is the best funding source. There’s a strong argument that EA should not have diminished the value of EtG as a role. But—the fact remains that Moskovitz funneled more to charities through GiveWell recommendations than everyone else combined, per the 2021 metrics report. Another 18 donors made up about half the remainder, averaging about $7MM each. As a strong believer in GiveWell-type work, that’s a lot of impact to give up by forswearing Big Money. Also, if your movement only has a limited supply of foot soldiers, nudging a number of your best and brightest into working on the supply/logistics chain—no matter how critical that role is—necessarily trims the number you can deploy to the front lines doing direct work.
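To put those rough figures in perspective, here is a back-of-the-envelope sketch in Python. The totals are inferred from the approximate numbers cited above (18 donors at roughly $7MM each making up about half the remainder, and Moskovitz giving more than everyone else combined), not taken from the metrics report itself; treat them as illustrative assumptions only.

```python
# Back-of-the-envelope sketch only: the totals below are inferred from the rough
# figures cited in this comment, not taken from GiveWell's 2021 metrics report.

next_18_donors = 18 * 7_000_000            # roughly $126MM from the next tier of donors
implied_other_total = next_18_donors * 2   # those 18 were "about half the remainder"
moskovitz_floor = implied_other_total      # Moskovitz gave more than everyone else combined

implied_total = moskovitz_floor + implied_other_total
print(f"Implied non-Moskovitz giving: ~${implied_other_total / 1e6:.0f}MM")
print(f"Implied Moskovitz share: at least {moskovitz_floor / implied_total:.0%} of the total")
```

Even on these rough assumptions, a single donor accounts for at least half of the GiveWell-directed total, which is the scale of impact at stake in any decision to forswear Big Money.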
Relying on governments for funding has its own set of problems, and relying on hundreds of thousands of low-engagement, small-dollar donors requires you to target your efforts to what the general public will give toward—which often has nothing to do with effectiveness. The Cynic’s Golden Rule—he who has the gold, makes the rules—has nearly universal application to charity work. But as a whole, are there good reasons to think megadonors are worse taskmasters than the alternatives?
I am less convinced than many people here that EA can regularly create billionaires, but am open to changing my mind. So I’m personally less interested in the “EA helped create SBF” angle unless it can be tied to some warning or lesson for the future. Absent that, it sounds like a story about a few EA-aligned individuals who made an error in judging character (or perhaps turned a blind eye to yellow or even red flags) on a specific individual, rather than a particularly important story about the nature of the EA community itself.
I think the “special obligations” discussion is interesting. On the intellectual side (but slipping into theological metaphors), it’s not entirely clear to me why—for instance—the penance for one’s sins against the climate has to be repaid in climate-related donations, if some other form of penance would be more useful to humanity (especially to the most disadvantaged). Of course, you might feel climate work is the best use of charitable money, but each billionaire’s trail of collateral damage will be different. And there’s no a priori reason to think that the specific penance for the damage caused by a specific billionaire will ordinarily have greater-than-average social utility as a place to send donations. I can apprehend why there might be an obligation to pay the penance to the benefit of the group of people the billionaire harmed, but am having a hard time understanding why it must be repaid to ameliorate the specific way in which the billionaire harmed those people as opposed to meeting their other interests. Moreover, from a distributional perspective, a billionaire’s collateral damage may tend to be more localized to his/her high-privilege geographical area, and I worry about a principle that would often imply that one must first meet a moral duty toward relatively privileged people in the U.S. before caring about the global poor.
But the more practical question about “special obligations” is what the idea means for a charitable movement—I don’t think any of your readers are likely to be billionaires. A movement doesn’t have any real leverage over the megadonor, and I don’t think there are any good ways to fix that. Are all the charities in the world supposed to refuse to take money from Polluter Paul until he first donates a suitable amount of money for pollution remediation? If you really believe your charitable movement does a lot of good for the world, it’s morally costly to tell Polluter Paul to go give to the opera houses instead to get his reputational boost, because they will take his money without asking any questions.
Even worse (using a GiveWell-type framework because that’s my cause area), that moral cost is not borne by me. It’s mostly borne by small children in Africa—perhaps hundreds of thousands of them if the donation is big enough—who will die because I told Polluter Paul his money was too dirty for me. While I don’t cleanly identify as a utilitarian, that is a bitter pill to swallow.
Finally, you write about “rewriting the rules to make sure that philanthropic influence is used fairly, effectively, and in a way that does not disempower ordinary citizens.” I am interested in hearing more about that, but particularly in why you think that can be done in a way that doesn’t disincentivize would-be billionaire philanthropists and push them toward just buying a mega-yacht and a professional sports team instead. Unless, of course, you feel billionaire philanthropy is a net negative for the world and should be discouraged, even though it will mean considerably less philanthropy overall.
“SBF established the FTX Foundation, which had definite views about what should be funded, and what should not, and used its wealth to push those views on the community. Was FTX funding the most valuable causes? Or was there perhaps an outsized influence of SBF’s own beliefs, and the beliefs of those closest to him?”
I don’t think there is evidence that Dustin Moskovitz, Cari Tuna, or SBF had an outsized influence on the types of cause areas that the EA community worked on. Looking at the things that were discussed in the community before and after these donors came in, I can’t see much difference. The ideas of AI safety, longtermism, animal welfare, and global health are pretty old. I’m sure SBF had his own opinions on specific matters and had some influence over ways to evaluate different projects, but nonetheless, I guess the overwhelming majority of the projects funded by SBF would still be funded by the EA community if there were enough resources. My guess is many of the projects initially funded by FTX will still be funded by other donors in the community.
The key words there are “if there were enough resources.”
As a practical matter, what EA does is inevitably and heavily influenced by what gets funded. That, and what people think will get funded, influence what gets talked about at conferences, what areas new EAs go into, and so on. And, for the most part, what gets funded is ultimately up to a few people and their delegates.
Imagine a world in which SBF existed (in non-fraudulent form) and the FTX Animal Fund was handing out $150MM a year to animal-welfare organizations and peanuts to longtermism. I’d suggest that EA would already look significantly different than it did in October 2022, and would look even more different by October 2027.
I don’t think the original poster is wrong that megadonors have an outsized influence on which cause areas EA is doing significant work in.
Also, one potential EA-focused topic on billionaire funding is the particular risks posed by certain “meta” funding. Some of the benefits of that funding accrue to insiders—most people like going to fancy conferences in $15MM manor houses—with the idea that the spending will ultimately achieve more for EA’s goals than spending the money on direct work would. The benefits of much meta funding are too diffuse and indirect to be captured by GiveWell-style analyses or to be readily subject to evaluation by non-insiders. I’m not sure if other charitable movements pay so much attention to meta, but at least none I’m aware of do so as explicitly.
I suggest that there is a particular potential problem with billionaire funding of certain sorts of meta work that does not exist with billionaire funding of direct work (e.g., bednets, AI safety fellowships, etc.). I speculate that most billionaires lack the motivation to monitor and evaluate the effectiveness of meta work—it’s too complex, and each individual spend is pretty small in the billionaire’s mind. Of course, the billionaire may be relying on delegates to evaluate all of their grantees anyway. But the potential problem is that the billionaire is relying on insiders to evaluate the effectiveness of the meta work, and insiders may have a bias in favor of that work.
I don’t fund meta work (with my donations as a public-sector attorney not in EA...) as a general rule because I do not feel qualified to assess its value. But if I were a billionaire, I would probably require a “community co-pay” before giving to certain sorts of meta work. For example, I might only match funds (up to a certain point) that small/medium donors contributed specifically for conferences. Since money is fungible, I’d be using the community’s willingness to pay for this expense—rather than donate more to effective charities—as an information signal about how valuable the conferences actually were. And with the community’s skin in the game, I’d have more confidence in its capacity to police whether conference money was being spent wisely than in my own. Such a practice would also encourage what we might call intra-EA democracy—the decision about how much to fund conferences no longer depends predominantly on my judgment but significantly depends on the judgment of a number of rank-and-file EAers as well. I would submit that is a feature, not a bug.
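To make the mechanics concrete, here is a minimal sketch of how such a co-pay match could work, in Python. The function name, the 1:1 match ratio, and the dollar figures are hypothetical illustrations of the idea, not anything specified above.

```python
# Toy sketch of a "community co-pay" matching rule, assuming a 1:1 match up to a cap.
# The function name, ratio, and figures are hypothetical illustrations only.

def copay_match(community_contributions: float, match_cap: float) -> float:
    """Return the billionaire's matching grant: 1:1 on earmarked community money, capped."""
    return min(community_contributions, match_cap)

# Example: the community earmarks $400k specifically for conferences; the match is
# capped at $1MM, so the billionaire adds $400k and total conference funding is $800k.
community = 400_000
match = copay_match(community, match_cap=1_000_000)
print(f"Community: ${community:,}  Match: ${match:,}  Total: ${community + match:,}")
```

The cap is the lever that keeps the billionaire's exposure bounded, while the 1:1 ratio is what ties total conference funding to how much the rank and file are actually willing to chip in.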
Ah right, good point! I’ll try to focus more on meta funding. You’re definitely right to be suspicious of this (hard to monitor; people have bad incentives; looks like we’re spending an awful lot on it now). I’ll see what I can say about this, and please do keep thinking about this if you have more thoughts. I like your suggestion of a co-pay.