SBF has, of late, declared that his public proclamations on charity were merely a facade. "Dumb game we woke westerners play where we say all the right shibboleths so people will like us," he wrote in a direct Twitter message to a journalist.
It's not clear whether SBF was saying "the whole EA/utilitarianism thing was just a front; what I actually care about is myself" (as you seem to suppose) or "all the stuff I was saying to regulators was just a front; what I actually care about is utilitarianism". (This is not a counterargument; I'm just pointing out the ambiguity.)
The Oxford philosopher William MacAskill's 2015 book, Doing Good Better, expounds the central idea of effective altruism. [...] As it turns out, effective altruism in practice is not all that straightforward. Inconsistencies, both moral and intellectual, abound. We now know that the generous SBF, in fact, defrauded FTX customers to the tune of billions of dollars. [...] There is an evident self-serving pattern to SBF's donations. In the lead up to the senate elections this year, SBF spent around $40 million. Most of this money went to Democrats who form subcommittees on crypto currencies, digital assets, and commercial law.
I think the implication here is that EA is to some extent self-serving. I don't think SBF's political donations are good evidence of that. I think some of SBF's spending was for pandemic prevention and other EA reasons, which seems pretty good to me (or would've been had the money not been gained through fraud). But some of his spending was likely for business purposes, e.g. to get better-for-FTX crypto regulation, and not guided by EA principles.
I do think it was a problem that it wasn't clear how Protect Our Future (SBF's PAC) selected its candidates, a point raised here. It's still not clear to me to what extent the recipients of SBF's political donations support pandemic prevention/other EA causes vs are pro-crypto (except Carrick Flynn, who was definitely more pro-EA-stuff than pro-crypto).
SBF's Future Fund donated $36.5 million to Effective Ventures, a charity chaired by friend and mentor MacAskill. It is unclear what the basis of this donation was.
As AllAmericanBreakfast points out, this is actually an EA argument. If you think there are more cost-effective ways of spending money, we will listen to your argument! But then you need to actually make the argument. (Effective Ventures works on supporting the EA community. The argument for why it's good to spend resources on EA movement-building is available online and can be freely refuted.)
Was there a randomised, controlled trial (RCT) to decide if this was the best use of the money?
RCTs are useful, but it isn't always possible or practical to run them. Overemphasis on RCTs and other forms of hard evidence is a criticism that has repeatedly been leveled against EA itself.
Take for example something obviously good, like developing Covid vaccines in early 2020. Of course there were plenty of RCTs later on testing whether the vaccines were safe and effective. But there was no RCT in March 2020 testing whether investing in Covid vaccines was a good idea, and there would have been no reason to run one even if it had been possible. Developing Covid vaccines was just great in expectation. You have previous examples of vaccines being effective; you have previous examples of diseases like Covid being costly in terms of lives and money; you can put two and two together.
I think ultimately EA is doing ~the right thing here: trying to get the best evidence available and funding the causes that are best in expectation, while being open to changing course in light of new evidence. For example, GiveWell mainly focuses on expected value, but its Top Charities Fund excludes grants that are riskier (less certain).
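To make "best in expectation" concrete, here's a minimal sketch with made-up numbers (both interventions and all figures are hypothetical, purely for illustration):

```python
# Toy expected-value comparison. All numbers are made up for illustration.

def expected_value(outcomes):
    """Sum of probability * value over mutually exclusive outcomes."""
    return sum(p * v for p, v in outcomes)

# (probability, lives saved per $1M spent) -- hypothetical figures
well_evidenced = [(1.0, 200)]              # certain, modest impact
speculative = [(0.95, 0), (0.05, 10_000)]  # usually fails, occasionally huge

print(expected_value(well_evidenced))  # 200.0
print(expected_value(speculative))     # 500.0
```

On these made-up numbers the speculative intervention is better in expectation even though it will most likely achieve nothing; that's exactly the trade-off behind something like the Top Charities Fund deliberately excluding the riskier grants.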
More to the point, can any scientific study speak to the means through which the money was earned in the first place?
Again, sometimes we don't have scientific studies (or, more often, we do have them but they are flawed or noisy) and have to make decisions under considerable uncertainty. We can make judgments with the evidence we do have. (For what it's worth, in my judgment, and generally speaking, making money via honest crypto brokerage is only very mildly harmful, if at all, whereas making money via fraud is highly harmful.)
MacAskill was an "unpaid" adviser to Future Fund, a post from which he claims to have resigned.
This seems to suggest that MacAskill may have been paid by the Future Fund, that he may not have resigned, and that he more broadly has significantly benefited financially from FTX/EA. In fact, MacAskill wasn't paid by the Future Fund, he did resign, and he donates everything above £26K a year.
SBF's Future Fund donated $36.5 million to Effective Ventures, a charity chaired by friend and mentor MacAskill. [...] MacAskill was an "unpaid" adviser to Future Fund, a post from which he claims to have resigned. But most would insist that $36.5 million was, in fact, paid.
I'm not sure what this means. Are you saying that SBF's donation to Effective Ventures was a payment to MacAskill? That makes no sense to me. It was a payment to Effective Ventures; it did not go into MacAskill's bank account. (I don't know whether MacAskill is paid for his work on the EV board; he should be, but either way, again, he donates everything above £26K a year.)
In the wake of the collapse, MacAskill has distanced himself from SBF. In a series of tweets, the philosopher declared that effective altruism is not above common sense moral constraints. Avoiding conflicts of interest, it would appear, is not one of them.
This also makes no sense to me. What conflict of interest? As far as I can tell, MacAskill had one interest with regard to FTX: to use it to do good in the world. What's the evidence that he has used it to financially benefit himself or anything of that sort?
In any case, the fact that MacAskill failed to spot SBF's conceit despite a 10-year long association, is deeply damaging to him. Why should we take his views on existential threats facing humanity that he calls to our attention in his latest book What We Owe The Future with any seriousness? If MacAskill cannot predict threats to his own near-term future, namely, the threat associating with SBF posed to his own reputation and the effective altruism movement, how well can he estimate, much less affect, the million-year prospects of humanity?
This argument has also been made by Tyler Cowen: "Hardly anyone associated with Future Fund saw the existential risk to … Future Fund, even though they were as close to it as one could possibly be. I am thus skeptical about their ability to predict existential risk more generally, and for systems that are far more complex and also far more distant."
It's unclear to me what exactly MacAskill should've predicted. Should he have suspected that SBF was committing fraud in particular? That seems really hard to have known. He did consider the possibility that FTX might crash, but this doesn't seem like a "threat to his own near-term future".
While MacAskill and other EAs considered the hypothesis that FTX could crash, I think EAs didn't really consider the hypothesis that it would crash due to unambiguously criminal activities like fraud. What does that tell us about EAs and existential risks? I think not much about EAs' abilities to evaluate existential risks, but maybe it suggests there are existential risks that EAs haven't even considered: unknown unknowns.
Btw, I think whether other people endorse these arguments should play only a relatively small part in whether you believe them. You can also actually read the arguments and make up your own mind about how sound they seem.
Couched in scientific vocabulary and probability estimates [...]
Precision is important when trying to think and communicate clearly.
[Ida Sophia Scudder] set up the world class Christian Medical College, Vellore (CMC). This venerable institution, among others, pioneered India's polio elimination drive, introduced novel leprosy reconstructive surgery techniques to the world, and does cutting-edge research in multiple clinical medicine specialties. What would MacAskillian calculations make of CMC's modest beginning as a single-bed clinic?
I'm not sure what the implication is here. That EA wouldn't recognise how good the CMC is, because it was only a fledgling organisation at that time? But EA is in the business of founding new charities, which always start small.
Altruism, whether effective or otherwise, ultimately distorts community. Altruism is unidirectional "giving" and "receiving" that treats people as a problem to be fixed or a metric to be improved upon. But communities are fostered through sharing as equals.
Sharing as equals sounds great. But the world isn't fair, and we're not all equal in our needs and capabilities. Some of us have much more than others, and some of us need much more. If we only give in our communities, we exclude people who would receive no help otherwise, people who don't have wealthy people in their communities with whom to share "as equals". If everyone only helps within their own community, those people will just be left out in the cold.

Inequalities between countries are far more substantial than inequalities within countries. As this tweet puts it: "Would you rather donate to an American with $500K income, or one at the poverty line of $13,590? That's roughly the income difference between someone at the US poverty line and an average GiveDirectly Africa recipient (~$1/day)."
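As a rough sanity check on those numbers (the poverty-line figure is the tweet's; I'm using $365/year as a stand-in for "~$1/day"):

```python
# Rough check on the income gaps mentioned above.
high_income = 500_000         # the $500K earner from the tweet
us_poverty_line = 13_590      # US poverty line figure cited in the tweet
givedirectly_recipient = 365  # ~$1/day, a rough stand-in figure

print(high_income / us_poverty_line)             # ~36.8
print(us_poverty_line / givedirectly_recipient)  # ~37.2
```

So the gap between someone at the US poverty line and a typical GiveDirectly recipient really is about as large (roughly 37x) as the gap between a $500K earner and someone at the US poverty line.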
Shanti Bhavan, a below K-12 boarding school in Baliganapalli, admits 30 students from poor backgrounds every year and supports their learning up to university. At the end of their 17-year engagement with the school, a remarkable 97% of these kids find full-time employment. A year-long sponsorship of a child here costs $2,000. The alternative to effective altruism is a techie from, say, Chennai sponsoring a child in this school over delivering 4,000 deworming treatments in Kenya for the same cost. Not based on notions of effectiveness but because the techie's own son and the child she sponsors might become friends.
In my opinion, the reason to help someone isn't that they may become friends with your son; it's simply that they need help and can be helped.
There are opportunity costs to spending money. So yeah, you can spend $6K to put a child through three years of K-12 boarding school, or you can spend $5K to save a child's life. I think both of these things are good, but one of the two is clearly better. Every time you choose to put a kid through three years of K-12, you let another kid die. That's a horrible observation, but nevertheless one we need to reckon with.
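Here is a minimal sketch of the arithmetic behind that comparison, using the figures above (the $5K cost-per-life-saved is the rough figure I'm using in the text, not a precise estimate):

```python
# Opportunity-cost comparison using the figures from the text.
sponsorship_per_year = 2_000  # annual Shanti Bhavan sponsorship cost
years = 3
cost_per_life_saved = 5_000   # rough cost to save one life via a top charity

boarding_school_cost = sponsorship_per_year * years
print(boarding_school_cost)                        # 6000
print(boarding_school_cost / cost_per_life_saved)  # 1.2
```

In other words, three years of sponsorship costs about as much as saving a bit more than one life.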