I wrote about SBF, MacAskill, and EA for The Hindu.
SBF’s Future Fund donated $36.5 million to Effective Ventures, a charity chaired by friend and mentor MacAskill. It is unclear what the basis of this donation was. Was there a randomised controlled trial (RCT) to decide if this was the best use of the money? More to the point, can any scientific study speak to the means through which the money was earned in the first place? Things get messier still. MacAskill was an ‘unpaid’ adviser to Future Fund, a post from which he claims to have resigned. But most would insist that $36.5 million was, in fact, paid. In the wake of the collapse, MacAskill has distanced himself from SBF. In a series of tweets, the philosopher declared that effective altruism is not above common-sense moral constraints. Avoiding conflicts of interest, it would appear, is not one of them.
Further:
In any case, the fact that MacAskill failed to spot SBF’s conceit despite a 10-year-long association is deeply damaging to him. Why should we take his views on existential threats facing humanity that he calls to our attention in his latest book What We Owe The Future with any seriousness? If MacAskill cannot predict threats to his own near-term future, namely, the threat that associating with SBF posed to his own reputation and the effective altruism movement, how well can he estimate, much less affect, the million-year prospects of humanity?
I also tease a counter-model for ‘charity’, one based on recognising the beneficiaries of charity as people:
Shanti Bhavan, a pre-K-to-12 boarding school in Baliganapalli, admits 30 students from poor backgrounds every year and supports their learning up to university. At the end of their 17-year engagement with the school, a remarkable 97% of these kids find full-time employment. A year-long sponsorship of a child here costs $2,000. The alternative to effective altruism is a techie from, say, Chennai sponsoring a child at this school rather than delivering 4,000 deworming treatments in Kenya for the same cost. Not based on notions of effectiveness, but because the techie’s own son and the child she sponsors might become friends.
Climate scientists are predicting the damage that climate change will cause in 80 years. Do you think they should not be trusted about this because of their inability to predict their own personal lives?
EA is apparently such a successful idea that even its critics feel compelled to use its framing to level their criticism:
“SBF’s Future Fund donated $36.5 million to Effective Ventures, a charity chaired by friend and mentor MacAskill. It is unclear what the basis of this donation was. Was there a randomised controlled trial (RCT) to decide if this was the best use of the money?”
“Why should we take his views on existential threats facing humanity that he calls to our attention in his latest book What We Owe The Future with any seriousness? If MacAskill cannot predict threats to his own near-term future, namely, the threat that associating with SBF posed to his own reputation and the effective altruism movement, how well can he estimate, much less affect, the million-year prospects of humanity?”
Because it is in fact much easier to predict the major threats facing humanity in this century than to predict which specific companies will commit fraud. The former is just a property of the universe, available to discover on inspection. The latter is being actively hidden by precisely the person with the best ability and most incentive to hide it.
“What would MacAskillian calculations make of CMC’s modest beginning as a single-bed clinic?”
I don’t know about MacAskill specifically, but CMC’s founding came at a time before AI was a looming threat. We are generally very attentive to neglected and important global health interventions.
“If malaria eradication in Bangladesh is one’s greatest passion, the least that can be done is to live and work amongst the people of Chittagong’s hill tract districts.”
To be self-consistent, shouldn’t you be living among the people of Chittagong’s hill tract districts if you’re going to use them in your piece?
It’s not clear whether SBF was saying “the whole EA/utilitarianism thing was just a front; what I actually care about is myself” (as you seem to suppose) or “all the stuff I was saying to regulators was just a front; what I actually care about is utilitarianism”. (This is not a counterargument; I’m just pointing out the ambiguity.)
I think the implication here is that EA is to some extent self-serving. I don’t think SBF’s political donations are good evidence of that. Some of his spending was for pandemic prevention and other EA causes, which seems pretty good to me (or would’ve been had the money not been gained through fraud). But some of his spending was likely for business purposes, e.g. to get better-for-FTX crypto regulation, and not guided by EA principles.
I do think it was a problem that it wasn’t clear how Protect Our Future (SBF’s PAC) selected its candidates, a point raised here. It’s still not clear to me to what extent the recipients of SBF’s political donations supported pandemic prevention and other EA causes rather than simply being pro-crypto (except Carrick Flynn, who was definitely more pro-EA than pro-crypto).
As AllAmericanBreakfast points out, this is actually an EA argument. If you think there are more cost-effective ways of spending money, we will listen to your argument! But then you need to actually make the argument. (Effective Ventures works on supporting the EA community. The argument for why it’s good to spend resources on EA movement-building is available online and can be freely refuted.)
RCTs are useful, but it isn’t always possible or practical to run them. Overemphasis on RCTs and other forms of hard evidence is a criticism that has repeatedly been levelled against EA itself.
Take for example something obviously good, like developing Covid vaccines in early 2020. Of course there were plenty of RCTs later on testing whether the vaccines were safe and effective. But there was no RCT in March 2020 testing whether investing in Covid vaccines was a good idea, nor would there have been any reason to run one even if it had been possible. Developing Covid vaccines was just great in expectation. You have previous examples of vaccines being effective; you have previous examples of diseases like Covid being costly in terms of lives and money; you can put two and two together.
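To make the structure of that reasoning explicit, here is a stylised expected-value sketch. The numbers are invented purely for illustration, not estimates anyone actually used in March 2020: suppose a vaccine programme costs $1B, has a 50% chance of producing a working vaccine, and a working vaccine averts $100B in losses. Then

$$\mathbb{E}[\text{value}] = 0.5 \times \$100\text{B} - \$1\text{B} = \$49\text{B} > 0.$$

Even at a pessimistic 5% success probability the expected value is still $4B, which is why no RCT on the investment decision itself was needed before acting.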
I think ultimately EA is doing ~the right thing here: trying to get the best evidence available and funding the causes that are best in expectation, while being open to changing course in light of new evidence. For example, GiveWell mainly focuses on expected value, but its Top Charities Fund excludes grants that are riskier (less certain).
Again, sometimes we don’t have scientific studies (or, more often, we do have them but they are flawed or noisy) and have to make decisions under considerable uncertainty. We can make judgments with the evidence we do have. (For what it’s worth, in my judgment, and generally speaking, making money via honest crypto brokerage is only very mildly harmful, if at all, whereas making money via fraud is highly harmful.)
This seems to suggest that MacAskill may have been paid by the Future Fund, that he may not have resigned, and that he more broadly has significantly benefited financially from FTX/EA. In fact, MacAskill wasn’t paid by the Future Fund, he did resign, and he donates everything above £26K a year.
I’m not sure what this means. Are you saying that SBF’s donation to Effective Ventures was a payment to MacAskill? That makes no sense to me. It was a payment to Effective Ventures; it did not go into MacAskill’s bank account. (I don’t know whether MacAskill is paid for his work on the EV board; he should be, but either way, again, he donates everything above £26K a year.)
This also makes no sense to me. What conflict of interest? As far as I can tell, MacAskill had one interest with regard to FTX: to use it to do good in the world. What’s the evidence that he has used it to financially benefit himself or anything of that sort?
This argument has also been made by Tyler Cowen: “Hardly anyone associated with Future Fund saw the existential risk to … Future Fund, even though they were as close to it as one could possibly be. I am thus skeptical about their ability to predict existential risk more generally, and for systems that are far more complex and also far more distant.”
It’s unclear to me what exactly MacAskill should’ve predicted. Should he have suspected that SBF was committing fraud in particular? That seems really hard to have known. He did consider the possibility that FTX might crash, but a mere crash doesn’t seem like a “threat to his own near-term future”.
While MacAskill and other EAs considered the hypothesis that FTX could crash, I think EAs didn’t really consider the hypothesis that it would crash due to unambiguously criminal activities like fraud. What does that tell us about EAs and existential risks? I think not much about EAs’ abilities to evaluate existential risks, but maybe it suggests there are existential risks that EAs haven’t even considered—unknown unknowns.
Btw, I think whether people endorse these arguments should play only a relatively small part in whether you believe them. You can also actually read the arguments and make up your own mind about how sound they seem.
Precision is important when trying to think and communicate clearly.
I’m not sure what the implication is here. That EA wouldn’t have recognised how good CMC would turn out to be, because it was only a fledgling organisation at the time? But EA is in the business of founding new charities, which always start small.
Sharing as equals sounds great. But the world isn’t fair, and we’re not all equal in our needs and capabilities. Some of us have much more than others, and some of us need much more. If we only give within our communities, we exclude people who would receive no help otherwise, people who don’t have wealthy people in their communities with whom to share “as equals”. If everyone only helps within their own community, those people will simply be left out in the cold.
Inequalities between countries are far more substantial than inequalities within countries. As this tweet puts it: “Would you rather donate to an American with $500K income, or one at the poverty line of $13,590? That’s roughly the income difference between someone at the US poverty line and an average GiveDirectly Africa recipient (~$1/day).”
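A quick check of the tweet’s arithmetic (taking ~$1/day to mean roughly $365 a year):

$$\frac{\$500{,}000}{\$13{,}590} \approx 37 \qquad \text{and} \qquad \frac{\$13{,}590}{\$365} \approx 37.$$

The multiple separating a $500K earner from the US poverty line is about the same as the multiple separating the US poverty line from an average GiveDirectly recipient.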
In my opinion, the reason to help someone isn’t that they may become friends with your son; it’s simply that they need help and can be helped.
There are opportunity costs to spending money. So yeah, you can spend $6K to put a child through three years of K-12 boarding school, or you can spend $5K to save a child’s life. I think both of these things are good, but one of the two is clearly better. Every time you choose to put a kid through three years of K-12, you let another kid die. That’s a horrible observation, but nevertheless one we need to reckon with.
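To spell out the opportunity cost using the piece’s own figures ($2,000 per sponsorship-year, so $6,000 for three years, against the roughly $5,000 it takes to save a life through a top-rated charity):

$$\frac{\$6{,}000}{\$5{,}000 \text{ per life}} = 1.2 \text{ lives.}$$

Each three-year sponsorship costs about 1.2 lives that the same money could have saved elsewhere.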