Not sure what this is, but flagging that the link doesn’t seem to lead anywhere when I try it.
WaPo Editorial Board: The failed philanthropy of Sam Bankman-Fried
There’s an “economic growth” topic on the EA Forum (under the parent topic of Global Health & Development). Is that distinct from what you mean by Global Development?
In a separate but related vein, are there any organizations / funds that are EA-aligned and working in this area?
Just seeing this, but yes it was a quote from the original piece! FWIW I appreciate your use of “weird” vs. the original author’s more colorful language (though no idea if that’s what your pre-edit comment was in reference to)
Sharing my reflections on the piece here (not directly addressing this particular post, but rather reflections I originally shared with a friend).
While I agree with lots of points the author makes and think he raises valuable critiques of EA, I don’t find his arguments related to SBF especially compelling. My run-through of the perceived problems within EA that the author describes, and my reactions:
1. The dominance of philosophy. I personally find parts of long-termism kooky and I’m not strongly compelled by many of its claims, but the Vox author doesn’t explain how this relates to SBF (or his misdeeds)… it feels more like shoehorning a critique of EA into a piece on SBF?
2. Porous boundaries between billionaires and their giving. So yes, it sounds like SBF was very directly involved in the philanthropy his funds went toward, but I don’t think that caused (much? any?) incremental reputational harm to EA vs. a world where he created the “SBF family foundation” and had other people running the organization.
If I wanted to rescue this argument, maybe I could say SBF’s behavior here is representative of a common trait of his (at FTX and in his charity): SBF doesn’t even have the dignity to surround himself with yes-men; he insists on doing it all himself! And maybe that’s a red flag re: cult of personality/genius and/or fraud that EA should have caught on to.
I will say, though, that the FTX Future Fund had a board/team that was fairly star-studded and ran a big re-granting program (i.e., it let others make grants with their money). Which is to say, I’m not sure how directly involved SBF actually was in the giving. [As an aside, I think it’s fine for billionaires to direct their own giving, and I’m a lot more suspicious of non-profit bloat and organizational incentives than the Vox author is.]
3. Utilitarianism free of guardrails. I agree a lack of guardrails is a problem, but:
a) On utilitarianism’s own account it seems to me you should recognize that if you commit massive fraud you’ll probably get caught and it will all be worthless (+ cause serious reputational harm to utilitarianism), so then committing the fraud is doing utilitarianism wrong. [I don’t think I’m no-true-Scotsman-ing here?]
b) More importantly… the author doesn’t explain how unabashed utilitarianism led to SBF’s actions. He’s vaguely hand-waving, making a point by association rather than through actual causal reasoning or proof, in the same vein as the dominance-of-philosophy point above. I guess the steelman is: SBF wanted to do the most good at any cost, and genuinely thought the best way to do so was to commit fraud(?). A bit tough for me to swallow.
4. Utilitarianism full of hubris. A rare reference to evidence (well, an unconfirmed account, but at least it’s something!). Comparing the St. Petersburg paradox (sketched below) to SBF figuring he could double-or-nothing his way out of letting Alameda default is an interesting point to make, but SBF’s take on this was so wild as to surprise other EAs. So it strikes me as a point in favor of “SBF had absurd viewpoints and his actions reflect that” vs. “EA enabled SBF.” Meanwhile, the author moves directly from this anecdote to “This is not, I should say, the first time a consequentialist movement has made this kind of error” (emphasis added). SBF != the movement, and I think the consensus EA view is the opposite of SBF’s, so this feels misleading at best.
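For anyone unfamiliar with the paradox, here’s a quick sketch of the expected-value logic at issue (my own illustration, using the 51/49 double-or-nothing framing SBF reportedly endorsed in interviews, not figures from the piece). Suppose you bet everything at 51/49 odds to double or lose it all, and repeat the bet k times:

\[
\mathbb{E}[\text{one bet}] = 0.51 \cdot 2 + 0.49 \cdot 0 = 1.02
\]
\[
\mathbb{E}[k \text{ bets}] = 1.02^{k} \to \infty \quad \text{while} \quad \Pr[\text{never going bust}] = 0.51^{k} \to 0
\]

Naive expected-value maximization says to keep betting forever even as ruin becomes a near-certainty; that’s the move the anecdote attributes to SBF, and (per the above) the opposite of what most EAs seem to endorse.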
One EA critique in the piece that resonated with me—and that I’m not sure I’d seen put so succinctly elsewhere—is:
“The philosophy-based contrarian culture means participants are incentivized to produce ‘fucking insane and bad’ ideas, which in turn become what many commentators latch to when trying to grasp what’s distinctive about EA.”
While not about SBF, it’s a point I don’t see us talking about often enough with regard to EA perceptions / reputation and I appreciated the author making it.
TL;DR: I thought it was an interesting and thought-provoking piece with some good critiques of EA, but the author (or—perhaps more likely—editor who wrote the title / sub-headers) bit off more than they could chew in actually connecting EA to SBF’s actions.
Thanks Adina! Agree it’s an awesome tool; the link was in my draft but I really should have incorporated it!
Taking the tool “one step further” (e.g., trying to size the impact of each intervention in a more standardized manner) is probably one of the most clear-cut (and possibly high-return) next steps a funder could take if they were interested in further pursuing the topic.
I know the footnotes in this piece don’t currently work :( I pasted my write-up from a Google doc based on this guidance but it seems something broke in my attempt. If anyone here can help me figure out how to get those sorted, that’d be much appreciated!
Relatedly, two upfront notes I’d have liked to add toward the start but couldn’t get to work as footnotes in the editor:
1. Almost all of the data I used in this piece came from the Texas A&M Transportation Institute’s (TTI) annual Urban Mobility Report, which is not peer-reviewed. It seems to be the only real game in town on the topic of traffic’s scale and effects, and is incredibly thorough. I spoke to David Schrank, one of its co-authors, in drafting this piece and made sure I had a (very) surface-level understanding of TTI’s methodology, but ultimately my findings do hinge largely on their work. This goes without saying, but further review is warranted before considering allocating meaningful resources accordingly.
2. COVID dramatically altered the traffic landscape over the last several years, and is likely to leave a lasting mark. How lasting remains to be seen—TTI’s latest report is based on 2020 data—but in my analyses I generally rely on pre-COVID (2014-2019) data. It’s worth being explicit that, at its peak, COVID dramatically reduced traffic, and the work-from-home policies it begot will almost certainly lead to a step-change in traffic moving forward. While in some sense this means low-hanging fruit has already been plucked, COVID has also shifted the “Overton window,” allowing for discussion of opportunities that a few short years back seemed far-fetched.
New cause area: Traffic congestion
I haven’t read any of Blattman’s writings, but in case I’m not too late and these aren’t being covered, I’d be curious to hear his thoughts on:
The impact of international institutions in regard to war (e.g., do they help prevent and/or end wars, are they merely an extension of power by different means, and do these examples represent institutionalism and realism respectively, which perhaps he thinks we should be “down with”)
The impact of nuclear weapons on willingness to fight (do they, in his view, help prevent war)
For what it’s worth, I took a course on Causes of War in college ~ten years back with Professor Gary Bass, and I still have the syllabus alongside a summary of a few of the assigned readings. They’re raw, but if you’re still looking for inspiration I’m happy to share them for you to skim.
“On the other hand, taxes are not entirely ‘money lost’—a good part of government spending goes into causes that you may not be entirely averse to—although it’s hard to tell what a marginal dollar will do, e.g. whether it will be used to cut the taxes of millionaires, or to provide social benefits to the poor.”
To your point on marginal impact—governments certainly don’t spend money they take in dollar for dollar, and in fact it seems the correlation between intake and expenditure is quite far from 1:1. US government debt is on the order of trillions of dollars, so while it’s maybe slightly better than flushing your money down the toilet, I’m not sure I’d value it much higher.
Got it—thanks for taking the time to respond!
“Personally, I would donate to the Long Term Future Fund over the global health fund, and would expect it to be perhaps 10-100x more cost-effective (and donating to global health is already very good). This is mainly because I think issues like AI safety and global catastrophic biorisks are bigger in scale and more neglected than global health. Coming up with an actual number is difficult – I certainly don’t think they’re overwhelmingly better.”
Not to pick nits, but what would you consider “overwhelmingly better”? 1000x? I’d have said 10x, so I’m curious to understand how differently we’re calibrated / the scales we think on.
Should “reduction” in the quote below (my emphasis) read “increase”?
“This is hard to justify intuitively—it implies that we should ignore the near-term costs, and (taken to the extreme) could justify almost any atrocity in the pursuit of a miniscule reduction of long-term value.”
Posting as an individual who is a consultant, not on behalf of my employer
Let me start off by saying that’s an interesting question, and one I can’t give a highly confident answer to because I don’t know that I’ve ever had a conversation with a colleague about truth qua truth.
That said, my short answer would be: I think many of us care about truth, I think our work can be shaped by factors other than truth-seeking, and I think if the statement of work or client need is explicitly about truth / having the tough conversations, consultants wouldn’t find it especially hard to deliver on that. The only factor particular to consulting that I could see weighing against truth-seeking is the desire to sell future work to the client… but to me that’s resolved by clients making clear that truth is what they value, which would keep incentives well-aligned.
My longer answer...
I think most of my colleagues do care about truth, and are willing to take a firm stance on what they believe is right even if it’s a tough message for the client to hear. [Indeed, I’ve explicitly heard firm leadership share examples of such behavior… which I think is an indicator that a) it does happen but b) it’s not a given, which ties to...]
...I think there’s a recognition that at the end of the day, we have formal signed statements of work regarding what our clients expect us to deliver, and our foremost obligation is to deliver according to that contract (and secondarily, to their satisfaction) rather than to “truth”
If our contracts were structured in a more open-ended manner or explicitly framed around us delivering the truth, I see no reason (other than the aforementioned) why we would do anything other than provide that honest perspective
I wonder to what extent employees of EA organizations feel competing forces against truth (e.g., needing to keep their jobs, not wanting to rock the boat or say controversial things that could upset donors); I think you could make a case that consultants are actually better poised to do some of that truth-seeking, e.g., if it’s a true one-off contract
To your 2nd question about >70%:
I don’t think this framing is really putting your original question another way (to sprinkle in some consulting-ese, I think “the question behind your question” is something else)
That said, my “safe,” not-super-helpful, and please-don’t-selectively-quote-this-out-of-context answer is less than half the time...
...But that’s because most of the work I (and, I’d venture to say, most of us) do isn’t about truth-seeking, so it’s not the sort of thing about which reasonable, well-intentioned people will have meaningful disagreement. Rather, the work is about further developing a client’s hypothesis, or helping them understand how best to pursue an objective, or helping them execute a process in which they lack expertise [all generally in the service of increasing client profitability]
Posting as an individual who is a consultant, not on behalf of my employer
Hi, one such consultant checking in! I had this post open from the moment I saw it in this week’s EA Forum digest, but… I (like many other consultants) work a silly number of hours during the work week, so I’m only just now reading the post in detail.
I’m a member of, but don’t run, the EACN network and my take is it’s a group of consultants interested in EA with highly varied degrees of familiarity / interest: from “oh, I think I’ve heard of GiveWell?” to “I’m only working here because GiveWell rejected my job application.”
80,000 Hours’ old career survey pointed me toward management consulting ~7-8 years ago (affirming a path I was already planning on following) and it’s the only full-time job I’ve had. I’d be surprised if any of us had ever had an EA client (the closest I’m aware of is the Bill & Melinda Gates Foundation), though I’ve unsuccessfully pitched my employer on doing pro-bono work with a top GiveWell charity.
I agree with Niklas that it’d make sense for EA groups to start off by hiring existing consultants / consultancies to prove out the use case and demand before expecting a boutique firm to get off the ground, but… as a matter of practice, here’s what I imagine would happen:
You’d be set up with the global health / social impact / non-profit side of the consultancy (while plenty of us, myself included, do commercial work—and so would never hear about the project)
The “expertise” would come from the more senior members of the consultancy (e.g., Partners), who might know a lot about, say, global health but are less likely to be familiar with EA (both because they’re older and because they’ve built a book of business with the sorts of companies that pay for consulting… which hasn’t been EA)
The “brawn” would come from generalists (this is where you’d find some EA-aligned folks), who are usually not selected for projects based on their own content expertise
You’d need a ton of consistent demand with a single consultancy to be able to “develop” experts, much less keep up a large enough pool of brawn with EA knowledge to reliably execute this work [which I think cuts in favor of the boutique firm model]. As soon as one project finishes I’m expected to move to the next, so unless something is actively sold and in need of a person at my tenure the very next day I’ll be moved on to something else for 3-6 months and won’t be pulled off even if a great EA project sells 1 week later
All that said, I’d venture to say almost every major corporation and government relies on generalist consultancies to varying degrees, even for fairly technical / specialized work. I think that should at least raise questions on how important EA-familiarity is for the work described above—it may be a narrower slice of work that really demands it than the author of this post imagines. [To be clear, not trying to shill here—I’m too junior to sell work myself—just sharing an “insider” perspective / trying to help re-calibrate priors.]
Really really impressive write-up; thanks for putting this together and hope it sparks more discussion on lead as well as more of these write-ups!
I’m not sure how to understand this line referring to the International Lead Association (ILA). Could you clarify whether the expectation is that ILA would be an ally or an opponent (or does Pure Earth not yet have a view either way)?
“Pure Earth believes them to be an ally or an opponent on a campaign to clean up informal lead battery recycling but we have not spoken to ILA ourselves.”
“Vanguard’s website does not state that they can accept cryptocurrency, but I confirmed with a representative that they take donations of cryptocurrency if the value of the contribution is at least $50,000.”
Schwab also told me (in Nov 2020) that they only accept cryptocurrency if the contribution is >$50,000, and their vendors charge a 1% fee on Bitcoin and a $3,500 flat fee for Ethereum. I spoke to Fidelity Charitable who told me they had no minimum contribution for cryptocurrency, but I didn’t inquire about fees.
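For perspective on those fee structures (my own back-of-envelope arithmetic, using only the figures above):

\[
\text{Bitcoin fee at the } \$50{,}000 \text{ minimum} = 0.01 \times \$50{,}000 = \$500
\]
\[
\text{Ethereum fee at the minimum} = \$3{,}500 \approx 7\%\text{, and it only falls to } 1\% \text{ at } \$3{,}500 / 0.01 = \$350{,}000
\]

So near the minimum contribution, Schwab’s Ethereum fee is far steeper in relative terms than its Bitcoin fee.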
Yasher Koach—I’m a fan! As an Orthodox Jew myself I’ve been collecting some EA-relevant halakhic/biblical texts on this “source sheet” to eventually get back around to. It needs a lot of fleshing out, not to mention much clearer structure; perhaps this project will be the kick in the pants I’ve needed.
I’m personally still grappling with the same sorts of tension referenced in Raffi’s post (linked above). Though I think a number of halakhic texts align quite neatly with an EA direction, a very well-known / internalized notion in the Orthodox Jewish world is the concept of aniyei ircha kodmim—the poor of your city come first, i.e., proximity matters—which of course is… less well-aligned with EA thinking.
Given that, I think there’s particular value in shining a light on those halakhic sources which emphasize the relative weighting of need and/or the imperative to save lives, to help foster more critical thinking among Orthodox Jews with regard to their giving, careers, volunteering, etc. Hopeful that can be folded into this project!
Agree with Josh’s take on Jews in EA and Effective Tzedakah (though I’d agree that, strictly speaking, the concept of tzedakah is broader than charitable giving). I think “Effective Altruism and Judaism” (maybe EAJ?) is my favorite! That said, re: “EA for Jews”—any chance you can ask the folks at EA for Christians how they feel the name has worked out for them?
Super interesting and clearly-written piece on a topic I knew next-to-nothing about! It definitely feels pertinent to global health interventions in and beyond tobacco.
Thanks for cross-posting here, and please keep sharing your updates as you write. I love how fact-based and even-handed you’re being in describing the varied issues and perspectives.