Just noticed Sam Bankman-Fried’s 80,000 Hours podcast episode where he sheds some light on his thinking in this regard.
I think the excerpt below is not far from the OP’s request that “if there is no BOTEC and it’s more ‘this seems plausibly good and we have enough money to throw spaghetti at the wall’, please say that clearly and publicly.”
Sam:
I think that being really willing to give significant amounts is a real piece of this. Being willing to give 100 million and not needing anything like certainty for that. We’re not in a position where we’re like, “If you want this level of funding, you better effectively have proof that what you’re going to do is great.” We’re happy to give a lot with not that much evidence and not that much conviction — if we think it’s, in expectation, great. Maybe it’s worth doing more research, but maybe it’s just worth going for. I think that is something where it’s a different style, it’s a different brand. And we, I think in general, are pretty comfortable going out on a limb for what seems like the right thing to do.
Rob:
I guess you might bring a different cultural aspect here because you come from market trading, where you have to take a whole lot of risk and you’ve just got to be comfortable with that or there’s not going to be much out there for you. And also the very risk-taking attitude of going into entrepreneurship — like double-or-nothing all the time in terms of growing the business.
I’ve had a worry that’s been developing over the last year that the effective altruism community might be a bit too conservative about its giving at this point. Because many of us, including me, got our start when our style of giving was pretty cash-starved — it was pretty niche, and so we developed a frugal mindset, an “I’ve got to be careful” mindset.
And on top of that, to be honest, as a purely aesthetic matter, I like being careful and discerning, rather than moving fast and doing lots of stuff that I expect in the future is going to look foolish, or making a lot of bets that could make me look like an idiot down the road. My colleague, Benjamin Todd, estimated last year that there’s $46 billion committed to effective altruist–style philanthropy — of course that figure is flying around all the time, but it’s probably something similar now — and according to his estimates, that figure had been growing at 35% a year over the last six years. So increasingly, it’s been growing much faster than we’ve been able to disburse these funds to really valuable stuff.
So I guess me and other people might want to start thinking that maybe the big risk that we should be worried about is not about being too careless, but rather not giving enough to what look like questionable projects to us now — because the marginal project in 10 years’ time is going to be noticeably more mediocre or noticeably less promising. Or alternatively, we might all be dead from x-risk already because we missed the boat.
Sam:
Completely agree. That is roughly my instinct: that there are a lot of things that you have to go out on a limb for. I think it’s just the right thing to do, and that probably as a movement, we’ve been too conservative on that front. A lot of that is, as you said, coming from a place where there’s a lot less funding and where it made sense to be more conservative.
I also just think, as you said, most people don’t like taking risks. And especially, it’s often a really bad look to say you’re trying to do something great for the world and then you have no impact at all. I think that feels really demoralizing to a lot of people. Even if it was the right thing to do in expectation, it still feels really demoralizing. So I think that basically fighting against that instinct is the right thing to do, and trying to push us as a community to try ambitious things nonetheless.
Very interesting, thanks. I read this as more saying ‘we need to be prepared to back unlikely but potentially impactful things’, and acknowledging the uncertainty in longtermism, rather than saying ‘we don’t think expected value is a good heuristic for giving out grants’, but I’m not confident in that reading. That probably reflects my personal framing more than anything else.
Oh, I read it as more the former too!

I read your post as:

- Asking if FTX have done something as explicit as a BOTEC for each grant, or if it’s more a case of “this seems plausibly good” (where both use expected value as a heuristic)
- If there are BOTECs, requesting they write them all up in a publicly shareable form
- Implying that the larger the pot, the more certain you should be (“these things have a much higher than average chance of doing harm. Most mistaken grants will just fail. These grants carry reputational and epistemic risks to EA.”)
I thought Sam’s comments served as partial responses to each of these points. You seem to be essentially challenging FTX to be a lot more certain about the impact of their grants (tell us your reasoning so we can test your assumptions and help you be more sure you’re doing the right thing; hire more staff, as Open Phil does, so you can put a lot more work into these evaluations; reduce the risk of potential downsides, because they’re pretty bad), and Sam here essentially seems to be responding “I don’t think we need to be that certain.” I can’t see where the expected value heuristic was ever called into question. Sorry if you thought that’s how I was reading this.
[Edit: Maybe when you say “plausibly good” you mean “negative in expectation but a decent chance of being good”, whereas I read it as “good in expectation but not as the result of an explicit BOTEC”? That might be where the confusion lies. If so, with my top-level comment I was trying to say “This is why FTX might be using heuristics that are even rougher than BOTECs and why they have a much smaller team than Open Phil and why they may not take the time to publish all their reasoning” rather than “This is why they might not be that bothered about expected value and instead are just funding things that might be good”. Hope that makes sense.]
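To make that distinction concrete, here’s a minimal toy sketch of the two readings of “plausibly good”. The numbers and the expected_value helper are invented purely for illustration; they aren’t drawn from any actual FTX or Open Phil grant evaluation:

```python
# Toy BOTEC with made-up numbers, purely to illustrate the two readings.

def expected_value(outcomes):
    """Sum of probability * impact over mutually exclusive outcomes."""
    return sum(p * impact for p, impact in outcomes)

# Reading 1: "plausibly good" = good in expectation, just never written up
# as a formal BOTEC. A long shot: 10% chance of a big win, else nothing.
long_shot = [(0.10, 1000), (0.90, 0)]

# Reading 2: "plausibly good" = a decent chance of doing good, but negative
# in expectation once the downside is priced in.
risky_bet = [(0.40, 100), (0.60, -100)]

print(expected_value(long_shot))  # 100.0 -- EV-positive despite low odds
print(expected_value(risky_bet))  # -20.0 -- "might be good" but EV-negative
```

On the first reading, FTX’s long shots are still positive-EV bets, just unquantified ones; on the second, they’d be funding negative-EV bets simply because they might pay off. Sam’s comments read to me as the former.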