To answer your second question: I think it's in the nature of seeking "systemic change" that it depends upon speculative judgment-calls, rather than the sort of robust evidence one gets for global health interventions.
I don't think that "crafting a hypothetical" is enough. You need to exercise good judgment to put longtermism into practice. (This is a point I've previously made in response to Eric Schwitzgebel too.) Is any given attempt at longtermist outreach more likely to sway (enough) people positively or negatively? That's presumably what the grantmakers have to try to assess, on a case-by-case basis. It's not like there's an algorithm they can use to determine the answer.
Insofar as you're assuming that nothing could possibly be worth doing unless supported by the robust evidence base of global health interventions, I think you're making precisely the mistake that the "systemic change" critics (mistakenly) accuse EA of.
The post-hoc rationalization I'm referring to is the statement: "Note that this grant was made at the very peak of the period of very abundant (partially FTX-driven) EA funding where finding good funding opportunities was extremely hard."
If it wasn't a good opportunity, why was it funded?
That doesn't sound like post-hoc rationalization to me. They're just providing info on how the funding bar has shifted. A mediocre opportunity could be worth funding when the bar is low (as long as the risks are also low).
I do think there are things worth funding for which evidence doesn't exist. The initial RNA vaccine research relied on good judgement around a hypothetical, and had a hard time getting funding for lack of evidence. It ended up being critical to saving millions of lives.
I think there are more ways some sort of evidence can be included in grant-making. But the core of the criticism is about judgement, and I think a $100k grant for six months of a video game developer's time, or $50k grants to university student group organizers, represent poor judgement (EAIF and LTFF grants). These grants have caused reputational harm to the movement, and that should have been easy to foresee. What has been the hit to fundraising for EA global health and animal welfare causes from the fallout from bad longtermism bets (FTX/SBF included)?
On the rationalization: perhaps it isn't a post-hoc rationalization so much as an excuse. It is saying "the funding bar was low, but we still think the expected value of the video game is more important than 25 lives". That's pretty crass. And probably worse than just the $100k counterfactual, because of reputational spillover to other causes.
Presumably there's some probability X of averting doom that you would consider more important than 25 statistical lives. I'd also guess that you'd agree this is true for some rather low but non-Pascalian probabilities. E.g., I predict that if you thought about the problem even briefly, you'd agree the above claim is true for X = 0.001%, not just, say, 30%.
(To be clear, I'm definitely not saying that the grant's effect size is >0.001% in expectation.)
So then the real disagreement is either (a) what X ought to be (where I presume you have a higher number than LTFF), or (b) whether the game is above X.[1]
Stated more clearly: I think your disagreement with the grant is "merely" a practical disagreement about effect sizes, whereas your language here, if taken literally, is not actually sensitive to the effect size.
(My own guess is that the grant was not above the 2022 LTFF bar, but that's an entirely different line of reasoning.) And of course, implicitly I believe the 2022 LTFF bar was above the 2022 GiveWell bar by my lights.
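To see the arithmetic behind the X = 0.001% claim, here is a back-of-the-envelope sketch. Every input is an assumption chosen for illustration: a GiveWell-style cost per life saved of roughly $5,000, and only the 8 billion people alive today as the stake (a longtermist would use a far larger number, which only strengthens the point):

```python
# Back-of-the-envelope breakeven calculation; all inputs are illustrative assumptions.
grant_cost = 100_000        # USD, the video game grant
cost_per_life = 5_000       # USD per life saved, rough GiveWell-style figure (assumed)
lives_forgone = grant_cost / cost_per_life  # ~20 statistical lives

stake = 8e9                 # people alive today; a deliberately conservative stake

# Smallest probability of averting doom that matches the forgone lives:
breakeven_p = lives_forgone / stake
print(f"{lives_forgone:.0f} lives forgone; breakeven probability = {breakeven_p:.1e}")
# -> 20 lives forgone; breakeven probability = 2.5e-09 (about 0.00000025%)
# Even X = 0.001% (1e-5) exceeds this by several orders of magnitude, which is
# why the disagreement reduces to whether the grant's effect is actually above X.
```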
A butterfly flaps its wings and causes a devastating hurricane to form in the tropics. Therefore, we must exterminate butterflies, because there is some small probability X that doing so will avert a hurricane disaster.
But it could just as easily be the case that the butterfly's flaps prevent devastating hurricanes from forming. Therefore, we must massively grow their population.
The point being: it can be practically impossible to understand the causal tree, and to get even the sign right, around low-probability events.
That's what I take issue with: it's not just the numbers, it's the structural uncertainty of cause-and-effect chains when you consider really low-probability events. Expected value is a pretty bad tool for action-relevant decision-making when you are dealing with such numerical and structural uncertainty. It's perhaps better to pick a framework like "it's robust under multiple decision theories" or "pick something that has the least downside risk".
In our instance, two competing plausible structural theories among many are something like:
"game teaches someone an AI safety concept → makes them more knowledgeable or inspires them to take action → they work on AI safety → solve the alignment problem → future saved"
vs.
"people get interested in doing the most good → see a community of people that claim to do that, but that fund rich people to make video games → causes widespread distrust of the movement → strong social stigma develops against people that care about AI risk → greatly narrowed range of people/worldviews because people don't want to associate → makes it near impossible to solve the alignment problem → future destroyed"
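To make this concrete, here is a toy sketch (all numbers invented) of how sign uncertainty between two structural models like these makes the EV ranking flip on a probability nobody can estimate, whereas a least-downside-risk rule gives a stable answer:

```python
# Toy model of structural (sign) uncertainty; every number here is invented.
models = {
    "outreach chain holds": +1_000_000,  # value if the rosy causal chain is right
    "backlash chain holds": -1_000_000,  # value if the reputational chain is right
}

def expected_value(p_rosy: float) -> float:
    """EV of the grant given a credence that the rosy chain is the true structure."""
    return (p_rosy * models["outreach chain holds"]
            + (1 - p_rosy) * models["backlash chain holds"])

# The EV's sign flips on a tiny change in an unknowable probability:
print(expected_value(0.51))  # +20000.0
print(expected_value(0.49))  # -20000.0

# A least-downside-risk rule compares worst cases instead. The grant's worst
# case is -1,000,000, while a robust option (e.g. a proven health intervention)
# is positive under both structural stories, so it wins regardless of p_rosy.
```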
The justifications for these grants tend to use some simple expected value calculation of a singular rosy hypothetical causal chain. The problem is it's possible to construct a hypothetical value chain to justify any sort of grant. So you have to do more than just make a rosy causal chain and multiply numbers through. I've commented before on some pretty bad ones that don't pass the laugh test among domain experts in the climate and air quality space.
The key lesson from early EA (evidence-based giving in global health) was that it is really hard to understand whether the thing you are doing is having an impact, and what the valence of that impact is, for even short, measurable causal chains. EA's popular causes now (longtermism) seem to jettison that lesson, when it is even more unclear what the impact and sign are through complicated low-probability causal chains.
So it's about a lot more than effect sizes.
The justifications for these grants tend to use some simple expected value calculation of a singular rosy hypothetical causal chain. The problem is it's possible to construct a hypothetical value chain to justify any sort of grant. So you have to do more than just make a rosy causal chain and multiply numbers through.
Worth noting that even GiveWell doesn't rely on a single EV calculation either (however complex). Quoting Holden's 10-year-old writeup Sequence thinking vs. cluster thinking:
Our approach to making such comparisons strikes some as highly counterintuitive, and noticeably different from that of other "prioritization" projects such as Copenhagen Consensus. Rather than focusing on a single metric that all "good accomplished" can be converted into (an approach that has obvious advantages when one's goal is to maximize), we tend to rate options based on a variety of criteria using something somewhat closer to (while distinct from) a "1=poor, 5=excellent" scale, and prioritize options that score well on multiple criteria.
We often take approaches that effectively limit the weight carried by any one criterion, even though, in theory, strong enough performance on an important enough dimension ought to be able to offset any amount of weakness on other dimensions.
… I think the cost-effectiveness analysis we've done of top charities has probably added more value in terms of "causing us to reflect on our views, clarify our views and debate our views, thereby highlighting new key questions" than in terms of "marking some top charities as more cost-effective than others."
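For intuition, a minimal sketch of the kind of aggregation Holden describes; the criteria, ratings, and cap below are made up for illustration and are not GiveWell's actual rubric:

```python
# Sketch of 'cluster thinking': rate options on several 1-5 criteria and cap
# how much any single criterion can contribute, rather than letting one huge
# score offset weakness everywhere else. All criteria/ratings are invented.

def cluster_score(ratings: dict, cap: int = 4) -> int:
    # The cap means a '5' on one dimension cannot dominate the total,
    # unlike a pure expected-value sum where one term can swamp the rest.
    return sum(min(r, cap) for r in ratings.values())

speculative_grant = {"evidence": 1, "upside": 5, "tractability": 2, "downside risk": 1}
robust_charity    = {"evidence": 4, "upside": 3, "tractability": 4, "downside risk": 4}

print(cluster_score(speculative_grant))  # 8  -- the huge 'upside' rating is damped
print(cluster_score(robust_charity))     # 15 -- scores well on multiple criteria
```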
I mean, there are pretty good theoretical reasons for thinking that anything that's genuinely positive for longtermism has higher EV than anything that isn't? Not really sure what's gained by calling the view "crass". (The wording may be crass, but you came up with the wording yourself!)
It sounds like you're just opposed to strong longtermism. Which is fine, many people are. But then it's weird to ask questions like, "Can't we all agree that GiveWell is better than very speculative longtermist stuff?" Like, no, obviously strong longtermists are not going to agree with that! Read the paper if you really don't understand why.
These grants have caused reputational harm to the movement, and that should have been easy to foresee. What has been the hit to fundraising for EA global health and animal welfare causes from the fallout from bad longtermism bets (FTX/SBF included)?
I really don't think it's fair to conflate speculative-but-inherently-innocent "bets" of this sort with SBF's fraud. The latter sort of norm-breaking is positively threatening to others: an outright moral violation, as commonly understood. But the "reputational harm" of simply doing things that seem weird or insufficiently well-motivated to others seems very different to me, and probably not worth going to extremes to avoid (or else you can't do anything that doesn't sufficiently appeal to normies).
Perhaps another way to put it is that even longtermists have obvious reasons to oppose SBF's fraud (my post that you linked to suggested that it was negative-EV for longtermist goals). But I think strong longtermists should generally feel perfectly comfortable defending speculative grants that are positive-EV, where the only "risk" is that others don't judge them so positively. People are allowed to make different judgments (as long as they don't harm anyone). Let a thousand flowers bloom, and all that.
Insofar as your real message is, "Stop doing stuff that looks weird, even if it is perfectly defensible by longtermist lights, simply because I have neartermist values and disagree with it," then that just doesn't actually seem like a reasonable ask?
I think that longtermism relies on more popular, evidence-based causes like global health and animal welfare to do its reputational laundering through the EA label. I don't see any benefit to global health and animal welfare causes from longtermism. And for that reason I think it would be better for the movement to split into "effective altruism" and "speculative altruism", so the more robust global health and animal welfare cause areas don't have to suffer the reputational risk and criticism that is almost entirely directed at the longtermism wing.
Given the movement is essentially driven by Open Philanthropy, and they aren't going to split, I don't see such a large movement split happening. So I may be inclined towards some version of, as you say, "Stop doing stuff that looks weird, even if it is perfectly defensible by longtermist lights, simply because I have neartermist values and disagree with it." The longtermist stuff is maybe like 20% of funding and 80% of reputational risk, and the most important longtermist concerns can be handled without the really weird speculative stuff.
But that's irrelevant, because I think this ought to be a pretty clear case of the grant not being defensible by longtermist standards. Paying Bay Area software development salaries to develop a video game (why not a cheap developer literally anywhere else?) that didn't even get published is hardly defensible. I get that the whole purpose of the fund is to do "hits-based giving". But it's created an environment where nothing can be a mistake, because it is expected that most things will fail. And if nothing is a mistake, how can the fund learn from mistakes?
OK, so it sounds like your comparisons with GiveWell were an irrelevant distraction, given that you understand the point of "hits-based giving". Instead, your real question is: "why not [hire] a cheap developer literally anywhere else?"
I'm guessing the literal answer to that question is that no such cheaper developer applied for funding in the same round with an equivalent project. But we might expand upon your question: should a fund like LTFF, rather than just picking from among the proposals that come to them, try taking some of the ideas from those proposals and finding different (perhaps cheaper) PIs to develop them?
It's possible that a more active role in developing promising longtermist projects would be a good use of their time. But I don't find it entirely obvious the way that you seem to. A few thoughts that immediately spring to mind:
(i) My sense of that time period was that finding grantmakers was itself a major bottleneck, and given that longtermism seemed more talent-constrained than money-constrained at that time, having key people spend more time just to save some money presumably would not have seemed a wise tradeoff.
(ii) A software developer who comes to you with an idea presumably has a deeper understanding of it, and so could be expected to do a better job of it, than an external contractor to whom you have to communicate the idea. (That is, external contractors increase the risk of project failure due to miscommunication or misunderstanding.)
(iii) Depending on the details, e.g. how specific the idea is, taking an idea from someone's grant proposal to a cheaper PI might constitute intellectual theft. It certainly seems uncooperative/low-integrity, and not a good practice for grant-makers who want to encourage other high-skilled people with good ideas to apply to their fund!
To the downvoters: my understanding of negative karma is that it communicates "this comment is a negative epistemic contribution; its existence is bad for the discussion." I can't imagine that anyone of intellectual honesty seriously believes that of my comment. Please use "disagree" votes to communicate disagreement.
[Edit to add: I don't really think people should be downvoting Matthew's comments either. It's a fine conversation to be having!]