I mean, there are pretty good theoretical reasons for thinking that anything that's genuinely positive for longtermism has higher EV than anything that isn't? Not really sure what's gained by calling the view "crass". (The wording may be crass, but you came up with that wording yourself!)
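To make the shape of that reasoning concrete (with purely illustrative numbers, not anyone's actual estimates): if a speculative grant has even a one-in-a-hundred-million chance of averting an existential catastrophe that would otherwise foreclose on the order of $10^{16}$ future lives, then its expected value is roughly

$$10^{-8} \times 10^{16} = 10^{8} \text{ lives},$$

which swamps the few thousand lives a similarly sized GiveWell-style grant might save. You can dispute the inputs, but that's the basic arithmetic the strong longtermist is relying on.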
It sounds like you're just opposed to strong longtermism. Which is fine; many people are. But then it's weird to ask questions like, "Can't we all agree that GiveWell is better than very speculative longtermist stuff?" Like, no, obviously strong longtermists are not going to agree with that! Read the paper if you really don't understand why.
These grants have caused reputational harm to the movement, and that should have been easy to foresee. What has been the hit to fundraising for EA global health and animal welfare causes from the fallout of bad longtermist bets (FTX/SBF included)?
I really don't think it's fair to conflate speculative-but-inherently-innocent "bets" of this sort with SBF's fraud. The latter sort of norm-breaking is positively threatening to others: an outright moral violation, as commonly understood. But the "reputational harm" of simply doing things that seem weird or insufficiently well-motivated to others seems very different to me, and probably not worth going to extremes to avoid (or else you can't do anything that doesn't sufficiently appeal to normies).
Perhaps another way to put it is that even longtermists have obvious reasons to oppose SBF's fraud (my post that you linked to suggested that it was negative-EV for longtermist goals). But I think strong longtermists should generally feel perfectly comfortable defending speculative grants that are positive-EV and where the only "risk" is that others don't judge them so positively. People are allowed to make different judgments (as long as they don't harm anyone). Let a thousand flowers bloom, and all that.
Insofar as your real message is, "Stop doing stuff that looks weird, even if it is perfectly defensible by longtermist lights, simply because I have neartermist values and disagree with it," then that just doesn't actually seem like a reasonable ask?
I think that longtermism relies on more popular, evidence-based causes like global health and animal welfare to do its reputational laundering through the EA label. I don't see any benefit to global health and animal welfare causes from longtermism. For that reason, I think it would be better for the movement to split into "effective altruism" and "speculative altruism", so the more robust global health and animal welfare cause areas don't have to suffer the reputational risk and criticism that is almost entirely directed at the longtermism wing.
Given that the movement is essentially driven by Open Philanthropy, and they aren't going to split, I don't see such a large movement split happening. So I may be inclined towards some version of, as you say, "Stop doing stuff that looks weird, even if it is perfectly defensible by longtermist lights, simply because I have neartermist values and disagree with it." The longtermist stuff is maybe 20% of the funding and 80% of the reputational risk, and the most important longtermist concerns can be handled without the really weird speculative stuff.
But that's irrelevant, because I think this ought to be a pretty clear case of the grant not being defensible by longtermist standards. Paying Bay Area software-development salaries to develop a video game (why not a cheap developer literally anywhere else?) that didn't even get published is hardly defensible. I get that the whole purpose of the fund is to do "hits-based giving". But it's created an environment where nothing can be a mistake, because it is expected that most things will fail. And if nothing is a mistake, how can the fund learn from mistakes?
Ok, so it sounds like your comparisons with GiveWell were an irrelevant distraction, given that you understand the point of "hits-based giving". Instead, your real question is: "why not [hire] a cheap developer literally anywhere else?"
I'm guessing the literal answer to that question is that no such cheaper developer applied for funding in the same round with an equivalent project. But we might expand upon your question: should a fund like LTFF, rather than just picking from among the proposals that come to it, try taking some of the ideas from those proposals and finding different (perhaps cheaper) PIs to develop them?
It's possible that a more active role in developing promising longtermist projects would be a good use of their time. But I don't find it entirely obvious the way that you seem to. A few thoughts that immediately spring to mind:
(i) My sense of that time period was that finding grantmakers was itself a major bottleneck, and given that longtermism seemed more talent-constrained than money-constrained at that time, having key people spend more time just to save some money presumably would not have seemed a wise tradeoff.
(ii) A software developer who comes to you with an idea presumably understands it more deeply, and so could be expected to execute it better, than an external contractor to whom you have to communicate the idea. (That is, external contractors increase the risk of project failure due to miscommunication or misunderstanding.)
(iii) Depending on the details, e.g. how specific the idea is, taking an idea from someone's grant proposal to a cheaper PI might constitute intellectual theft. It certainly seems uncooperative / low-integrity, and not a good practice for grant-makers who want to encourage other high-skilled people with good ideas to apply to their fund!