Apologies if this sounds harsh, but I think this is plausibly quite wrong and nonsubstantive. I am also somewhat upset that such an important topic is explored in a context where substantial personal incentives are involved.
One reason is that a post that does justice to the topic should explore possible return curves, and this post doesn’t even contextualize the bet with how much money EA had at the time (~$60B) or has now (~$20B) until the middle of the post, where it mentions it in passing: “so effectively increase the resources going towards them by more than 2-fold, and perhaps as much as 5-fold.” Arguing that some degree of risk aversion is, indeed, implied by diminishing returns is trivial and has few practical implications.
I wish I had time to write about why I think altruistic actors probably should take a 10% chance of $15B over a 100% chance of $1B. The reverse being true would imply a very roughly ≥3x drop in marginal cost-effectiveness upon adding $15B of funding. But I basically think there would be ways to spend that money scalably and at current “last dollar” margins.
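To see roughly where a figure like “≥3x” could come from, here is a minimal numerical sketch (my own illustration under an assumed model, not the actual calculation behind the claim): suppose marginal cost-effectiveness declines hyperbolically with extra funding, c(x) = 1/(1 + x/k) for extra funding x in $B and a free scale parameter k, so the value of an extra $X B is k·ln(1 + X/k).

```python
import math

def value(extra_billions: float, k: float) -> float:
    # Total value of an extra `extra_billions`, assuming marginal
    # cost-effectiveness c(x) = 1 / (1 + x / k); integrating gives k * ln(1 + x / k).
    return k * math.log(1 + extra_billions / k)

# Compare a certain +$1B against a 10% chance of +$15B for a few decline rates.
for k in (5, 8, 11, 15, 20):
    certain = value(1, k)        # 100% chance of $1B
    bet = 0.1 * value(15, k)     # 10% chance of $15B
    drop = 1 + 15 / k            # factor by which marginal c-e has fallen at +$15B
    winner = "certain $1B" if certain > bet else "10% of $15B"
    print(f"k={k:>2}: certain={certain:.2f}, bet={bet:.2f}, "
          f"marginal c-e drop at +$15B ≈ {drop:.1f}x -> {winner} wins")
```

Under this particular (assumed) functional form, the certain $1B only comes out ahead once marginal cost-effectiveness at +$15B has fallen by very roughly 2–3x, which is in the same ballpark as the figure above.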
In global health (GH), the claim that money could be spent at current margins sorta follows from how Open Philanthropy’s bar didn’t change that drastically in response to a substantial change in OP’s funds (short of $15B, but still), and I think OP’s GH last-dollar cost-effectiveness changed even less.
In longtermism, it’s more difficult to argue. But a bunch of grants that pass the current bar are “meh,” and I think we could probably find some large investments in the future that are better than the current marginal ones. If we had much more money in longtermism, buying a big stake in something like TSMC might be a good thing to do (and it preserves option value, among other things). And it’s not unimaginable that labs like Anthropic might want to spend tens of billions of dollars in the next decade(s) to match the potential AI R&D expenses of other corporate actors (I wouldn’t say that’s clearly good, but having the option to do so seems beneficial).
I don’t think the analysis above is conclusive or anything. I just want to illustrate what I see as a big methodological flaw of the post (not looking at actual return curves when talking about diminishing returns) and make a case, somewhat grounded in reality, for taking substantial positive-EV bets.
Hi Misha — with this post I was simply trying to clarify that I understood and agreed with critics on the basic considerations here, in the face of some understandable confusion about my views (and those of 80,000 Hours).
So saying novel things in order to avoid being ‘nonsubstantive’ was not the goal.
As for the conclusion being “plausibly quite wrong” — I agree that a plausible case can be made for either the certain $1 billion or the uncertain $15 billion, depending on your empirical beliefs. I don’t consider the issue settled, the points you’re making are interesting, and I’d be keen to read more if you felt like writing them up in more detail.[1]
The question is sufficiently complicated that it would require concentrated analysis by multiple people over an extended period to do it full justice, which I’m not in a position to do.
That work is most naturally done by philanthropic program managers for major donors rather than 80,000 Hours.
I considered adding some extra math regarding log returns and what it would imply in different scenarios, but opted not to because i) it would take too long to polish, ii) it would probably confuse some readers, and iii) it could lead to too much weight being given to a highly simplified model that deviates from reality in important ways. So I just kept it simple.
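For readers curious what such a simplified model could look like, here is a minimal sketch assuming logarithmic returns on total aligned resources; the ~$20B and ~$60B baselines come from the thread above, and the functional form is exactly the kind of highly simplified assumption flagged in the previous paragraph.

```python
import math

def compare(baseline_b: float) -> None:
    # Expected gain in log(total resources), with amounts in $B.
    certain = math.log(baseline_b + 1) - math.log(baseline_b)       # 100% chance of +$1B
    bet = 0.1 * (math.log(baseline_b + 15) - math.log(baseline_b))  # 10% chance of +$15B
    print(f"baseline ${baseline_b:.0f}B: certain +$1B -> {certain:.4f}, "
          f"10% of +$15B -> {bet:.4f}")

for baseline in (20, 60):  # baselines mentioned earlier in the thread
    compare(baseline)
```

How much weight to put on a model like this is, of course, the point at issue: the answer is sensitive to the choice of baseline (whose resources count?) and to whether log returns are the right shape at all.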
I’d just note that maintaining a controlling stake in TSMC would tie up >$200 billion. IIRC that’s on the order of 100x as much as has been spent on targeted AI alignment work so far. For that to be roughly as cost-effective as present marginal spending on AI or other existential risks, it would have to be very valuable indeed (or you’d have to think current marginal spending was of very poor value).