Why and how to be excited about megaprojects
tl;dr: We should value large expected impact[1] rather than large inputs, but should get especially excited about megaprojects anyway because they're a useful tool we're now unlocking.
tl;dr 2: It previously made sense for EAs to be especially excited about projects with very efficient expected impact (in terms of dollars and labour required). Now that we have more resources, we should probably be especially excited about projects with huge expected impact (especially but not only if they're very efficient). Those projects will often be megaprojects. But we should remember that really we're excited about capacity to achieve lots of impact, not capacity to absorb lots of inputs.
We should be excited about the blue and green circles, including but not limited to their overlaps with the orange circle. We should not be excited about the rest of the orange circle. Question marks are because estimating impact is hard, man. For more on refuges, see Concrete Biosecurity Projects (some of which could be big).
A lot of people are excited about EA-related megaprojects, and I agree that they should be. But we should remember that megaprojects are basically defined by the size of their inputs (e.g., "productively" using >$100 million per year), and that we don't intrinsically value the capacity to absorb those inputs. What we really care about is something like maximising the expected moral value of the world. Megaprojects are just one means to that end, and we should generally be even more excited about achieving the same impacts using fewer inputs & smaller projects (if there are ways to do that).
How can we reconcile these thoughts, and why should we in any case still get excited about, and pay special attention to, generating & executing megaproject ideas?
I suggest we think about this as follows:
Think of a Venn diagram with circles for megaprojects, projects with huge expected impact, and projects with very efficient expected impact.
Very roughly speaking, something like "maximising moral goodness" is the ultimate goal and always has been.
Given that we have limited resources, we should therefore be especially excited about efficient expected impact, and that has indeed been a key focus of EA thus far.
Projects like 80,000 Hours, FHI, and the book Superintelligence were each far smaller than megaprojects, but in my view probably had, and still have, huge expected impact, to an extent that'd potentially justify megaproject-level spending if that'd been necessary. That's great (it's basically even better than actual megaprojects!), and we'd still love more projects that can punch so far above their weight.
But efficiency is just a proxy. Always choosing the most cost- or labour-efficient option is not likely to maximise expected impact; sometimes it's worth doing something that's less efficient in terms of one or more resources if it has a sufficiently large total impact.
Meanwhile and in contrast, large projects (or large amounts of inputs) are also a useful proxy to have in mind when generating/considering project ideas, for three main reasons:
Many things that would achieve huge impact and are worth doing will be large, expensive projects.
As a community and as individuals, we've often avoided generating, considering, or executing such ambitious ideas, due to our focus on efficiency and perhaps our insufficient ambition/confidence. This leaves "megaprojects" as a fairly untapped area of potentially worthwhile ideas.
Coming up with, prioritizing among, and (especially) executing large projects involves some relatively distinct and generalisable skills, so the more we do that, the more we unlock the ability to do more of it in future.
This is basically a matter of people/teams/communities testing fit and building career capital.
This is one reason to sometimes pursue large projects rather than smaller projects with similar/greater impact. But this still isn't a matter of valuing the size of projects as an end in itself.
As we've gained and continue to gain more resources (especially money, human capital, political influence), efficiency is becoming a somewhat less useful proxy to focus on, "large projects" is becoming a somewhat more useful proxy to focus on, and we're sort-of "unlocking" an additional space of project options with huge expected impact.
So we should now be explicitly focusing a decent chunk of our attention on coming up with, prioritizing among, and executing megaproject ideas with huge expected impact, and on building capacity to do those things. (If we don't make an explicit effort to do that, we'll continue neglecting it via inertia.)
But we should remember that this is in addition to our community working toward smaller and/or more efficient projects. And we should remember that really we should first and foremost be extremely ambitious in terms of impacts, and just willing to also, as a means to that end, be extremely ambitious in terms of inputs absorbed.
Epistemic status: I spent ~30 mins writing a shortform version of this, then ~1 hour turning that into this post. I feel quite confident that what I'm trying to get across is basically true, but only moderately confident that I've gotten my message across clearly or that this is a very useful message to get across. The title feels kind-of misleading/off but was the best I could quickly come up with.
Acknowledgements: My thanks to Linch Zhang for conversations that informed my thinking here & for a useful comment on my shortform (though I imagine his thinking and framing would differ from mine at least a bit).
This post represents my personal views only.
[1] I'm using "expected impact" as a shorthand for "expected net-positive counterfactual moral impact".
So my understanding is as follows.
Imagine that we had these five projects (and only these projects) in the EA portfolio:
Alpha: Spend $100,000 to produce 1000 units of impact (after which Alpha will be exhausted and will produce no more units of impact; you can't buy it twice)
Beta: Spend $100,000,000 to produce 200,000 units of impact (after which Beta will be exhausted; you can't buy it twice)
Gamma: Spend $1,000,000,000 to produce 300,000 units of impact (after which Gamma will be exhausted; you can't buy it twice)
GiveDeltaly: Spend any amount of money to produce a unit of impact for each $2000 spent (GiveDeltaly cannot be exhausted and you can buy it as many times as you want).
Research: Spend $200,000 to create a new opportunity with the same "spend X for Y" profile as Alpha, Beta, Gamma, or GiveDeltaly.
Early EA (say ~2013), with relatively fewer resources (we didn't have $100M to spend), would've been ecstatic about Alpha, because it costs only $100 to buy one unit of impact, which is much better than Beta's $500 per unit, GiveDeltaly's $2000 per unit, or Gamma's $3333.33 per unit.
But "modern" EA, with lots of money and a shortage of opportunities to spend it on, would gladly buy Alpha first but would be more excited by Beta, because Beta allows us to deploy much more of our portfolio at better-than-baseline effectiveness.
(And no one would be excited by Gamma: even though it's a huge megaproject, it doesn't beat our baseline of GiveDeltaly.)
~
Now let's think of things as allocating an EA bank account and use Research. What should we use Research for? Early EA would want us to focus our research efforts on finding another opportunity like Alpha, since Alpha is very cost-effective! But modern EA would rather we look for opportunities like Beta: even though Beta is less effective than Alpha, it can absorb 1000x more funds!
Like say we have an EA bank account with $2,000,000,000. If we followed modern EA advice and bought Alpha, bought Beta, bought Research and used it to find another Beta, bought the second Beta, and then put the remainder into GiveDeltaly, we'd have 1,300,850 units of impact.
But if we followed Early EA advice and bought Alpha, bought Beta, bought Research and used it to find another Alpha, bought the second Alpha, and then put the remainder into GiveDeltaly, we'd have 1,151,800 units of impact. Lower total impact, even though we used Research to find a more cost-effective intervention!
This implies that the scalability of the projects we identify can matter just as much as, if not more than, their cost-effectiveness! I think this scalability mindset is often missed by people who focus mainly on cost-effectiveness, and it is the main reason IMO to think more about megaprojects.
But this also implies that scalability isn't the only thing that matters: no one wants to spend a dollar on Gamma, even though it is very scalable.
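The allocation arithmetic above can be recomputed with a short script. This is just a toy model of the hypothetical projects in this example; the function and variable names are my own.

```python
# Toy model of the hypothetical EA portfolio above. All dollar and
# impact figures come from the example in the text.
BUDGET = 2_000_000_000
RESEARCH_COST = 200_000            # one use of Research, as in both scenarios
GIVEDELTALY_DOLLARS_PER_UNIT = 2_000

alpha = (100_000, 1_000)           # (cost in dollars, units of impact)
beta = (100_000_000, 200_000)

def portfolio_impact(one_off_buys):
    """Total impact from buying Research once plus each listed
    (cost, impact) opportunity once, then putting whatever money
    is left into GiveDeltaly."""
    spent = RESEARCH_COST + sum(cost for cost, _ in one_off_buys)
    direct_impact = sum(units for _, units in one_off_buys)
    leftover = BUDGET - spent
    return direct_impact + leftover // GIVEDELTALY_DOLLARS_PER_UNIT

# "Modern EA": use Research to find a second Beta.
print(portfolio_impact([alpha, beta, beta]))   # 1300850
# "Early EA": use Research to find a second Alpha.
print(portfolio_impact([alpha, beta, alpha]))  # 1151800
```

Researching a second Beta wins despite Alpha's better cost-per-unit, because the second Alpha leaves far more of the budget stuck at GiveDeltaly's baseline rate.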
Very well put!
I would add that Scalability is already implicitly there in the ITN/SSN framework, at least if you take 80,000 Hours' description of Solvability at face value (i.e. "if we doubled the resources dedicated to solving this problem, what fraction of the problem would we expect to solve?"). Albeit, this is just my observation and not a common opinion.
With limited investment, more scalable projects will tend to have higher cost-effectiveness because they will still have plenty of room for more funding.
What is happening with the "modern" view is that, with more wealth, Scalability matters in two ways: i) as before, as a heuristic for the marginal value of the next dollar, and ii) as a heuristic for how many dollars are worth pumping into an opportunity.
So, Scalability has always mattered, but it has become even more important.
I agree with your final three paragraphs, but:
You seem to be implying that Scalability was one of the terms in ITN/SSN, which I think it never was.
The Ss have been Scale and Solvability, which aren't the same as Scalability.
iirc, Charity Entrepreneurship does account for scalability in their own weighted factor models or frameworks, but that's separate from ITN.
I don't think the ITN/SSN frameworks made the points in my post or in your final three paragraphs clear.
Those are primarily frameworks for prioritizing among problems, not projects.
"If we doubled the resources dedicated to solving this problem, what fraction of the problem would we expect to solve?" doesn't tell me how scalable a given project is. "Resources dedicated to solving this problem" would mean things like total resources dedicated to solving wild animal suffering or extreme climate change risks, not resources dedicated toward a given project.
You could have cases where a given project could grow to 100 times its current size without losing much cost-effectiveness per dollar, and yet the cost-effectiveness was fairly low to begin with or the problem area it's related to isn't very tractable.
You could also have cases where a project is very cost-effective and is in a very tractable area but isn't very scalable.
Scale, Tractability, and Neglectedness are also often used to evaluate intervention or project ideas, but in that case Scale is used to mean things like "How big would the impacts be if the project were successful?" or "How big a problem is this aiming to tackle?", rather than things like "How large can this project grow to while remaining somewhat cost-effective?"
Yes, what I was trying to say was that in my opinion the word "Scalability" is a good match for 80,000 Hours' stated definition of Solvability. In practice, Solvability and Tractability are not used as if they represent Scalability. I think this is a shame, as: a) I think Scalability makes sense given the mathematical intuition for ITN developed by Owen Cotton-Barratt, and b) I think there is a risk of circular logic in how people use Solvability/Tractability (e.g. they judge them based on a sense of the marginal cost-effectiveness of work on a problem).
I agree that ITN/âSSN are clearly framed as frameworks for problems not projects.
I agree with your examples in your point 2. I'm not sure if you're making a larger point though? For projects we can just define scalability as: "if we doubled the resources dedicated to this project, by what fraction would we increase its impact?"
Regarding your point 3, for me "How big would the impacts be if the project were successful?" and "How large can this project grow to while remaining somewhat cost-effective?" are the same thing in practice. That is, my natural instinct is to define success as expanding to the limits of reasonable cost-effectiveness. I would say this is scale at the "solution level".
"How big a problem is this aiming to tackle?" is different, of course, as it's at the "problem level".
By the way, you can also define Scale as "How much impact has this project had so far?"
However you define Scale, if you then divide it by the amount of resources invested to achieve that scale, you'll get an "average" cost-effectiveness. But to get the marginal cost-effectiveness you need to factor in Scalability, because as the project grows, its impact per unit of resources will generally decline. Whether we call the marginal value being closer to the average good "solvability" or good "scalability" seems like a matter of taste.
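The average-versus-marginal distinction can be made concrete with a toy diminishing-returns curve. The square-root relationship below is purely an illustrative assumption, not a claim about any real project:

```python
import math

# Assume, purely for illustration, that a project's total impact grows
# with the square root of dollars invested (i.e. diminishing returns).
def impact(dollars):
    return 100 * math.sqrt(dollars)

def average_cost_effectiveness(dollars):
    # Total impact so far divided by total dollars invested.
    return impact(dollars) / dollars

def marginal_cost_effectiveness(dollars, step=1.0):
    # Extra impact produced by the next `step` dollars.
    return (impact(dollars + step) - impact(dollars)) / step

# At $1M invested, the average looks twice as good as the marginal:
print(average_cost_effectiveness(1_000_000))   # 0.1 units per dollar
print(marginal_cost_effectiveness(1_000_000))  # ~0.05 units per dollar
```

Dividing cumulative impact by cumulative spend (the "average") overstates what the next dollar buys whenever returns are diminishing, which is why a separate scalability estimate is needed for marginal decisions.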
In any case, my goal with these comments is mostly just to agree that Scalability is important.
I completely agree with everything you said (and my previous comment was trying to convey a part of this, admittedly in a much less transparent way).
I agree with the spirit of this post (and have upvoted it) but I think it kind of obscures the really simple thing going on: the (expected) impact of a project is by definition the cost-effectiveness (also called efficiency) times the cost (or resources).
A 2-fold increase in one, while keeping the other fixed, is literally the same as having the roles reversed.
The question then is what projects we are able to execute: that is, both come up with an efficient idea and have the resources to execute it. When resources are scarce, you really want to squeeze as much as you can from the efficiency part. Now that we have more resources, we should be more lax, and increase our total impact by pursuing less efficient ideas that still achieve high impact. Right now it starts to look like there are far more resources ready to be deployed than projects able to absorb them.
But doubling the cost also doubles the cost (in addition to the impact), while doubling the cost-effectiveness doubles only the impact. That's a pretty big difference!
Like if we could either make 80k twice as big in terms of quality-adjusted employees while keeping impact per quality-adjusted employee constant, or do the inverse, we should very likely prefer the inverse, since that leaves more talent available for other projects. (I say "very likely" because, as noted in the post, it can be valuable to practice running big things so we're more able to run other big things.)
So I disagree that your simple summary of whatâs going on is a sufficient and clear picture (though your equation itself is obviously correct).
Separately, I agree with your second paragraph with respect to money, but mildly disagree with the final sentence specifically with respect to talent, or at least "vetted and trained" talent: that's less scarce than it used to be, but still scarce enough that it's not simply like there's a surplus relative to projects that can absorb it. (Though more project ideas or early-stage projects would still help us more productively absorb certain specific people, and I'd also say there's kind of a surplus of less vetted and trained talent.)
I simply disagree with your conclusion: it all boils down to what we have at hand. Doubling the cost-effectiveness also requires work; it doesn't happen by magic. If you are not constrained by highly effective projects which can use your resources, sure, go for it. As it seems though, we have much more resources than current small-scale projects are able to absorb, and there's a lot of "left-over" resources. Thus, it makes sense to start allocating resources to some less effective stuff.
Doubling the cost-effectiveness while maintaining cost absorbed, and doubling cost absorbed while maintaining cost-effectiveness, would both take work (scaling without dilution/breaking is also hard). Probably one tends to be harder, but that'd vary a lot between cases. But if we could achieve either for free by magic, or alternatively if we assume an equal hardness for either, then doubling cost-effectiveness would very likely be better, for the reason stated above. (And that's sufficient for "literally the same" to have been an inaccurate claim.)
I think that's just fairly obvious. Like if you really imagine you could press a button to have either effect on 80k for free or for the same cost either way, I think you really should want to press the "more cost-effective" button; otherwise you're basically spending extra talent for no reason. (With the caveat given above. Also a caveat that absorbing talent helps build their career capital; I should've mentioned that earlier. But still, that's probably less good than them doing some other option and 80k getting the extra impact without extra labour.)
As noted above, we're still fairly constrained on some resources, especially certain types of talent. We don't have left-overs of all types of resources. (E.g., I could very easily swap from my current job into any of several other high-impact jobs, but won't, because there's only one me and I think my current job is the best use of current me; and I know several other people in this position. With respect to such people, there are left-over positions/project ideas, not left-over resources-in-the-form-of-people.)
One thing to note is that I'm not convinced that Superintelligence isn't a megaproject (if we think of something as a megaproject when it's both a) very high EV and b) very high cost), if you consider Nick Bostrom et al.'s counterfactual value of time to be very high (which seems pretty plausible to me).
fwiw, I've never heard megaprojects defined in terms of the opportunity cost of inputs (such that even projects costing just a small number of actual people or actual dollars, rather than e.g. projects productively absorbing $100 million, could count).
It might be useful to have a term to match that definition/concept of yours, but I don't think the term should be "megaproject", because that term is already taken and this definition/concept is quite different. (If we had both meanings of megaproject in common use, then it would be harder to have this kind of conversation.)
Looking back, the Superintelligence objection is great. I have since resolved the question to my own satisfaction with this comment.
Thank you, this gets at something that had been bothering me about the megaprojects discourse, and your diagram articulates it very well. I also agree that efficiency is not the most important consideration once you get to a certain level of ambition.
With that said, it seems important to point out that planning/due diligence, piloting, and early-stage growth capital for potentially effective megaprojects could often still meet or exceed the efficiency bar from an expected-value standpoint, albeit with a much higher probability of failure than e.g. GiveWell's recommended charities.
[Edit: added "due diligence" to "planning", since not all megaprojects can be piloted easily.]
Yeah, I agree with that.
Though I also think your comment could be read as implying that you think megaprojects won't themselves be cost-effective / labour-effective / in other senses efficient, relative to some bar like 80k or FHI or GiveWell's recommended charities or ACE's recommended charities. (Were you indeed thinking that?)
I think I disagree with that. That is, I'd guess that at least a few megaprojects that would be worth doing if we had the right founders will also clear the relevant efficiency bar. (I also think that at least a few won't clear the relevant efficiency bar. And, of course, most megaprojects that aren't worth doing will also not clear the relevant efficiency bar.)
I haven't attempted any relevant Fermi estimates or even really properly qualitatively thought about this before. My tentative disagreement is just based on the following fuzzy thoughts:
The set "megaprojects that would be worth doing if we had the right founders" is probably fairly large, so it wouldn't be that hard for at least a few to clear those bars?
It seems plausible there are multiple ambitious ideas that could make like >1000x as large a dent in the world's problems as other things we're excited about do, while absorbing >1000x as many resources, such that overall they're similarly efficient?
Megaprojects can benefit from economies of scale
(But now that I've started to draft this reply, I realise that this might be an important question, that its answer isn't immediately obvious, and that I've hardly thought about it at all and don't feel confident about my fuzzy thoughts on it. Also, in any case, this wouldn't mean I overall disagree with your comment, and it wouldn't change my views on what I said in the post itself.)