So my understanding is as follows.
Imagine that we had these five projects (and only these projects) in the EA portfolio:
Alpha: Spend $100,000 to produce 1,000 units of impact (after which Alpha will be exhausted and will produce no more units of impact; you can’t buy it twice)
Beta: Spend $100,000,000 to produce 200,000 units of impact (after which Beta will be exhausted and will produce no more units of impact; you can’t buy it twice)
Gamma: Spend $1,000,000,000 to produce 300,000 units of impact (after which Gamma will be exhausted and will produce no more units of impact; you can’t buy it twice)
GiveDeltaly: Spend any amount of money to produce one unit of impact for each $2,000 spent (GiveDeltaly cannot be exhausted; you can buy it as many times as you want).
Research: Spend $200,000 to create a new opportunity with the same “spend X to produce Y” profile as Alpha, Beta, Gamma, or GiveDeltaly.
Early EA (say ~2013), with relatively fewer resources (we didn’t have $100M to spend), would’ve been ecstatic about Alpha, because it costs only $100 to buy one unit of impact, which is much better than Beta’s $500 per unit, GiveDeltaly’s $2,000 per unit, or Gamma’s $3,333.33 per unit.
But “modern” EA, with lots of money and a shortage of opportunities to spend it on, would gladly buy Alpha first but would be more excited by Beta, because Beta allows us to deploy much more of our portfolio at better cost-effectiveness.
(And no one would be excited by Gamma—even though it’s a huge megaproject, it doesn’t beat our baseline of GiveDeltaly.)
~
Now let’s think of this as allocating an EA bank account, and bring Research into play. What should we use Research for? Early EA would want us to focus our research efforts on finding another opportunity like Alpha, since it is so cost-effective! But modern EA would rather we look for opportunities like Beta: even though Beta is less cost-effective than Alpha, it can absorb 1,000x more funds!
Say we have an EA bank account with $2,000,000,000. If we followed modern EA advice and bought Alpha, bought Beta, bought Research and used it to find a second Beta, bought that second Beta, and then put the remaining $1,799,700,000 into GiveDeltaly, we’d have 1,300,850 units of impact.
But if we followed Early EA advice and bought Alpha, bought Beta, bought Research and used it to find a second Alpha, bought that second Alpha, and then put the remaining $1,899,600,000 into GiveDeltaly, we’d have 1,151,800 units of impact. Lower total impact, even though we used Research to find a more cost-effective intervention!
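For anyone who wants to check the allocation arithmetic, here is a quick sketch. The budget, project costs, and impact figures are the ones from the example above; the helper function and variable names are just my own framing:

```python
# Check of the allocation arithmetic, using the figures from the example:
# Alpha and Beta are one-shot (cost, impact) projects; Research costs $200,000;
# GiveDeltaly absorbs any remaining budget at $2,000 per unit of impact.
BUDGET = 2_000_000_000
GIVEDELTALY_COST = 2_000
RESEARCH_COST = 200_000

ALPHA = (100_000, 1_000)       # (cost, units of impact)
BETA = (100_000_000, 200_000)

def total_impact(one_shot_buys):
    """Impact from buying Research plus the listed one-shot projects,
    then putting the remaining budget into GiveDeltaly."""
    spent = RESEARCH_COST + sum(cost for cost, _ in one_shot_buys)
    impact = sum(units for _, units in one_shot_buys)
    return impact + (BUDGET - spent) // GIVEDELTALY_COST

# 'Modern EA': Research is used to find (and buy) a second Beta.
print(total_impact([ALPHA, BETA, BETA]))   # 1300850
# 'Early EA': Research is used to find (and buy) a second Alpha.
print(total_impact([ALPHA, BETA, ALPHA]))  # 1151800
```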
This implies that the scalability of the projects we identify can matter just as much as, if not more than, their cost-effectiveness! I think this scalability mindset is often missed by people who focus mainly on cost-effectiveness, and it is, IMO, the main reason to think more about megaprojects.
But this does also imply that scalability isn’t the only thing that matters—no one wants to spend a dollar on Gamma even though it is very scalable.
Very well put!
I would add that Scalability is already implicitly there in the ITN/SSN framework, at least if you take 80,000 Hours’ description of Solvability at face value (i.e. “if we doubled the resources dedicated to solving this problem, what fraction of the problem would we expect to solve?”). That said, this is just my own observation, not a common opinion.
With limited investment, more scalable projects will tend to have higher cost-effectiveness because they will still have plenty of room for more funding.
What is happening with the ‘modern’ view is that, with more wealth, Scalability matters in two ways: (i) as before, as a heuristic for the marginal value of the next dollar, and (ii) as a heuristic for how many dollars are worth pumping into an opportunity.
So, Scalability has always mattered, but it has become even more important.
I agree with your final three paragraphs, but:
You seem to be implying that Scalability was one of the terms in ITN/SSN, which I think it never was.
The Ss have been Scale and Solvability, which aren’t the same as Scalability.
IIRC, Charity Entrepreneurship does account for scalability in its own weighted factor models and frameworks, but that’s separate from ITN.
I don’t think the ITN/SSN frameworks made the points in my post or in your final three paragraphs clear.
Those are primarily frameworks for prioritizing among problems, not projects.
“if we doubled the resources dedicated to solving this problem, what fraction of the problem would we expect to solve?” doesn’t tell me how scalable a given project is. “resources dedicated to solving this problem” would mean things like total resources dedicated to solving wild animal suffering or extreme climate change risks, not resources dedicated toward a given project.
You could have cases where a given project could grow to 100 times its current size without losing much cost-effectiveness per dollar and yet the cost-effectiveness was fairly low to begin with or the problem area it’s related to isn’t very tractable.
You could also have cases where a project is very cost-effective and is in a very tractable area but isn’t very scalable.
Scale, Tractability, and Neglectedness are also often used to evaluate intervention or project ideas, but in that case Scale is used to mean things like “How big would the impacts be if the project were successful?” or “How big a problem is this aiming to tackle?”, rather than things like “How large can this project grow to while remaining somewhat cost-effective?”
Yes, what I was trying to say was that, in my opinion, the word ‘Scalability’ is a good match for 80,000 Hours’ stated definition of Solvability. In practice, though, Solvability and Tractability are not used as if they represent Scalability. I think this is a shame because: a) I think Scalability makes sense given the mathematical intuition for ITN developed by Owen Cotton-Barratt, and b) I think there is a risk of circular logic in how people use Solvability/Tractability (e.g. they judge them based on a sense of the marginal cost-effectiveness of work on a problem).
I agree that ITN/SSN are clearly framed as frameworks for problems not projects.
I agree with your examples in your point 2. I’m not sure if you’re making a larger point though? For projects we can just define scalability as: “if we doubled the resources dedicated to this project, by what fraction would we increase its impact?”.
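That doubling definition can be written down directly. A minimal sketch, where the two example impact functions (one linear, one with diminishing returns) are purely illustrative assumptions of mine, not anything from the thread:

```python
# Scalability per the definition above: the fractional increase in impact
# from doubling the resources dedicated to a project.
def scalability(impact_fn, resources):
    return impact_fn(2 * resources) / impact_fn(resources) - 1

# Illustrative (assumed) impact-as-a-function-of-spend curves:
linear = lambda spend: spend / 2_000        # GiveDeltaly-like: impact scales with money
diminishing = lambda spend: spend ** 0.5    # returns fall off as the project grows

print(scalability(linear, 1_000_000))       # 1.0: doubling resources doubles impact
print(scalability(diminishing, 1_000_000))  # ~0.414: doubling adds only ~41% more impact
```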
Regarding your point 3, for me “How big would the impacts be if the project were successful?” and “How large can this project grow to while remaining somewhat cost-effective?” are the same thing in practice. That is, my natural instinct is to define success as expanding to the limits of reasonable cost-effectiveness. I would say this is scale at the ‘solution-level’.
“How big a problem is this aiming to tackle?” is different, of course, as it’s at the ‘problem level’.
By the way, you can also define scale as “How much impact has this project had so far?”.
However you define Scale, if you then divide it by the amount of resources invested to achieve that scale, you’ll get an ‘average’ cost-effectiveness. But to get the marginal cost-effectiveness you need to factor in Scalability, because as a project grows, its impact per dollar will generally decline. Whether we call the marginal value staying close to the average good ‘solvability’ or good ‘scalability’ seems like a matter of taste.
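To illustrate the average-versus-marginal gap, here is a small sketch assuming, purely for illustration, a project with square-root diminishing returns; the functional form and the numbers are my own assumptions:

```python
import math

# Illustrative assumption: impact grows with diminishing returns.
def impact(spend):
    return 100 * math.sqrt(spend)

spend = 1_000_000

# 'Average' cost-effectiveness: total impact (Scale) divided by total resources.
average = impact(spend) / spend

# Marginal cost-effectiveness: extra impact from the *next* dollars.
delta = 1_000
marginal = (impact(spend + delta) - impact(spend)) / delta

# With diminishing returns the average overstates the value of the next dollar.
print(f"average:  {average:.4f} impact per dollar")
print(f"marginal: {marginal:.4f} impact per dollar")
```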
In any case, my goal with these comments is mostly just to agree that Scalability is important.
I completely agree with everything you said (and my previous comment was trying to convey a part of this, admittedly in a much less transparent way).