One generic back-of-the-envelope calculation from me:
Assume that when you try to do EA outreach, you get the following funnel:
- ~10% (90% CI[1] 3%-30%) of people you reach out to will be open to being influenced by EA
- ~10% (90% CI 5%-20%) of people who are reached and are open to being influenced by EA will actually take the action of learning more about EA
- ~20% (90% CI 5%-40%) of people who learn more about EA actually become EA in some meaningful way (e.g., take GWWC pledge or equivalent)
Thus, multiplying through the funnel (10% × 10% × 20%), we expect outreach to a particular person to produce ~0.002 EAs on average.
Now assume an EA has the same expected impact as a typical GWWC member, and assume a typical GWWC member donates ~$24K/yr for ~6 years, making the total value of an EA worth ~$126,000 in donations, discounting at 4%. I imagine the actual mean EA is likely more valuable than that given a long right tail of impact.
Note that these numbers are pretty much made up[2] and each number ought to be refined with further research—something I’m working on and others should too. Also keep in mind that obviously these numbers will vary a lot based on the specific type of outreach being considered and so should be modified for modeling the specific thing being done. But hopefully this is a useful example.
But basically from this you get that it's worth up to ~$252 (0.002 × ~$126,000) to market effective altruism to a particular person and still break even. So if a dinner markets EA to ten people that otherwise would not have been marketed to, it will be worth ~$2,500 to run just that one dinner. So spending $5,000 to run a bunch of dinners can make sense.
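A minimal sketch of the arithmetic above, using only the point estimates (the 90% CI ranges are ignored, and the year-by-year discounting is one plausible way to reproduce the ~$126,000 figure):

```python
# Back-of-the-envelope: expected value of marketing EA to one person,
# using the point estimates above and ignoring the 90% CI ranges.

p_open = 0.10      # open to being influenced by EA
p_learn = 0.10     # actually learns more about EA, given openness
p_convert = 0.20   # becomes an EA in some meaningful way, given learning

eas_per_person = p_open * p_learn * p_convert  # ~0.002

# Value of one EA: ~$24K/yr of donations for ~6 years, discounted at 4%/yr.
donation_per_year = 24_000
years = 6
discount_rate = 0.04
value_per_ea = sum(
    donation_per_year / (1 + discount_rate) ** t for t in range(1, years + 1)
)  # ~$126,000

break_even_per_person = eas_per_person * value_per_ea  # ~$252
break_even_per_dinner = 10 * break_even_per_person     # ~$2,500 for ten people

print(f"Expected EAs per person reached: {eas_per_person:.4f}")
print(f"Discounted value per EA:         ${value_per_ea:,.0f}")
print(f"Break-even spend per person:     ${break_even_per_person:,.0f}")
print(f"Break-even spend per dinner:     ${break_even_per_dinner:,.0f}")
```

A fuller version would sample the three funnel rates from distributions matching the stated 90% CIs and propagate the uncertainty, rather than multiplying point estimates.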
Also note that of course EA marketing is not a single-touchpoint-and-then-done-forever system, so you will frequently be spending time/money on the same person multiple times. But this is hopefully made up for by the person becoming more likely to convert (both from self-selection and from the outreach).
Note: This is personal to just me, and does not reflect the views of Rethink Priorities or the Effective Altruism Infrastructure Fund or any other EA institution.
Hopefully even though a lot of this is completely made up, it's useful as a scaffold/demonstration and eventually we can collect more data to try to refine these numbers.

[2] According to me, using my intuition forecaster powers.
> But basically from this you get it being worth ~$252 to market effective altruism to a particular person and break even.

I don’t think that’s how it works. Your reasoning here is basically the same as “I value having Internet connection at $50,000/year, so it’s worth it for me to pay that much for it.”
The flaw is that, taking the market price of a good/service as given, your willingness to pay for it only dictates whether you should get it, not how much you should pay for it. If you value people at a certain level of talent at $1M/career, that only means that, so long as it’s not impossible to recruit such talent for less than $1M, you should recruit it. But if you can recruit it for $100,000, whether you value it at $100,001 or $1M or $10^10 does not matter: you should pay $100,000, and no more. Forgoing consumer surplus has opportunity costs.
To put it more explicitly: suppose you value 1 EA with talent X at $1M. Suppose it is possible to recruit, in expectation, one such EA for $100,000. If you pay $1M/EA instead, the opportunity cost of doing so is 10 EAs for each person you recruit, so the expected value of the action is −9 EAs per recruit, and you are in no way breaking even.
Of course, the assumption I made in the previous paragraph, that both the value of an EA and the cost of recruiting one are constant, does not reflect reality: if we had a million EAs, the cost of an additional recruit would be higher and its value would be lower, if we hold other EA assets constant, and so the opportunity cost isn’t constant. But my main point, that you should pay no more than the market price for goods and services if you want to break even (taking into account time costs and everything), still stands.
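A toy numeric version of the comparison above (same $1M valuation and $100,000 recruiting cost as in the example; the fixed-budget framing is just one way to show the opportunity cost):

```python
# Toy comparison: a fixed recruiting budget under two willingness-to-pay policies.
budget = 1_000_000                 # total spend on recruiting
market_cost_per_ea = 100_000       # expected cost to actually recruit one EA
your_valuation_per_ea = 1_000_000  # what you would be willing to pay

recruits_at_market_price = budget / market_cost_per_ea       # 10 EAs
recruits_at_your_valuation = budget / your_valuation_per_ea  # 1 EA

# Paying your full valuation forgoes 9 EAs for every one you recruit.
print(recruits_at_market_price - recruits_at_your_valuation)  # 9.0
```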
I agree with what you are saying: yes, ideally we should rank-order all the possible ways to market EA and only take those that get the best (quality-adjusted) EAs per dollar spent, regardless of our value of EAs—that is, we should maximize return on investment.
**However, in practice, as we do not yet have enough EA marketing opportunities to saturate our billions of dollars in potential marketing budget, it would be an easier decision procedure to simply fund every opportunity that meets some target ROI threshold and revise that threshold over time as we learn more about our opportunities and budget.** We’d also ideally set ourselves up to learn by doing when engaging in this outreach work.
Absolutely. And so the questions are:

- have we defined that ROI threshold?
- what is it?
- are we building ways to learn by doing into these programmes?
The discussions on this post suggest that it’s at least plausible that the answers are ‘no’, ‘anything that seems plausibly good’, and ‘no’, which I think would be concerning for most people, irrespective of where you sit on the various debates/continuums within EA.
This varies grantmaker-to-grantmaker but I personally try to get an ROI that is at least 10x better than donating the equivalent amount to AMF.
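To make that bar concrete, here is a rough sketch of how such a threshold rule might look. The dinner costs and the assumption that $1 of GWWC-style donations equals $1 of AMF-equivalent value are made-up placeholders for illustration, not figures from this thread:

```python
# Sketch of a grantmaking bar: fund an outreach opportunity only if its
# estimated return beats donating the same amount to AMF by at least 10x.

def meets_bar(estimated_value, cost, amf_value_per_dollar, multiple=10):
    """True if the opportunity produces at least `multiple` times as much
    value per dollar as donating that dollar to AMF would."""
    return (estimated_value / cost) >= multiple * amf_value_per_dollar

# Hypothetical dinners producing ~$2,520 of expected GWWC-style donations,
# with "$1 donated = 1 unit of AMF-equivalent value" as a placeholder assumption.
print(meets_bar(estimated_value=2_520, cost=200, amf_value_per_dollar=1.0))  # True  (~12.6x)
print(meets_bar(estimated_value=2_520, cost=500, amf_value_per_dollar=1.0))  # False (~5x)
```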
I’d really like to help programs build more learning by doing. That seems like a large gap worth addressing. Right now I find myself without enough capacity to do it, so hopefully someone else will do it, or I’ll eventually figure out how to get myself or someone at Rethink Priorities to work on it (especially given that we’ve been hiring a lot more).
> I imagine the actual mean EA is likely more valuable than that given a long right tail of impact.

This still sounds like a strong understatement to me – it seems that some people will have vastly more impact. Quick example that gestures in this direction: assuming that there are 5000 EAs, Sam Bankman-Fried is donating $20 billion, and all other ~~1999~~ 4999 EAs have no impact whatsoever, the mean impact of EAs is $4 million, not $126k. That’s a factor of ~30x, so a framing like “likely vastly more valuable” would seem more appropriate to me.
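A quick check of that arithmetic (the 5,000-EA count and the $20 billion figure are the illustrative assumptions from the comment above):

```python
# Mean-impact check using the commenter's illustrative assumptions.
total_donations = 20e9     # assumed: a single donor giving $20 billion
num_eas = 5_000            # assumed: total number of EAs
baseline_value = 126_000   # per-EA value from the earlier back-of-the-envelope

mean_impact = total_donations / num_eas
print(f"Mean impact per EA: ${mean_impact:,.0f}")                  # $4,000,000
print(f"Ratio to baseline:  {mean_impact / baseline_value:.0f}x")  # ~32x
```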
One reason to be lower than this per recruited EA is that you might think that the people who need to be recruited are systematically less valuable on average than the people who don’t need to be. Possibly not a huge adjustment in any case, but worth considering.
Yeah I fully agree with this; that’s partly why I wrote “gestures”. Probably should have flagged it more explicitly from the beginning.
Should be 4999
I know this isn’t your main point, but that’s ~1/10 what I would have guessed. 5k is only 3x the people who attended EAG London this year.
Personally I think going for something like 50k doesn’t make sense, as I expect that the 5k (or even 500) most engaged people will have a much higher impact than the others.
Also, my guess of how CEA/FTX are thinking about this is actually that they assume an even smaller number (perhaps 2k or so?) because they’re aiming for highly engaged people, and don’t pay as much attention to how many less engaged people they’re bringing in along the way.
Peter was using a bar of “actually become EA in some meaningful way (e.g., take GWWC pledge or equivalent)”. GWWC is 8k on its own, though there’s probably been substantial attrition.
But yes, because we expect impact to be power-lawish, if you order all plausible EAs by impact there will probably not be any especially compelling place to draw a line.