But basically from this you get that it would be worth ~$252 to market effective altruism to a particular person and still break even.
I don’t think that’s how it works. Your reasoning here is basically the same as “I value having an Internet connection at $50,000/year, so it’s worth it for me to pay that much for it.”
The flaw is that, taking the market price of a good/service as given, your willingness to pay for it only dictates whether you should get it, not how much you should pay for it. If you value people at a certain level of talent at $1M/career, that only means that, so long as it’s possible to recruit such talent for less than $1M, you should recruit it. But if you can recruit it for $100,000, whether you value it at $100,001 or $1M or $10^10 does not matter: you should pay $100,000, and no more. Forgoing consumer surplus has opportunity costs.
To put it more explicitly: suppose you value 1 EA with talent X at $1M. Suppose it is possible to recruit, in expectation, one such EA for $100,000. If you pay $1M/EA instead, the opportunity cost of doing so is 10 EAs for each person you recruit, so the expected value of the action is −9 EAs per recruit, and you are in no way breaking even.
Of course, the assumption I made in the previous paragraph, that both the value of an EA and the cost of recruiting one are constant, does not reflect reality: if we had a million EAs, the cost of an additional recruit would be higher and their value would be lower (holding other EA assets constant), so the opportunity cost isn’t constant. But my main point, that you should pay no more than the market price for goods and services if you want to break even (taking into account time costs and everything), still stands.
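A minimal sketch of this arithmetic, using the illustrative $1M valuation and $100,000 recruiting cost from above (not real figures):

```python
# Illustrative numbers only, taken from the argument above.
willingness_to_pay = 1_000_000   # you value one recruit at $1M
market_cost = 100_000            # expected cost to actually recruit one

# Willingness to pay only determines whether recruiting is worth doing at all.
should_recruit = market_cost <= willingness_to_pay   # True

# What a fixed $1M budget buys under each policy:
budget = 1_000_000
recruits_if_paying_valuation = budget / willingness_to_pay   # 1 recruit
recruits_if_paying_market = budget / market_cost             # 10 recruits

# Net effect of overpaying, in recruits, per recruit gained:
net_per_recruit = 1 - (willingness_to_pay / market_cost)     # 1 - 10 = -9

print(should_recruit, recruits_if_paying_valuation,
      recruits_if_paying_market, net_per_recruit)
```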
I agree with what you are saying: yes, ideally we should rank-order all the possible ways to market EA and take only those that get the best (quality-adjusted) EAs per dollar spent, regardless of our valuation of EAs; that is, we should maximize return on investment.
**However, in practice, as we do not yet have enough EA marketing opportunities to saturate our billions of dollars in potential marketing budget, it would be an easier decision procedure to simply fund every opportunity that meets some target ROI threshold and revise that threshold over time as we learn more about our opportunities and budget.** We’d also ideally set ourselves up to learn by doing when engaging in this outreach work.
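To make that decision procedure concrete, here is a minimal sketch; the opportunity names, ROI figures, and threshold value are all hypothetical:

```python
# Hypothetical opportunities: (name, expected quality-adjusted EAs per $1M spent).
# None of these names or numbers refer to real programmes.
opportunities = [
    ("campus outreach", 12.0),
    ("podcast ads", 3.5),
    ("online course", 8.0),
    ("billboard campaign", 0.4),
]

# Fund everything that clears the threshold rather than strictly rank-ordering,
# since the budget isn't saturated; revise the threshold as we learn more.
roi_threshold = 2.0  # illustrative cutoff

funded = [name for name, roi in opportunities if roi >= roi_threshold]
print("Fund:", funded)
```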
Absolutely. And so the questions are:
have we defined that ROI threshold?
what is it?
are we building ways to learn by doing into these programmes?
The discussions on this post suggest that it’s at least plausible that the answers are ‘no’, ‘anything that seems plausibly good’, and ‘no’, which I think would be concerning for most people, irrespective of where you sit on the various debates/continuums within EA.
This varies from grantmaker to grantmaker, but I personally try to get an ROI that is at least 10x better than donating the equivalent amount to AMF.
I’d really like to help programs build more learning by doing. That seems like a large gap worth addressing. Right now I find myself without enough capacity to do it, so hopefully someone else will do it, or I’ll eventually figure out how to get myself or someone at Rethink Priorities to work on it (especially given that we’ve been hiring a lot more).