Thanks for sharing!
I think it would be cool to have in the summary some metrics referring to the overall cost-effectiveness of your 4 programmes. I suppose these would have to refer to previous years, as the impact of your programmes is not immediate. I still think they would be helpful, as the metrics you mention now (in the summary above) refer to impact and cost separately.
Thanks for the thought!
You might be interested in the analysis we did in 2020. To pull out the phrase that I think most closely captures what you’re after:
~~~
This gave the following picture for 2018–2019 (all figures in estimated DIPY per FTE and not robust):
Website (6.5)
Podcast (4.1) and advising (3.8)
Job board (2.9) and headhunting (2.5)
~~~
We did a scrappy internal update to our 2020 analysis above, but haven't prioritised cleaning it up, coming to agreement internally, and presenting it externally. (We think that cost-effectiveness per FTE has fallen, as we say in the review, but we're not sure by how much.)
The basic reasoning for that is:
We've found that these analyses aren't especially valuable for people who are making decisions about whether to invest their resources in 80k (most importantly, donors and current or potential staff). These groups tend to either a) give these analyses little weight in their decision-making, or b) prefer to engage with the source material directly and run their own analysis, so that they're able to understand and trust the conclusions more.
The updates since 2020 don't seem dramatic to me: we haven't repeated the case studies/DIPY analysis, Open Phil hasn't repeated their survey, the user survey figures didn't update me substantially, and the results of the EA survey have seemed similar to previous years. (I'm bracketing out marketing here, which I do think seems significantly less cost-effective per dollar, though not necessarily per FTE.)
Coming to figures that we're happy to stand behind here takes a long time.
This appendix of the 2022 review might also be worth looking at—it shows FTEs and a sample of lead metrics for each programme.