Thanks for this, Ben. Two comments.
Could you explain your Impact-Adjusted Significant Plan Changes to those of us who don’t understand the system? E.g. what does a “rated-1000” plan change look like, and how does that compare to a “rated-1”? I imagine the former is something like a top maths bod going from working on nothing to working on AI safety, but that’s just my assumption. I really don’t know what these mean in practice, so some illustrative examples would be nice.
Following comments made by others about CEA’s somewhat self-flagellatory review, it seems a bit odd and unnecessarily self-critical to describe something as a challenge if you’ve consciously chosen to de-prioritise it. In this case:
“(iii) we had to abandon our target to triple IASPC, and (iv) rated-1 plan changes from introductory content didn’t grow as we stopped focusing on them.”
By analogy, it’s curious if I tell you 1) a challenge for me this year was that I didn’t run a marathon, and 2) I decided running marathons wasn’t that important to me (full-disclosure humblebrag: I did run(/walk) a marathon this year).
Hey Michael,
A typical rated-1 is someone who says they took the GWWC pledge due to us and is at the median in terms of how much we expect them to donate.
Rated-10 means we’d trade that plan change for 10 rated-1s.
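For concreteness, here’s a minimal sketch of how a yearly total could aggregate, assuming the overall IASPC figure is simply the sum of each change’s rating (that’s what the trade-off framing implies, rather than a statement of our exact bookkeeping):

```python
# Hypothetical plan-change ratings recorded over a year. Each rating
# says how many rated-1 plan changes we'd trade that change for.
plan_change_ratings = [1, 1, 10, 100, 1, 10]

# Under the summing assumption, the IASPC total is the impact-adjusted
# count: a single rated-100 contributes as much as a hundred rated-1s.
iaspc_total = sum(plan_change_ratings)
print(iaspc_total)  # 123
```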
You can see more explanation of typical rated-10 and higher plan changes from 2017 here: https://80000hours.org/2017/12/annual-review/#what-did-the-plan-changes-consist-of
Some case studies of top plan changes here: https://80000hours.org/2017/12/annual-review/#value-of-top-plan-changes
Unfortunately, many of the details are sensitive, so we don’t publicly release most of our case studies.
We also intend for our ratings to roughly line up with how many “donor dollars” each plan change is worth. Our latest estimates were that a rated-1 plan change is worth $7,000 in donor dollars on average, whereas a rated-100 is worth over $1m, i.e. it’s equal in value to an additional $1m donated to where our donors would have given otherwise.
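As a rough sketch of that conversion (the lookup table and example portfolio below are illustrative, and note the two quoted figures imply the dollar mapping isn’t exactly linear in the rating, since 100 × $7,000 would be $700,000 rather than $1m+):

```python
# Rough donor-dollar values per rating band. Only the rated-1 ($7,000)
# and rated-100 (over $1m, so a lower bound) figures come from the
# estimates above; any other band would need its own estimate.
DONOR_DOLLARS_BY_RATING = {
    1: 7_000,
    100: 1_000_000,
}

def donor_dollar_value(ratings):
    """Estimated donor-dollar value of a list of plan-change ratings."""
    return sum(DONOR_DOLLARS_BY_RATING[r] for r in ratings)

# e.g. two rated-1s plus one rated-100:
print(donor_dollar_value([1, 1, 100]))  # 1014000
```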
With the IASPC target, I listed it as a mistake rather than merely a reprioritisation because:
We could have anticipated some of these problems earlier if we had spent more time thinking about our plans and metrics, which would have made us more effective for several months.