I just wanted to say, as a wannabe cause prioritizer: congratulations! I’ve been very impressed with your team’s productivity and process. I really admire working under time pressure to produce actionable and practical results, even if the work is hard and the results highly uncertain and probably wrong! :D
I’ve left critical comments, but I don’t want that to distract from me telling you how impressive you guys have been (and I don’t use that term lightly), and I really hope you continue to do this and improve upon your results.
My understanding of why MIRI’s expected returns didn’t come out on top is that you have a strong prior against any org being able to do that much good, and because MIRI’s expected impact was so high-variance (i.e., uncertain), it didn’t cause your model to update very much in any particular direction.
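If I’m reading that right, the mechanism is just standard Bayesian shrinkage: the noisier the estimate, the less it moves a sceptical prior. Here’s a minimal sketch of that mechanism, assuming (purely for illustration, not as your actual model) normal distributions over log cost-effectiveness; all the numbers are made up.

```python
def posterior_mean(prior_mean, prior_var, estimate, estimate_var):
    """Posterior mean when a normal prior is combined with a normal
    estimate: a precision-weighted average of the two means."""
    prior_precision = 1.0 / prior_var
    estimate_precision = 1.0 / estimate_var
    return (prior_precision * prior_mean + estimate_precision * estimate) / (
        prior_precision + estimate_precision
    )

# Sceptical prior: most orgs have roughly average (log) cost-effectiveness.
prior_mean, prior_var = 0.0, 1.0

# A precise estimate of high impact moves the posterior a lot...
print(posterior_mean(prior_mean, prior_var, estimate=5.0, estimate_var=0.5))    # ~3.33

# ...but the same headline estimate with huge variance barely moves it at all.
print(posterior_mean(prior_mean, prior_var, estimate=5.0, estimate_var=100.0))  # ~0.05
```

So a very high but very uncertain estimate, like the one for MIRI, ends up with a posterior that sits close to the prior.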
What confuses me is this: it feels like if I hadn’t thought of astronomical waste / x-risk, and had found a great org like AMF, then hearing those arguments should make me update strongly towards thinking I’d been looking at the wrong areas. Yet if the argument that an org has high potential simply cancels out against your prior, I could’ve been right the whole time, even before I took far-future considerations into account.

That seems implausible. The whole point of astronomical waste is that it should raise your estimate of the probability that you can have an outsized impact.
I’m not sure which part of your model I’m disagreeing with, but if you can tell, I’d appreciate knowing.
Regrettably, we were not able to choose shortlisted organisations as planned. My original intention was that we would choose organisations in a systematic, principled way, shortlisting those which had highest expected impact given our evidence by the time of the shortlist deadline. This proved too difficult, however, so we resorted to choosing the shortlist based on a mixture of our hunches about expected impact and the intellectual value of finding out more about an organisation and comparing it to the others.
[...]
Later, we realised that understanding the impact of the Good Food Institute was too difficult, so we replaced it with Animal Charity Evaluators on our shortlist. Animal Charity Evaluators finds and advocates for highly effective opportunities to improve the lives of animals.
If quantitative models were used for these decisions, I’d be interested in seeing them.
That second quote in particular seems to be a good example of what some might call measurability bias. Understandable, of course—it’s hard to give out a prize on the basis of raw hunches—but nevertheless we should work towards finding ways to avoid it.
Kudos to OPP for being so transparent in their thought process though!