I think “this approach” (making uncertainty explicit) is important, necessary, and correct...
I’d pair it with “letting the user specify parameters/distributions over the morally uncertain inputs” (and perhaps even their subjective beliefs about different types of evidence).
I think (epistemic basis: mostly gut feeling) it will likely make a difference in how charities and interventions rank against each other. At first pass, it may lead to ‘basically the same ranking’ (or at least, not a strong change). But I suspect that if it is made part of a longer-term, careful practice, some things will switch order, and that is meaningful.
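To make that concrete, here is a minimal sketch of what user-specified distributions could look like in a simple Monte Carlo comparison (Python). The charities, parameter names, and all numbers below are my own illustrative placeholders, not GiveWell’s figures or methodology:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000  # Monte Carlo draws

# --- User-specified inputs (all numbers are illustrative placeholders) ---

# Empirical parameter with explicit uncertainty:
# deaths averted per $100k for a hypothetical health charity.
deaths_averted_per_100k = rng.lognormal(mean=np.log(3.0), sigma=0.5, size=n)

# Moral-uncertainty parameter: how many "consumption-doubling years" one
# averted death is worth. The user supplies a distribution, not a point value.
moral_weight_death = rng.lognormal(mean=np.log(100.0), sigma=0.7, size=n)

# Charity A (health): value per $100k, in consumption-doubling-year units.
charity_a = deaths_averted_per_100k * moral_weight_death

# Charity B (cash transfers): modelled directly in the same units.
charity_b = rng.lognormal(mean=np.log(350.0), sigma=0.3, size=n)

print(f"P(A > B)   = {np.mean(charity_a > charity_b):.2f}")
print(f"median A/B = {np.median(charity_a / charity_b):.2f}")
```

The useful output is not a single ranking but a probability that one option beats the other; a user who plugs in a different distribution for the moral weight may well see the order flip.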
It will also enable evaluation of a wider set of charities/interventions. If we make uncertainty explicit, we can feel more comfortable evaluating cases where there is much less empirical evidence.
So I think ‘some organization’ should be doing this, and I expect it will happen soon, whether that is GiveWell or someone else.