For what it’s worth, I’m a philosopher and I’ve not only offered to help GiveWell improve its moral weights, but repeatedly pressed them to do so over the years. I’m not sure why, but they’ve never shown any interest. I’ve since given up. Perhaps others will have more luck.
Thanks for the comment! I really enjoyed reading the work on WELLBYs by HLI.
I personally think GiveWell fell into the epistemic trap of prioritizing current functionality (even when it rests on an unjustified belief) over the potential counterfactual impact of establishing something new. I think they know it's bad, but are unaware of how bad their moral weights currently are.