I agree this is valuable, thank you for doing this.
I’ll just echo something Matt said about possible lack of independence...
Prior to doing our formal Delphi process for determining our moral weights, we at SoGive had been using a placeholder set of moral weights. The placeholder was heavily influenced by GiveWell’s moral weights.
Our process did then incorporate lots of other perspectives, including a survey of the EA community, and a survey of the wider population, as well as explicit exhortations to think things through independently. Despite all these things, I think it’s possible that our process might have ended up anchoring on the previous placeholder weights, i.e. indirectly anchoring on GiveWell’s moral weights. I don’t think anyone in the team was looking at or aware of FP’s or HLI’s moral weights, so I don’t expect there was any direct influence there.