I also mention this in my response to your other comment, but in case others missed it: my current best guess for how we can reasonably compare across cause areas is to use something like WALYs. For animals, my guess is we’ll adjust WALYs with some measure of brain complexity.
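To make the idea concrete, here’s a minimal sketch of what that adjustment could look like. Everything here is a made-up placeholder, not a real moral-weight estimate: the complexity weights, species, and WALY figures are purely illustrative assumptions.

```python
def adjusted_walys(raw_walys: float, complexity_weight: float) -> float:
    """Scale raw WALYs by a brain-complexity weight in [0, 1]."""
    return raw_walys * complexity_weight

# Hypothetical complexity weights -- NOT real estimates.
complexity_weights = {"human": 1.0, "pig": 0.3, "chicken": 0.1}

# Hypothetical interventions: (species affected, raw WALYs per unit cost).
interventions = {
    "human_program": ("human", 100.0),
    "pig_welfare": ("pig", 500.0),
    "chicken_welfare": ("chicken", 2000.0),
}

adjusted = {
    name: adjusted_walys(walys, complexity_weights[species])
    for name, (species, walys) in interventions.items()
}
```

Under these particular made-up weights the ranking flips depending on the complexity measure chosen, which is exactly the sensitivity-to-assumptions point below.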
In general, the rankings will be highly sensitive to assumptions. Really high-quality research might reduce disagreements a little, but no matter what, there will still be plenty of disagreement about assumptions.
I mentioned in the post that the default ranking might eventually become some blend of rankings from many EA orgs. Nathan has a good suggestion below about using surveys to do this blending. A key point is that you can factor out just the differences in assumptions between two rankings and survey people about which assumptions they find most credible.
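As a rough illustration of the blending idea, here’s one way survey results could weight org rankings. All org names, scores, and weights are hypothetical; this is just a weighted average, one of many possible blending schemes.

```python
# Hypothetical cause-area scores from two orgs (placeholder values).
org_scores = {
    "org_a": {"cause_1": 0.9, "cause_2": 0.5},
    "org_b": {"cause_1": 0.6, "cause_2": 0.8},
}

# Hypothetical survey-derived credibility weights for each org's
# assumptions; assumed to sum to 1.
survey_weights = {"org_a": 0.7, "org_b": 0.3}

blended: dict[str, float] = {}
for org, scores in org_scores.items():
    for cause, score in scores.items():
        blended[cause] = blended.get(cause, 0.0) + survey_weights[org] * score
```

The survey step would ideally target just the assumptions where the two orgs differ, rather than the bottom-line rankings themselves.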
I think you highlight something really important at the end of your post about the benefit of making these assumptions explicit.