Especially around AI, there seem to be a bunch of key considerations that many people disagree about—so it’s tricky to have a strong set of agreements to do evaluation around.
One could try to make the evaluation criteria worldview-agnostic – focusing on things like the quality of their research and workplace culture – and let individuals donate to the best orgs working on problems that are high priority to them.
I think having recommendations in each subfield would make sense. But how many subfields have a consensus standard for how to evaluate such things as “quality of . . . research”?