I think it could be bad if it relies too heavily on a particular worldview for its conclusions, causing people to unnecessarily anchor on it. It also seems like it could be bad from a certain perspective if you think it could lead to preferential treatment for longtermist causes that are easier to evaluate (e.g. climate change relative to AI safety).
Probably depends on the cost and counterfactual, right? I doubt many people will think having a longtermist evaluator is bad if it’s free.