We should try to make some EA sentiments and principles (e.g., scope sensitivity, thinking hard about ethics) a core part of the AI safety field
On a literal interpretation of this statement, I disagree: I don't think deliberately injecting those principles into the field would be cost-effective. That said, I do think people should adopt those principles in AI safety (and in every other cause area).