Fair point. I’m actually pretty comfortable calling such reasoning “non-EA”, even if it led to joining pretty idiosyncratically-EA projects like alignment.
Actually, I guess there could be people attracted to specific EA projects from “non-EA” lines of reasoning across basically all cause areas?
Very reasonable, since it’s not grounded in altruism!