I also got the same feeling, but then discarded it because this is not supposed to be a prioritisation argument, simply motivation.
It doesn’t need to (suspiciously) claim that AI safety just so happens to also be best for your other interests, only that it helps there too, and that that’s nice to know :)
So long as you make your commitments based on solid rational reasoning, it’s ok to lean into sources of motivation that wouldn’t be intellectually persuasive on their own, but motivate you nonetheless.