This is really strongly giving off suspicious convergence vibes, especially “AI safety means you don’t have to choose a cause”.
Also, “AI is better than us” is kind of scary religious talk. It sounds like we are worshipping a god and trying to summon it :)
I also got the same feeling, but then discarded it because this is not supposed to be a prioritisation argument, simply motivation.
It doesn’t need to (suspiciously) claim that AI safety just so happens to also be best for your other interests, only that it helps there too, and that that’s nice to know :)
So long as you make your commitments based on solid rational reasoning, it’s ok to lean into sources of motivation that wouldn’t be intellectually persuasive on their own, but motivate you nonetheless.