A Pascal's Mugging: being pushed into actions that have a very low probability of producing value, because the reward would be extremely high in the unlikely event they did work out.
I haven’t watched the video, but I assumed it would say “AI Safety is not a Pascal’s Mugging because the probability of AI x-risk is nontrivially high.” So someone who comes into the video assuming that AI risk is a clear Pascal’s Mugging, because they view it as “a rhetorical move that introduces huge moral stakes into the world-view in order to push people into drastically altering their actions and priorities,” would be pretty unhappy with the video and feel there had been a bait-and-switch.
Yes this is the definition I would prefer.