I understand expected values, but think about what these longtermist calculations say: a tiny chance of lowering existential risk (a vanishingly small probability of improving the odds that quadzillions of happy robots will someday fill the universe) is more important than, say, stopping something like the Holocaust. Seriously. If a longtermist were alive in 1938 and knew what was going on in Nazi Germany, they would turn down the opportunity to influence public opinion and policy: “An asteroid might hit Earth someday. The numbers prove we must focus on that.”
I think a longtermist in 1938 may well have come to the conclusion that failing to oppose the Holocaust (and Nazism more broadly) would also be bad from a longtermist perspective. This is because it would increase the likelihood of a long-term totalitarian state that isn’t interested in improving the overall welfare of sentient beings.