So your argument doesn’t seem to save existential risk work. The only way to get a non-trivial P(high influence | long future) with your prior seems to be by conditioning on the additional observation that “we’re extremely early”. As I argued here, that’s somewhat sketchy to do.
As you wrote, the future being short “doesn’t necessarily imply that xrisk work doesn’t have much impact because the future might just be short in terms of people in our anthropic reference class”.
Another thought that comes to mind is that there may exist many evolved civilizations whose behavior is correlated with our behavior. If so, our deciding to work hard on reducing x-risks makes it more likely that those other civilizations would also decide—during their early centuries—to work hard on reducing x-risks.