I think an important difference is the explicitness of credences. I expect most of the short-timeline AI people to have explicit probability distributions over timelines and to behave accordingly. That would entail keeping retirement savings and the like, since (in my personal encounters) many place non-negligible probability mass on AGI arriving after their lifetime.
There are surely also "99.9% within the next 20 years" people, but they seem to fare better within their subculture than early Christians did, and they usually don't risk unemployment, starvation, or ostracism.