I read this post, where a tentative implication of recent AI advancements was:
“AI risk is no longer a future thing, it’s a ‘maybe I and everyone I love will die pretty damn soon’ thing. Working to prevent existential catastrophe from AI is no longer a philosophical discussion and requires not an ounce of goodwill toward humanity, it requires only a sense of self-preservation.”
Do you believe that or something similar? Are you living as if you believe that? What does living that life look like?
I think that "99% business as usual" will remain a "good enough" strategy for most people for several years, even if the threat of AI catastrophe or mass unemployment is imminent within the next two decades. The specifics of timelines do not really change my point: even if "99% of fully-remote jobs will be automatable in roughly 6-8 years", there are several steps between that point and most of the human workforce actually being displaced, and I suspect those steps will take another 5-20 years. Even with AGI achieved, not everything is equally tractable to automate. I suspect that AI-to-hardware timelines in particular may progress more slowly; for example, achieving reliable robotic automation may remain difficult for a number of years.