What actually changes about what you’d work on if you concluded that improving the future is more important on the current margin than trying to reduce the chance of (total) extinction (or vice versa)?
It felt surprisingly hard to come up with important examples of this, I think because there is some (suspicious?) convergence between extinction prevention and trajectory changes via improving the caution and wisdom with which we transition to ASI. That work both makes extinction less likely (through more focus on alignment and control work, and perhaps slowing capabilities progress or differentially accelerating safety-oriented AI applications) and improves the value of surviving futures (by making human takeovers, suffering digital minds, etc. less likely).
But maybe this is just looking at the wrong resolution. Breaking down 'making the ASI transition wiser': if we are mainly focused on extinction, AI control looks especially promising, but less so otherwise. Digital sentience and rights work looks better if trajectory changes dominate, though not exclusively so. Improving company and government (especially USG) understanding of the relevant issues seems good for both.
Obviously, work on asteroids, supervolcanoes, etc. looks worse if preventing extinction is less important.
Biorisk I'm less sure about: non-AI-mediated extinction from bio seems very unlikely, but what would a GCR-level pandemic do to future values? Probably ~neutral in expectation, but plausibly it could lead to the demise of liberal democratic institutions (bad), or to a post-recovery world that is more scared and more committed to global cooperation to prevent a recurrence (good).
Curious for takes from anyone!