Nate Soares’ take here was that an AI takeover would most likely lead to an “unconscious meh” scenario, where “The outcome is worse than the ‘Pretty Good’ scenario, but isn’t worse than an empty universe-shard” and “there’s little or no conscious experience in our universe-shard’s future. E.g., our universe-shard is tiled with tiny molecular squiggles (a.k.a. ‘molecular paperclips’).” By contrast, he thought humanity boosted by ASI would probably lead to a better outcome.
That was also the most common view in the comment-section polls there.
How about ‘On the margin, work on reducing the chance of our extinction is the work that most increases the value of the future’?
As I see it, the main issue with the framing in this post is that the work that most reduces the chance of extinction might be the very same work that most increases EV conditional on survival. In particular, preventing AI takeover might be the most valuable work on both counts, in which case the question would amount to comparing the overall marginal value of those takeover-prevention actions with the overall marginal value of those same actions.
(At first glance it seems like an interesting coincidence for the same actions to help the most with both, but on reflection it’s not that unusual for these to align. Being in a serious car crash is really bad both because you might die and because it could make your life much worse if you survive; similarly with serious illness. Or, for nations, cities, and tribes throughout history, losing a war and being conquered could mean the conquerors killing you or doing other bad things to you. Avoiding something bad that might be fatal can be very valuable both for avoiding death and for the value conditional on survival.)
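To put the point a bit more formally (a minimal sketch; the notation is mine, not from the post): let $S$ be the event that we survive, let $V$ be the value of the future, and set the value of extinction to zero for simplicity. Then

$$\mathbb{E}[V] = P(S)\cdot\mathbb{E}[V \mid S]$$

and the marginal effect of effort $a$ on an action splits, by the product rule, into

$$\frac{\partial\,\mathbb{E}[V]}{\partial a} \;=\; \frac{\partial P(S)}{\partial a}\,\mathbb{E}[V \mid S] \;+\; P(S)\,\frac{\partial\,\mathbb{E}[V \mid S]}{\partial a}.$$

If takeover-prevention work makes both terms positive at once, then “extinction-reduction work” and “value-conditional-on-survival work” pick out the same actions, and the comparison the post asks for collapses into comparing those actions with themselves.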