Work related to AI trajectories can still be important even if you think the expected value of the far future is net negative (as I do, relative to my roughly negative-utilitarian values). In addition to alignment, we can also work on reducing the s-risks that superintelligence could create. This work tends to differ somewhat from ordinary AI alignment, although some types of alignment work may also reduce s-risks. (Other alignment work might increase them.)
If you’re not a longtermist, or if you think we’re too clueless about the long-run future, then this work would be less worthwhile. That said, AI will be hugely disruptive even in the next few years, so we should pay some attention to it regardless of what else we’re doing.