I think the main question here is: What can we do today to make the world better in the future? If you believe AI could make the world a lot worse, or even just lock in the already existing state, it seems really valuable to work on preventing that. If you additionally believe AI could solve problems such as wild animal suffering or unhappy humans, then it seems like an even more important problem to spend your time on.
(I think this might be less clear for biorisk where the main concern really is extinction.)
“the world is not designed for humans”
I think our descendants are unlikely to be flesh-and-blood humans, but rather digital forms of sentience: https://www.cold-takes.com/how-digital-people-could-change-the-world/