Thanks so much, Alexander — it's a good thread!
Highlighting one aspect of it: I agree that being generally silent on prioritization across recommended actions is a way in which WWOTF lacks EA-helpfulness that it could have had. This is just a matter of time and space constraints. For chapters 2-7, my main aim was to respond to someone who says, “You’re saying we can improve the long-term future?!? That’s crazy!”, where my response is “Agree it seems crazy, but actually we can improve the long-term future in lots of ways!”
I wasn’t aiming to respond to someone who says “Ok, I buy that we can improve the long-term future. But what’s top-priority?” That would take another few books to do (e.g. one book alone on the magnitude of AI x-risk), and would also be less “timeless”, as our priorities might well change over the coming years.
On the “how much do AI and pandemics need longtermism” question — I respond to that line of thinking a bit here (also linked to in the OP).