(Note: I have edited this comment after finding even more reasons to agree with Neel)
I find your answer really convincing; you've changed my mind!
On truth-seeking without the whole “EA is a question”: if someone made the case for existential risks using quantitative and analytical thinking, that would work. We should simply focus on conveying these ideas in a rational and truth-seeking way.
On cause-neutrality: independently of what you said, one could argue that the probability of finding a cause even higher impact than AI and bio is extremely low, given how severe those risks are and given that we have already spent a few years on this analysis. We could have an organization focused on finding cause X, but the vast majority of people interested in reducing existential risks should just work on that directly.
On getting to EA principles and ideas: if people get interested in EA through existential risks, they can still come to EA principles later on, just as people get interested in EA through charity effectiveness and change their minds if they find something even better to work on.
Moreover, if we do more outreach that is “action oriented”, we may just find more “action oriented” people… which actually sounds good? We do need far more action.