But we want to make sure that the “truth-seeking” norms of this movement stay really really high.
I think there are two similar but distinct things here—truth-seeking and cause neutrality. Truth-seeking is the general point that ‘it’s really important to find truth, look past biases, care about evidence, etc’, while cause neutrality is the specific application of truth-seeking that recognizes impact can differ enormously between causes, and that it’s worth looking past cached thoughts and the sunk cost fallacy to stay open to moving to other causes.
I think truth-seeking can be conveyed well without cause neutrality—if you don’t truth-seek, you will be a much less effective person working on global development. I think this is pretty obvious, and the point can be made with any of the classic examples (PlayPumps, Scared Straight, etc).
People may absorb the idea of truth-seeking without cause neutrality. And I think I feel kinda fine about this? Like, I want the EA movement to still retain cause neutrality. And I’d be pro talking about it. But I’d be happy with intro materials getting people who want to work on AI and bio without grokking cause neutrality.
In particular, I want to distinguish between ‘cause switching because another cause is even more important’ and ‘cause switching because my cause is way less important than I thought’. I don’t really expect to see another cause way more important than AI or bio? Something comparably important, or maybe 2-5x more important, perhaps. But my fair value on AI extinction within my lifetime is 10-20%. This is really high!!! I don’t really see there existing future causes that are way more important than that. And, IMO, the idea of truth-seeking, conveyed well, should be sufficient to get people to notice if their cause is way less important than they thought in absolute terms (eg, if work on AI turns out to be not at all tractable).
(Note: I have edited this comment after finding even more reasons to agree with Neel)
I find your answer really convincing; you changed my mind!
On truth-seeking without the whole “EA is a question”: If someone made the case for existential risks using quantitative and analytical thinking, that would work. We should focus on just conveying these ideas in a rational and truth-seeking way.
On cause-neutrality: Independently of what you said, you could make the case that the probability of finding a cause even higher impact than AI and bio is extremely low, given how severe those risks are and given that we have already put a few years of analysis into the question. We could have an organization focused on finding cause X, but the vast majority of people interested in reducing existential risks should just focus on that directly.
On getting to EA principles and ideas: Also, if people get interested in EA through existential risks, they can still come to EA principles later on—just as people who get interested in EA through charity effectiveness can change their minds if they find something even better to work on.
Moreover, if we do more outreach that is “action oriented”, we may just find more “action oriented people”… which actually sounds good? We do need way more action.