Speaking for myself, the main reason I don’t get involved in AI stuff is because I feel clueless about what the correct action might be (and how valuable it might be, in expectation). I think there is a pretty strong argument that EA involvement in AI risk has made things worse, not better, and I wouldn’t want to make things even worse.
Yes, this was the biggest reason why I was considering exiting AI safety. I grappled with this question for several months. Complex cluelessness triggered a small identity crisis for me, haha.
“If you can’t predict the second- and third-order effects of your actions, what is the point of trying to do good in the first place?” Open Phil’s funding of OpenAI is a classic example here.
But here is why I am still going:
I’m doing no one a favour by concluding that the risk of complete intractability is too high and simply opting out. AGI is still going to happen. It’s still going to be shaped by a relatively small number of people, who will, on average, both care less about humanity and have thought less rigorously about what’s most tractable. So I’m not really doing anyone a favour by dropping out.
More concretely:
Even if object-level actions are not tractable, the expected value of meta-research still seems to significantly outweigh that of other cause areas. Positively steering the singularity remains, for me, the most important challenge of our time (assuming one subscribes to longtermism and acknowledges both the vast potential of the future and the severity of s-risks).
Even if we live in a world where there is a 99% chance of being entirely clueless about effective actions and only a 1% chance of identifying a few robust strategies, it is still highly worthwhile to focus on meta-research aimed at discovering those strategies.
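To make the arithmetic behind this explicit, here is a toy expected-value sketch. All the numbers are hypothetical, chosen only to illustrate the shape of the argument, not to reflect any real estimate:

```python
# Toy EV calculation: even a small chance of finding a robust strategy
# can dominate the expected value, given how much is at stake.
p_clueless = 0.99          # chance meta-research finds nothing actionable
p_robust = 0.01            # chance it surfaces a few robust strategies
value_if_robust = 10_000   # hypothetical value of a robust strategy (arbitrary units)
value_alternative = 50     # hypothetical value of the next-best cause area

# If we stay clueless, assume the work contributes nothing (value 0).
ev_meta = p_clueless * 0 + p_robust * value_if_robust

print(ev_meta)                      # 100.0
print(ev_meta > value_alternative)  # True: meta-research wins in expectation
```

The conclusion is sensitive to the ratio between the stakes and the probability of success: as long as `value_if_robust` is more than `value_alternative / p_robust` (here, 5,000), the 1% branch carries the calculation.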