Re: bioweapons convention: Good point, so maybe not as straightforward as I described.
Re: predicting AI: You can always decline to publish the research you are doing, or only inform safety-focused institutions about it. I agree that there are some possible downsides to knowing more precisely when AI will be developed, but there seem to be much worse downsides to not knowing when AI will be developed (mainly that nobody is preparing for it policy- and coordination-wise).
I think the biggest risk is getting governments too excited about AI. So I’m actually not super confident that any work on this is 10x more likely to be positive.
Re: policy & alignment: I'm very confident that there is some form of alignment work that is not speeding up capabilities, especially the more abstract kind. Though I agree on interpretability. On policy, I would also be surprised if every avenue of governance was as risky as you describe. In particular, laying out big-picture strategies and monitoring AI development seem pretty low-risk.
Overall, I think you have done a good job scrutinizing my claims, and I'm much less confident now. Still, I'd be really surprised if every type of longtermist work was as risky as your examples, especially for someone as safety-conscious as you are. (Actually, one very positive intervention might be criticizing different approaches and showing their downsides.)
Thanks a lot for your responses!
I share your sentiment: there must be some form of alignment work that is not speeding up capabilities, some form of longtermist work that isn’t risky… right?
Why are the examples so elusive? I think this is the core of the present forum post.
15 years ago, when GiveWell started, the search for good interventions was difficult. It required a lot of research, trials, reasoning, etc. to arrive at the current recommendations. We are at a similar point for work targeting the far future… except that we can't do experiments, don't have feedback, don't have historical examples[1], etc. This makes the question a much harder one. It also means that "do research on good interventions" isn't a good answer either, since this research is so intractable.
[1] Ian Morris, in this podcast episode, discusses to what degree history is contingent, i.e., the degree to which past events have shaped the future over long timescales.