I’ll interpret this question as “what are the most plausible ways for you to lose confidence in MIRI’s effectiveness and/or leave MIRI?” Here are a few ways that could happen for me:
I could be convinced that I was wrong about the type and quality of AI alignment research that the external community is able to do. There’s some inferential distance here, so I’m not expecting to explain my model in full, but in brief, I currently expect that there are a few types of important research that academia and industry won’t do by default. If I were convinced that either (a) there are no such gaps or (b) they will be filled by academia and industry as a matter of course, then I would downgrade my assessment of the importance of MIRI accordingly.
I could learn that our research path was doomed, for one reason or another, and simultaneously learn that repurposing our skills, experience, etc. was not worth the opportunity cost of all our time and effort.