What kinds of things, if true, would convince you that MIRI was not worth donating to? What would make you give up on MIRI?
In my current view, MIRI’s main contributions are (1) producing research on highly capable aligned AI that won’t be produced by default by academia or industry; (2) helping steer academia and industry toward working on aligned AI; and (3) producing strategic knowledge of how to reduce existential risk from highly capable AI. I think (1) and (3) are MIRI’s current strong suits. These strengths are not easy to verify without technical background and domain knowledge, but at least in my own thinking, I’m impressed enough with them to find MIRI very worthwhile to work with.
If (1) were not strong, and (2) were no stronger than it is currently, I would trust (3) somewhat less, and I would give up on MIRI. If (1) became difficult or impossible because (2) was already done, i.e. if academia and/or industry were already doing all the important safety research, I’d see MIRI as much less crucial, unless it pivoted to the remaining neglected tasks in reducing existential risk from AI. If (2) looked too difficult (though there has already been significant success, in part due to MIRI, FHI, and FLI), and (1) were not proceeding fast enough, and my “time until game-changing AI” estimates were short enough, then I’d probably do something different.
By (3), do you mean the publications that are listed under “forecasting” on MIRI’s publications page?
I’ll interpret this question as “what are the most plausible ways for you to lose confidence in MIRI’s effectiveness and/or leave MIRI?” Here are a few ways that could happen for me:
I could be convinced that I was wrong about the type and quality of AI alignment research that the external community is able to do. There’s some inferential distance here, so I won’t try to explain my model in full, but in brief, I currently expect that there are a few types of important research that academia and industry won’t do by default. If I were convinced either that (a) there are no such gaps, or that (b) they will be filled by academia and industry as a matter of course, then I would downgrade my assessment of MIRI’s importance accordingly.
I could learn that our research path was doomed, for one reason or another, and simultaneously learn that repurposing our skills, experience, etc. was not worth the opportunity cost of all our time and effort.