You will have to be sure that the researchers actually know what you mean, though. AI researchers are already concerned about accidents in the narrow sense, so they could respond positively to the idea of preventing AI accidents while having something else entirely in mind (such as keeping self-driving cars safe).
If you accept this switch to language that is appealing at the expense of precision, you will eventually end up in a motte-and-bailey situation, where the motte is the broad idea of ‘preventing accidents’ and the bailey is the specific long-term AGI scheme outlined by Bostrom and MIRI. You’ll get fewer funny looks, but only by conflating and muddling the issues.