The sink source should then be owned by a team seen as extremely responsible, reliable, and committed to safety above all else. I recommend FHI or MIRI (or both!) take on that role.
Were this to happen, these orgs would not be seen as the appropriate ‘owners’ by most folk in mainstream AI (I say this as a fan of both). Their work is not really well-known outside of EA/Bay Area circles (other than people having heard of Bostrom as the ‘superintelligence guy’).
One possible path would be for a high-reputation network to take on this role, e.g. something like the Partnership on AI’s safety-critical AI group (which includes a number of long-term safety folk as well as people focused on near-term safety), or something similar.
The process might be normalised by focusing on reviewing/advising on risky/dual-use AI research in the near term: e.g. research that highlights new ways of doing adversarial attacks on current systems, or that enables new surveillance capabilities (e.g. https://arxiv.org/abs/1808.07301). This could help set the precedents for, and establish the institutions needed for, safety review of AGI-relevant research (right now I think it would be too hard to say in most cases what would constitute a ‘risky’ piece of research from an AGI perspective, given that most of it for now would look like building blocks of fundamental research).
Great summary, thanks.