Planned summary for the Alignment Newsletter:

Should we expect AI companies to reduce risk through self-governance? This post investigates six historical cases, of which the two most successful were the Asilomar conference on recombinant DNA and the actions of Leo Szilard and other physicists in 1939 around the development of the atomic bomb. It is hard to draw confident conclusions, but the author identifies five factors that make self-governance more likely:

1. The risks are salient.
2. If self-governance doesn't happen, the government will step in with regulation (which is expected to be poorly designed).
3. The field is small, so coordination is easier.
4. There is support from gatekeepers (e.g. academic journals).
5. There is support from credentialed scientists.