Some ideas:
The publication of Nick Bostrom’s “Superintelligence” in July 2014, and its successful communication, were hugely impactful in establishing the field of AI safety, notably by drawing endorsements from Bill Gates, Stephen Hawking, and Elon Musk.
The Future of Life Institute’s organization of the Beneficial AI conferences, including facilitating the signing of the Open Letter on Artificial Intelligence and convening the Asilomar Conference, which established foundational AI principles.
Probably also the launch of several organizations focused on AI safety. See more here (though this needs prioritization and attribution to the EA movement).