Anyone have good sources on EA’s role in establishing AI Safety as a research field? (Specifically, sources that readers who don’t already trust the EA movement would find compelling.)
“Concrete Problems in AI Safety” was written by EAs and has 1.8k citations.
Some ideas:
- The publication of Nick Bostrom’s “Superintelligence” in July 2014, and its successful communication, were hugely impactful in establishing the field of AI safety, notably by drawing public endorsements from Bill Gates, Stephen Hawking, and Elon Musk.
- The Future of Life Institute’s organization of the Beneficial AI conferences, including facilitating the signing of the Open Letter on Artificial Intelligence and the Asilomar Conference, which established the foundational Asilomar AI Principles.
- Probably also the launch of several organizations focused on AI Safety. See more here (though this would need prioritization and attribution to the EA movement).
Do you have a specific definition of AI Safety in mind? From my (biased) point of view, it looks like a large fraction of the work that is explicitly branded “AI Safety” is done by people who are at least somewhat adjacent to the EA community. But this becomes much less true if you widen the definition to include all work that could be called “AI Safety” (i.e., anything that could conceivably help avoid any kind of dangerous malfunction of AI systems, including small-scale and easily fixable problems).