I think one reason it is hard to break into the AI safety field is that people are working almost exclusively on either a) alignment theory, which is very hard and largely mirrors slow-moving academic work, or b) AI governance, which is somewhat dry and also inherently slow-moving.
I think we need more diversity in the field, and more focus on creating infrastructure for collaboration. We need coordinators, strategists, ethicists, startups, people working on control problems, people working on engineering, auditors, networkers, lobbyists, and more.
--
I am a strategist. My personal top interests in AI safety are: a) industry collaboration mechanisms, b) transparency mechanisms, and c) shared ontologies and shared ethical frameworks.
None of these are traditional AI safety focus areas.
--
Diversity makes categorization harder, but we should not be purists. One example idea I was briefly working on: a self-funded, decentralized (ledger-based) insurance scheme for young people. Most would join out of job-security concerns, some out of AI safety concerns. Is this an AI safety topic, or something else? Many would say something else; then again, if implemented, it would build awareness and a mobilized network. Many small streams form a river...?