We’re currently at an unusual moment where there’s a lot of interest in AI—news coverage, investment, etc. It feels strange not to be trying to shape the conversation on AI risk more than we are now. I’m well aware that this sort of thing can backfire, and that most people are highly sceptical of attempts to keep issues like these from being “politicised”, but it might be a good idea.
If it were written by, say, Toby Ord—or anyone sufficiently detached from American left/right politics, with enough prestige, background, and experience writing books like this—I feel like it could be really valuable.
It might also be more approachable than other books covering AI risk, such as Superintelligence. And it might feel a little more concrete, since it could cover scenarios that are more near-term and easier for most people to imagine—less “sci-fi”.
I think this would be a great idea! I’d be curious to know whether someone is already working on something like this—and if not, it would be great to have.
In my understanding, going from manuscript completion to publication typically takes 1-2 years. That’s long enough for new developments in AI capabilities, regulations, and treaties to come about—and worse, AI governance is a fast-growing academic field right now. I imagine the state of the art in AI governance research and analysis frameworks could look quite different in a couple of years.
Would an AI governance book that covered the present landscape of gov-related topics (maybe like a book version of the FHI’s AI Governance Research Agenda?) be useful?
Thoughts on this?