Thanks! I thought this was great. I really like the goals of fostering a more in-depth discussion and understanding skeptics’ viewpoints.
I’m not sure about modeling a follow-up project on Skeptical Science, which is intended (in large part) to rebut misinformation about climate change. There is essentially a consensus in the scientific community that human beings are causing climate change, so a rebuttal-focused project seems appropriate there.
Is there an equally high level of expert consensus on the existential risks posed by AI?
Have all of the strongest of the AI safety skeptics’ arguments been thoroughly debunked using evidence, logic, and reason?
If the answer to either of these questions is “no,” then maybe more foundational work (in the vein of this interview project) should be done first. I like your idea of using double crux interviews to determine which arguments are the most important.
One other idea would be to invite some prominent skeptics and proponents to synthesize the best of their arguments and debate them, live or in writing, with an emphasis on clear, jargon-free language (maybe such a project already exists?).
Is there an equally high level of expert consensus on the existential risks posed by AI?
There isn’t. I think a strange but true and important fact about the problem is that it just isn’t a field of study in the same way that, e.g., climate science is, as argued in this Cold Takes post. So it’s unclear who the relevant “experts” should be. Technical AI researchers are maybe the best choice, but they’re still not a good one: they’re in the business of making progress locally, not of forecasting what progress will be globally or what effects it will have.
Thanks! I agree: AI risk is at a much earlier stage of development as a field. Even as the field develops and experts can be identified, I would not expect a very high degree of consensus. Expert consensus is more achievable for existential risks such as climate change and asteroid impacts, which can be mathematically modeled with high accuracy against historical data; there is less to dispute on empirical or logical grounds.
A campaign to educate skeptics seems appropriate for a mature field with high consensus, whereas constructively engaging skeptics supports the advancement of a nascent field with low consensus.
One other idea would be to invite some prominent skeptics and proponents to synthesize the best of their arguments and debate them, live or in writing, with an emphasis on clear, jargon-free language (maybe such a project already exists?).
This is a pretty good idea!