Policymakers and people in industry, at least until ChatGPT, had no idea what was going on (e.g., at the AI World Summit two months ago, very few people even knew about GPT-3). SOTA large language models hadn't really been properly deployed, so nobody cared about them or even knew about them (until ChatGPT, at least).
As you point out yourself, what makes people interested in developing AGI is progress in AI, not the public discussion of potential dangers. “Nobody cared about” LLMs is certainly not true—I’m pretty sure the relevant people watched them closely. That many people aren’t yet concerned about AGI, or doubt its feasibility, only means that THOSE people will not pursue it, and any public discussion will probably not change their minds. There are others who think very differently, like the people at OpenAI, DeepMind, Google, and (I suspect) a lot of others who communicate less openly about what they do.
I agree that [a common understanding of the dangers] would be a good thing to have. But the question is: is it even possible to have such a thing?
I think that within the scientific community, it’s roughly possible (but then your book/outreach medium must be highly targeted towards that community). Within the general public, I think that it’s ~impossible.
I don’t think you can easily separate the scientific community from the general public. Even scientific papers are read by journalists, who often publish about them in a simplified or distorted way. Already there are many alarming posts and articles out there, as well as books like Stuart Russell’s “Human Compatible” (which I think is very good and helpful), so keeping the lid on the possibility of AGI and its profound impacts is way too late (it was probably too late already when Arthur C. Clarke wrote “2001: A Space Odyssey”). Not talking about the dangers of uncontrollable AI for fear that this may lead to certain actors investing even more heavily in the field is both naive and counterproductive in my view.
And I would strongly recommend not publishing your book as long as you haven’t done that.
I will definitely publish it, but I doubt very much that it will have a large impact. There are many other writers out there with a much larger audience who write similar books.
I also hope that a lot of people who have thought about these issues have proofread your book, because it’s the kind of thing that could substantially increase P(doom).
I’m currently in the process of translating it to English so I can do just that. I’ll send you a link as soon as I’m finished. I’ll also invite everyone else in the AI safety community (I’m probably going to post an invite on LessWrong).
Concerning the Putin quote, I don’t think that Russia is at the forefront of development, but China certainly is. Xi has said similar things in public, and I doubt very much that we know how much they currently spend on training their AIs. The quotes are not relevant, though; I just mentioned them to make the point that there is already a lot of discussion about the enormous impact AI will have on our future. I really can’t see how discussing the risks should be damaging, while discussing the great potential of AGI for humanity should not.
“Nobody cared about” LLMs is certainly not true—I’m pretty sure the relevant people watched them closely.
What do you mean by “the relevant people”? I would love for us to talk about specifics here and operationalize what we mean. I’m pretty sure E. Macron hasn’t thought deeply about AGI (i.e., has never thought for more than an hour about timelines), and I’m at 50% that if he had any deep understanding of the changes it will bring, he would already be racing. Likewise for Israel, a country with a strong track record of becoming a leader in technologies that are crucial for defense.
That many people aren’t yet concerned about AGI, or doubt its feasibility, only means that THOSE people will not pursue it, and any public discussion will probably not change their minds.
I think you wrongly assume here that people have even understood the implications of AGI and that they can’t update at all once the first systems start being deployed. The situation in which what you say could be true is if you think that most of your arguments hold because of ChatGPT. I think it’s quite plausible that, starting with ChatGPT and probably even more in 2023, there will be deployments that make mostly everyone who matters aware of AGI. I don’t have a good sense yet of how policymakers have updated.
Already there are many alarming posts and articles out there, as well as books like Stuart Russell’s “Human Compatible” (which I think is very good and helpful), so keeping the lid on the possibility of AGI and its profound impacts is way too late
Yeah, thanks to this part, I realize that a lot of the debate should happen on specifics rather than at a high level, as we’re doing here. Thus, chatting about your book in particular will be helpful for that.
I’m currently in the process of translating it to English so I can do just that. I’ll send you a link as soon as I’m finished. I’ll also invite everyone else in the AI safety community (I’m probably going to post an invite on LessWrong).
Great! Thanks for doing that!
while discussing the great potential of AGI for humanity should not.
FYI, I don’t think that’s true.
Regarding our whole discussion, I realized I didn’t mention a fairly important argument: a major failure mode specifically regarding risks is the following reaction from ~any country: “Omg, China is developing bad AGIs, so let’s develop safe AGIs first!”
This can happen in two ways:
Misuse as the mainline scenario that people are envisioning. Basically, if you’re mostly concerned about misuse, racing to be the first to have AGI makes sense. And because misuse is way easier to understand than accidental risk, I expect this to be ~the default.
Overestimating one’s competence. Even if you believed in accidental AGI X-risks, you could still race, thinking that you’re better than the others, and that could increase the chances of X-risk.
Thanks a lot for engaging with my arguments. I still think that you’re substantially overconfident about the positive aspects of communicating AGI X-risks to the general public, but I appreciate that you took the time to consider and respond to my arguments.