Epistemic Status: Possibly overstated.

EDIT: Here’s a better summary of my views. https://forum.effectivealtruism.org/posts/7ZZpWPq5iqkLMmt25/aogara-s-shortform?commentId=xZFEv84LGqbRFwt4G

One of EA’s most important and unusual beliefs is that superintelligent AGI is plausibly imminent. While effective altruism is, ideally, just an ethical framework that can be paired with any set of empirical beliefs, it matters a great deal that people in this community hold extremely unusual views on the empirical question of AI progress.
Nobody I have ever met outside of the EA sphere seriously believes that superintelligent computer systems could take over the world within decades. I study computer science in school, I work in the field of data science, and everybody I know anticipates progress-as-usual for the foreseeable future. GPT-3 is an impressive language model, but it doesn’t spell world takeover anytime soon. The stock market would arguably agree: DeepMind was acquired by Google in 2014 for a reported price on the order of $400M, and more recent progress inside Google and Facebook has not been given a public valuation. The AI Impacts investigation into the history of technological progress showed just how rare it is for a single innovation to deliver decades’ worth of progress on an important metric. Much more likely, in my opinion, is a gradual acceleration of progress in AI and ML, where the 21st century sees a booming Silicon Valley but no clear “takeoff point” of discontinuous progress, and where the supposed impacts of AGI (such as automating most of the labor force or more than doubling the global GDP growth rate) may not emerge for a century or more.
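For a sense of scale on that last parenthetical, here is a quick rule-of-70 sketch of what more than doubling the growth rate would mean (the 3% and 6.5% figures are illustrative assumptions, not numbers from the post):

```python
# Rule of 70: an economy growing at r% per year doubles roughly
# every 70 / r years. Illustrative rates only.
for label, rate in [("recent global growth", 3.0),
                    ("post-AGI growth (>2x)", 6.5)]:
    doubling_years = 70 / rate
    print(f"{label}: {rate}% per year -> GDP doubles every "
          f"~{doubling_years:.0f} years")
```

Even this conservative reading of “>2x growth” would compress a generation’s worth of economic change into roughly a decade, which is part of why the claim sounds so unusual to outsiders.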
To be clear, I agree that unprecedented AI progress is possible and important. There are some strong object-level arguments, particularly Ajeya’s OpenPhil analysis comparing the computation performed by the human brain to that of our biggest computers. These arguments have helped convince influential experts to write books, conduct research, and bring attention to the problem of AGI safety. Perhaps the more persuasive argument is that however slim the chance, it cannot be ruled out, and the impact of such a transformation would be so great that some group of people should be thinking seriously about it. But it shouldn’t be a surprise when other groups don’t take the superintelligence revolution seriously, nor should it be a surprise if the revolution does not come this century.
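For concreteness, here is a back-of-envelope version of the brain-vs-computer comparison that argument builds on. The specific FLOP/s figures are my own illustrative assumptions, not numbers taken from Ajeya’s report:

```python
# A crude sketch of the hardware side of the biological anchors
# argument. All figures are rough, illustrative assumptions.

# Estimates of the brain's computation vary over several orders of
# magnitude; 1e15 FLOP/s is a commonly cited middle-of-the-road guess.
brain_flops = 1e15

# Peak performance of a top supercomputer circa 2021 (Fugaku reached
# roughly 4e17 FLOP/s at peak).
supercomputer_flops = 4e17

print(f"Supercomputer / brain ratio: {supercomputer_flops / brain_flops:.0f}x")
```

The point is not that hardware parity implies AGI, only that raw compute no longer looks like the obvious bottleneck, which is one reason timelines inside EA are shorter than elsewhere.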
Nobody I have ever met outside of the EA sphere seriously believes that superintelligent computer systems could take over the world within decades.
Yep, this roughly matches my impressions. I think very, very few people really believe that superintelligent systems will be that influential.
One notable exception, of course, would be the AGI companies themselves. I’m fairly confident that people in these groups really do think they have a good shot at making AGI, and that it will be transformative.
This would be an example of Response 1 that I listed.
As for the question “Since everyone besides the AGI companies and select longtermists doesn’t seem to think this is an issue, maybe it isn’t an issue?”: I’m specifically not that interested in discussing it here. That question is quite different and gets discussed in depth elsewhere.
But I think the discrepancy is worth examining, to better understand why society at large is doing what it’s doing.
Agreed, and I don’t have any specific explanation for why government is unconcerned with dramatic progress in AI. As usual, government seems a bit slow to catch up to the cutting edge of technological development and academic thought. Charles_Guthmann’s point about the ages of people in government seems relevant. I appreciate your response, though; I wasn’t sure if others had the same perceptions.
I think very, very few people really believe that superintelligent systems will be that influential.
A lot of prominent scientists, technologists, and intellectuals outside of EA have warned about advanced artificial intelligence too: Stephen Hawking, Elon Musk, Bill Gates, Sam Harris, and everyone on this open letter back in 2015.
I agree that the number of people really concerned about this is strikingly small given the emphasis longtermist EAs put on it. But these many counterexamples suggest that it’s not just EAs and the AGI labs being overconfident or out of left field.
Counterpoint on market sentiment: Anthropic raised a $124M Series A with few staff and no public-facing product. The money comes from a handful of individuals, including Jaan Tallinn and Eric Schmidt, which makes it more likely that unusual beliefs govern the bid (think unilateralist’s curse). But it still seems like this has to be a financial bet on the possibility of incredible AI progress.
Separate question: Anthropic seems to be composed largely of people from OpenAI, another well-funded and socially-minded AGI company. Why did they leave OpenAI?
I think market sentiment is a bit complicated. Very few investors are talking about AGI, but organizations like OpenAI still seem to think that talking about AGI is good marketing for them (for talent, and I’m sure for money later on).
I think most of the Anthropic investment was from people close to effective altruism: Jaan Tallinn, Dustin Moskovitz, and Center for Emerging Risk Research, for example. https://www.anthropic.com/news/announcement
On why those people left OpenAI, I’m not at all an expert here. I think it’s common for different teams to have different ways of seeing things and to want independence. In this case, there weren’t all that many reasons to stay part of the same org; it’s easy enough to get funding independently, as the Anthropic round shows. If Anthropic had stayed close to OpenAI, it could have been part of scaling GPT-3 and its successors, but I’m not sure how valuable that was to the team, especially compared to the freedom to do things their own way. I’d note that right now there seem to be several more people focused on technical alignment at Anthropic.