I believe you that you’re honestly speaking for your own views, and for the views of lots of other people in ML. From experience, I know that there are also lots of people in ML who do think AGI is likely to kill us all, and choose to work on advancing capabilities anyway. (With the justification Eliezer highlighted, and in many cases with other justifications, though I don’t think these are adequate.)
I’d be interested to hear your views about this, and why you don’t think superintelligence risk is a reason to pause scaling today. I can imagine a variety of reasons someone might think this, but I have no idea what your reason is, and I think conversation about this is often quite productive.
What experiences tell you that there are also lots of people in ML who do think AGI is likely to kill us all, and choose to work on advancing capabilities anyway?
It’s hard to have strong confidence in these numbers, but surveying AI developers who publish at prestigious conferences on the probability of AGI “causing human extinction or similarly permanent and severe disempowerment of the human species” often gets you numbers in the single-digit percentage points.
This is a meaningfully different claim from “likely to kill us all,” which implicitly means >50%, but it is not that different in moral terms. The optimal level of extinction risk that humanity should be willing to incur is not 0, but it should be quite low.
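As a rough back-of-the-envelope illustration (my own numbers, assuming a world population of roughly 8 billion), in expected-death terms:

$$
\begin{aligned}
p = 0.05:&\quad 0.05 \times 8\times 10^{9} \approx 4\times 10^{8} \text{ expected deaths},\\
p = 0.50:&\quad 0.50 \times 8\times 10^{9} \approx 4\times 10^{9} \text{ expected deaths}.
\end{aligned}
$$

Both figures are catastrophic; the factor-of-ten gap between them matters far less than the gap between either figure and an acceptably low level of risk.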
@Linch Have you ever met any of these engineers who work on advancing AI in spite of thinking that the “most likely result … is that literally everyone on Earth will die”?
I have never met anyone so thoroughly depraved.
Mr. Yudkowsky and @RobBensinger think our field has many such people.
I wonder if there is a disconnect in the polls. I wonder if people at MIRI have actually talked to AI engineers who admit to this abomination. What do you even say to someone so contemptible? Perhaps there are no such people.
I think it is much more likely that these MIRI folks have worked themselves into a corner of an echo chamber than it is that our field has attracted so many low-lifes who would sooner kill every last human than walk away from a job.
I don’t think I’ve met people working on AGI who have P(doom) >50%. I think I fairly often talk to people at e.g. OpenAI or DeepMind who believe it’s 0.1%-10%, however. And again, I don’t find the difference between probabilistically killing people at 5% vs. 50% that morally significant.
I don’t know how useful it is to conceptualize AI engineers who actively believe >50% P(doom) as evil or “low-lifes” while giving a pass to people who have lower probabilities of doom. My guess is that it isn’t, and that it would be better if we had an honest perspective overall. Relatedly, it’s better for people to be able to honestly admit “many people will see my work as evil but I’m doing it for xyz reasons anyway” rather than delude themselves otherwise and come up with increasingly implausible analogies, or refuse to engage at all.
I agree this is a confusing situation. My guess is most people compartmentalize and/or don’t think of what they’re doing as that critical to advancing the doomsday machine, and/or they think other people will get there first and/or they think AGI is so far away that current efforts don’t matter, etc.
I would bet that most people who work in petroleum companies[1] (and, for that matter, consumers) don’t think regularly about the consequences of their work for climate change, marketers for tobacco companies don’t think about their impact on lung cancer, Google engineers on Project Maven don’t think too hard about how their work accelerates drone warfare, etc. I quite like the book *Thank You for Smoking* for some of this mentality.
Of course, probabilistically “killing all of humanity” is axiologically worse in scope than causing lung cancer, civilian casualties from drone strikes, or (arguably) marginal effects on climate change. But scope neglect is a well-known problem with human psychology, and we shouldn’t be too surprised that people’s psychology is not extremely sensitive to magnitude.
[1] For the record, I’m not pure here and I in fact do fly.
I agree with @Linch. People find ways of rationalising what they do as being OK, whether it is working for oil companies, or tobacco companies, or arms dealers, or on AI capabilities. I don’t really consider anyone a “low-life”, but perhaps they aren’t acting rationally, if we assume doing net good for the world is an important goal for them (which it isn’t for a lot of people anyway).
I also agree that I don’t see a huge difference between the practical outworking of a 5% and a 50% probability of doom. Both probabilities should cause anyone who even thinks there is a small possibility that their work could contribute to that disaster to immediately stop and do something else. Given we are talking about existential risk, if those OpenAI or DeepMind people believe the probability is even 0.1%, then they should probably lay down their tools and reconsider.
But we are human, and have specific skills, and pride, and families to feed so we justify to ourselves doing things which are bad all the time. This doesn’t make us “low-lifes”, just flawed humans.
I do not believe @RobBensinger’s and Yudkowsky’s claim that “there are also lots of people in ML who do think AGI is likely to kill us all, and choose to work on advancing capabilities anyway.”