Ah, gotcha. Yes, I agree that if your expected reduction in p(doom) is less than around 1% per year of pause, and you assign zero value to future lives, then pausing is bad on utilitarian grounds
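One way to see where a ~1%/year break-even could come from, sketched under my own assumptions (the population and death figures below are rough illustrative numbers, not taken from either post): if you assign zero value to future lives, a year of pause costs roughly the fraction of currently living people who die during that year.

```python
# Rough break-even arithmetic behind a "~1% per year" figure.
# Assumption (mine): a zero-value-on-future-lives utilitarian counts a
# pause-year's cost as the fraction of currently living people who die in it.
world_population = 8.0e9   # illustrative round number
annual_deaths = 60e6       # rough global figure, ~0.75% of population/year

cost_of_one_year_pause = annual_deaths / world_population
print(f"lives lost per pause-year, as a fraction: {cost_of_one_year_pause:.3%}")
# If a pause-year reduces p(doom) by more than this fraction, pausing wins
# on these (deliberately narrow) utilitarian terms.
```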
Note that my post was not about my actual numerical beliefs, but about a lower bound that I considered highly defensible—I personally expect notably higher than 1%/year reduction and was taking that as given, but on reflection I at least agree that that’s a more controversial belief (I also think that a true pause is nigh impossible)
I expect there are better solutions that achieve many of the benefits of pausing while still enabling substantially better biotech research, but that’s nitpicking
I’m not super sure what you mean by individualistic. I was modelling this as utilitarian but assigning literally zero value to future people. From a purely selfish perspective: I’m in my mid-20s and my chances of dying from natural causes in the next, say, 20 years are pretty damn low, which means that, given my background beliefs about doom and timelines, slowing down AI is a great deal from my perspective. Whereas if I expected to die of old age in the next 5 years, I would be a lot more opposed
A typical 25-year-old man in the United States has around a 4.3% chance of dying before he turns 45, according to these actuarial statistics from 2019 (the most recent non-pandemic year in the data). I wouldn’t exactly call that “pretty damn low”, though opinions on these things differ. This is comparable to my personal credence that AIs will kill me in the next 20 years. And if AI goes well, it will probably make life really awesome. So from this narrowly selfish point of view I’m still not really convinced pausing is worth it.
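For what it's worth, a 4.3% cumulative figure over 20 years corresponds to a fairly small average annual rate, which may be why intuitions differ on whether it is "low". A quick sketch (the flat-hazard assumption is mine and is unrealistic, since real mortality rises with age):

```python
# Back out the flat annual mortality rate implied by a 4.3% chance of a
# 25-year-old dying before 45, assuming the same hazard every year.
p_die_by_45 = 0.043
years = 20

# Solve 1 - (1 - q)^years = p_die_by_45 for q.
annual_q = 1 - (1 - p_die_by_45) ** (1 / years)
print(f"implied flat annual mortality: {annual_q:.4%}")  # roughly 0.22%/year
```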
Perhaps more importantly: do you not have any old family members that you care about?
4% is higher than I thought! Presumably much of that comes from people with pre-existing conditions (which I don’t have), or people who get into, e.g., car accidents, which AI probably somewhat reduces, but that seems a lot more complicated and indirect to me.
But this isn’t really engaging with my cruxes. It seems pretty unlikely to me that we will pause until we have pretty capable and impressive AIs, and to me much of the non-doom probability comes from uncertainty about when we will get powerful AI and how capable it will be. I expect this to be much clearer the closer we get to these systems, or at the very least the empirical uncertainty about whether it’ll happen will have narrowed a lot. I would be very surprised if there was the political will to do anything about this before we got a fair bit closer to the really scary systems.
And yep, I totally put more than 4% chance that I get killed by AI in the next 20 years, though I can see this is a more controversial belief and one that requires higher standards of evidence to argue for. If I imagine a hypothetical world where I know that in 2 years we could have aligned superintelligent AI with 98% probability (and it would kill us all with 2% probability), or we could pause for 20 years and get that from 98% to 99%, then I guess from a selfish perspective I can kind of see your point. But I know I do value humanity not going extinct a fair amount, even if I think that total utilitarianism is silly. I also observe that I’m finding this debate kind of slippery, and I’m afraid I may be moving the goalposts: because I disagree on many counts, it’s not clear what exactly my cruxes are, versus where I’m just attacking points in what you say that seem off
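That hypothetical can be worked through numerically from the narrowly selfish view. The scenario numbers (2% vs 1% doom, 2 vs 20 years) come from the comment above; the baseline non-AI annual mortality rate is my own assumption, chosen to match the actuarial figure discussed earlier in the thread:

```python
# Selfish survival comparison for the 98%-vs-99% pause hypothetical.
# Assumption: flat baseline (non-AI) mortality of ~0.22%/year for a
# healthy 25-year-old; AI risk is resolved at the end of the wait.
annual_baseline_q = 0.0022

def p_survive(wait_years: int, p_ai_kills_me: float) -> float:
    """Probability of surviving both the waiting period and the AI transition."""
    survive_wait = (1 - annual_baseline_q) ** wait_years
    return survive_wait * (1 - p_ai_kills_me)

build_now = p_survive(2, 0.02)   # aligned with 98% probability, doom 2%
pause = p_survive(20, 0.01)      # pause moves doom from 2% to 1%
print(f"P(survive | build in 2y): {build_now:.3%}")
print(f"P(survive | pause 20y):   {pause:.3%}")
# Under these numbers the pause's 1-point doom reduction is outweighed by
# ~4 points of baseline mortality accumulated while waiting.
```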
I do think that the title of your post is broadly reasonable though. I’m an advocate for making AI x-risk cases that are premised on common sense morality like “human extinction would be really really bad”, and utilitarianism in the true philosophical sense is weird and messy and has pathological edge cases and isn’t something that I fully trust in extreme situations
I think what you’re saying about your own personal tradeoffs makes a lot of sense. Since I think we’re in agreement on a bunch of points here, I’ll just zero in on your last remark, since I think we still might have an important lingering disagreement:
I do think that the title of your post is broadly reasonable though. I’m an advocate for making AI x-risk cases that are premised on common sense morality like “human extinction would be really really bad”, and utilitarianism in the true philosophical sense is weird and messy and has pathological edge cases and isn’t something that I fully trust in extreme situations
I’m not confident, but I suspect that your perception of what common sense morality says is probably a bit inaccurate. For example, suppose you gave people the choice between the following scenarios:
In scenario A, their lifespan, along with the lifespans of everyone currently living, would be extended by 100 years. Everyone in the world would live for 100 years in utopia. At the end of this, however, everyone would peacefully and painlessly die, and then the world would be colonized by a race of sentient aliens.
In scenario B, everyone would receive just 2 more years to live. During this 2 year interval, life would be hellish and brutal. However, at the end of this, everyone would painfully die and be replaced by a completely distinct set of biological humans, ensuring that the human species is preserved.
In scenario A, humanity goes extinct, but we have a good time for 100 years. In scenario B, humanity is preserved, but we all die painfully in misery.
I suspect most people would probably say that scenario A is far preferable to scenario B, despite the fact that in scenario A, humanity goes extinct.
To be clear, I don’t think this scenario is directly applicable to the situation with AI. However, I think this thought experiment suggests that, while people might have some preference for avoiding human extinction, it’s probably not anywhere near the primary thing that people care about.
Based on people’s revealed preferences (such as how they spend their time and money), most people care a lot about themselves and their family, but not much about the human species as an abstract concept that needs to be preserved. In a way, it’s probably the effective altruist crowd that is unusual in this respect by caring so much about human extinction, since most people don’t give the topic much thought at all.
This got me curious, so I had deep research make me a report on my probability of dying from different causes. It estimates that in the next 20 years I have maybe a 1.5% to 3% chance of death, of which 0.5-1% is chronic illness, where AI would probably help a lot. Infectious disease is less than 0.1%, so it doesn’t really matter. Accidents are 0.5-1%; AI probably helps there, but it’s kind of unclear. Another 0.5-1% is “other”, mostly suicide; plausibly AI also leads to substantially improved mental health treatments, which would help there. So yeah, I buy that having AGI today vs in twenty years has small but non-trivial costs to my chances of being alive when it happens
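Those per-cause figures do roughly add up to the headline range. A quick consistency check (the bounds below are as I read them from the comment, with "less than 0.1%" taken as 0 to 0.1%):

```python
# Sum the quoted per-cause 20-year mortality ranges and compare to the
# headline 1.5%-3% figure.
causes_low_high = {
    "chronic illness":    (0.005, 0.010),
    "infectious disease": (0.000, 0.001),
    "accidents":          (0.005, 0.010),
    "other (mostly suicide)": (0.005, 0.010),
}
low = sum(lo for lo, _ in causes_low_high.values())
high = sum(hi for _, hi in causes_low_high.values())
print(f"total 20-year mortality: {low:.1%} to {high:.1%}")
# The bounds sum to 1.5%-3.1%, consistent with the quoted 1.5%-3% estimate.
```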