Even though I’ve been in the AI Safety space for ~ 2 years, I can’t shake the feeling that every living thing dying painlessly in its sleep overnight (due to AI killing us) isn’t as bad as (i.e. is ‘better’ than) hundreds of millions of people living in poverty and/or hundreds of billions of animals being tortured.
This makes me suspicious of my motivations. I think I do the work partly because I kinda feel the loss of future generations, but mainly because AI Safety still feels so neglected (and my counterfactual impact here is larger).
I don’t think s-risks play much of a role in this, although they have in the past (here, I define disempowerment due to AGI, or authoritarian use of AGI, as s-risks).
Thanks for sharing. I suspect most of the hundreds of millions of people living in poverty would disagree with you, though, and would prefer not to painlessly die in their sleep tonight.
I think it’s possible we’re talking past each other?
I don’t think he’s talking past you. His point seems to be that the vast majority of the hundreds of millions of people living in poverty both have net positive lives and don’t want to die.
Even with a purely hedonistic outlook, it wouldn’t be better for their lives to end.
Unless you are not talking about the present, but a future far worse than today’s situation?
I’m saying that on some level it feels worse to me that 700 million people suffer in poverty than every single person dying painlessly in their sleep. Or that billions of animals are in torture factories. It sounds like I’m misunderstanding Jason’s point?
I would contend they are not “suffering” in poverty overall, because most of their lives are net positive. There may be many struggles, and their lives are a lot harder than ours, but they are still better than not being alive at all.
I agree with you on the animals in torture factories, because their lives are probably net negative, unlike the 700 million in poverty.
If AI actually does manage to kill us (which I doubt), it will not involve everybody dying painlessly in their sleep. That is an assumption of the “FOOM to god AI with no warning” model, which bears no resemblance to reality.
The technology to kill everyone on earth in their sleep instantaneously does not exist now, and will not exist in the near future, even if AGI is invented. Killing everyone in their sleep is orders of magnitude more difficult than killing everyone awake, so why on earth would that be the default scenario?
I think you have a point with animals, but I don’t think the balance of human experience means that non-existence would be better than the status quo.
Will talks about this quite a lot in ch. 9 of WWOTF (“Will the future be good or bad?”). He writes:
If we assume, following the small UK survey, that the neutral point on a life satisfaction scale is between 1 and 2, then 5 to 10 percent of the global population have lives of negative wellbeing. In the World Values Survey, 17 percent of respondents classed themselves as unhappy. In the smaller skipping study of people in rich countries, 12 percent of people had days where their bad experiences outweighed the good. And in the study that I commissioned, fewer than 10 percent of people in both the United States and India said they wished they had never been born, and a little over 10 percent said that their lives contained more suffering than happiness.
So, I would guess that on either preference-satisfactionism or hedonism, most people have lives with positive wellbeing. If I were given the option, on my deathbed, to be reincarnated as a randomly selected person alive today, I would choose to do so.
And, of course, for people at least, things are getting better over time. I think animal suffering complicates this a lot.