I strongly object to the (Edit: previous) statement that my post “concludes that human extinction would be a very good thing”. I do not endorse this claim and think it’s a grave misconstrual of my analysis. My findings are highly uncertain, and, as Peter mentions, there are many potential reasons for believing human extinction would be bad even if my conclusions in the post were much more robust (e.g. lock-in effects, to name a particularly salient one).
Sorry! After Peter pointed that out I edited it to “concludes that human extinction would be a big welfare improvement”. Does that wording address your concerns?
EDIT: changed again, to “concludes that things are bad and getting worse, which suggests human extinction would be beneficial”.
EDIT: and again, to “concludes that things are bad and getting worse, which suggests efforts to reduce the risks of human extinction are misguided”.
EDIT: and again, to just “concludes that things are bad and getting worse”.
EDIT: and again, to “concludes that things are likely bad and getting worse”.
Thanks, Jeff! This helps a lot, though ideally a summary of my conclusions would acknowledge the tentativeness/uncertainty thereof, as I aim to do in the post (perhaps, “concludes that things may be bad and getting worse”).
Sorry for all the noise on this! I’ve now added “likely” to show that this is uncertain; does that work?
Hm, I’m not sure how I would have read this if it had been your original wording, but in context it still feels like an effort to slightly spin my claims to make them more convenient for your critique. So for now I’m just gonna reference back to my original post—the language therein (including the title) is what I currently endorse.
Hmm, sorry! I’m not trying to spin your claims, and I would like to have something here that you’d be happy with. Would you agree your post views “currently negative and getting more negative” as more likely than not?
(I’m not happy with “may” because it’s ambiguous between indicating uncertainty vs. possibility.)
I’m still confused by the perceived need to state this in a way that’s stronger than my chosen wording. I used “may” when presenting the top-line conclusions because the analysis is rough/preliminary, incomplete, and predicated on a long list of assumptions. I felt it was appropriate to express this degree of uncertainty when making my claims in the post, and I think that becomes all the more important when summarizing the conclusions in other contexts without mention of the underlying assumptions and other caveats.
I think we’re talking past each other a bit. Can we first try and get on the same page about what you’re claiming, and then figure out what wording is best for summarizing that?
My interpretation of the epistemic status of your post is that you did a preliminary analysis and your conclusions are tentative and uncertain, but you think global net welfare is about equally likely to be higher or lower than what you present in the post. Is that right? Or would you say the post aims to illustrate more of a worst-case scenario, to show that this is worth substantial attention, and you think global net welfare is more likely higher than modeled?
Given the pushback from multiple people, is there a reason you didn’t just take Kyle’s suggested text, “concludes that things may be bad and getting worse”?
“May” is compatible with something like “my overall view is that things are good, but I think there’s a 15% chance things are bad, which is too big to ignore”, while Kyle’s analysis (as I read it) is stronger, more like “my current best guess is that things are bad, though there’s a ton of uncertainty”.
If I were you I would remove that part altogether. As Kyle has already said, his analysis might imply that human extinction is highly undesirable.
For example, if animal welfare is significantly net negative now then human extinction removes our ability to help these animals, and they may just suffer for the rest of time (assuming whatever killed us off didn’t also kill off all other sentient life).
Just because total welfare may be net negative now and may have been decreasing over time doesn’t mean that this will always be the case. Maybe we can do something about it and have a flourishing future.
Yeah, this seems like it’s raising the stakes too much and distracting from the main argument; removed.
But his analysis doesn’t say that? He considers two quantities in determining net welfare: human experience, and the experience of animals humans raise for food. Human extinction would bring both of these to zero.
I think maybe you’re thinking his analysis includes wild animal suffering?
Fair point, but I would still disagree that his analysis implies human extinction would be good. He discusses how, on our current trajectory, we may develop digital sentience with negative welfare. The implication isn’t necessarily that we should go extinct, but perhaps that we should try to alter this trajectory so that we instead create digital sentience that flourishes.
So it’s far too simple to say that his analysis “concludes that human extinction would be a very good thing”. It is also inaccurate because, quite literally, he doesn’t conclude that.
So I agree with your choice to remove that wording.