Sorry! After Peter pointed that out I edited it to “concludes that human extinction would be a big welfare improvement”. Does that wording address your concerns?
EDIT: changed again, to “concludes that things are bad and getting worse, which suggests human extinction would be beneficial”.
EDIT: and again, to “concludes that things are bad and getting worse, which suggests efforts to reduce the risks of human extinction are misguided”.
EDIT: and again, to just “concludes that things are bad and getting worse”.
EDIT: and again, to “concludes that things are likely bad and getting worse”.
Thanks, Jeff! This helps a lot, though ideally a summary of my conclusions would acknowledge their tentativeness/uncertainty, as I aim to do in the post (perhaps, “concludes that things may be bad and getting worse”).
Sorry for all the noise on this! I’ve now added “likely” to show that this is uncertain; does that work?
Hm, I’m not sure how I would have read this if it had been your original wording, but in context it still feels like an effort to slightly spin my claims to make them more convenient for your critique. So for now I’m just gonna reference back to my original post; the language therein (including the title) is what I currently endorse.
Hmm, sorry! Not trying to spin your claims, and I would like to have something here that you’d be happy with. Would you agree your post views “currently negative and getting more negative” as more likely than not?
(I’m not happy with “may” because it’s ambiguous between indicating uncertainty vs. possibility.)
I’m still confused by the perceived need to state this in a way that’s stronger than my chosen wording. I used “may” when presenting the top-line conclusions because the analysis is rough/preliminary, incomplete, and predicated on a long list of assumptions. I felt it was appropriate to express this degree of uncertainty when making my claims in the post, and I think that becomes all the more important when summarizing the conclusions in other contexts without mention of the underlying assumptions and other caveats.
I think we’re talking past each other a bit. Can we first try to get on the same page about what you’re claiming, and then figure out what wording is best for summarizing that?
My interpretation of the epistemic status of your post is that you did a preliminary analysis and your conclusions are tentative and uncertain, but you think global net welfare is about equally likely to be higher or lower than what you present in the post. Is that right? Or would you say the post aims to illustrate more of a worst-case scenario, to show that this is worth substantial attention, and you think global net welfare is more likely higher than modeled?
Given the multiple pushbacks by different people, is there a reason you didn’t just take Kyle’s suggested text, “concludes that things may be bad and getting worse”?
“May” is compatible with something like “my overall view is that things are good, but I think there’s a 15% chance things are bad, which is too small to ignore”, while Kyle’s analysis (as I read it) is stronger, more like “my current best guess is that things are bad, though there’s a ton of uncertainty”.
If I were you I would remove that part altogether. As Kyle has already said, his analysis might imply that human extinction is highly undesirable.
For example, if animal welfare is significantly net negative now, then human extinction removes our ability to help these animals, and they may just suffer for the rest of time (assuming whatever killed us off didn’t also kill off all other sentient life).
Just because total welfare may be net negative now and may have been decreasing over time doesn’t mean that this will always be the case. Maybe we can do something about it and have a flourishing future.
Yeah, this seems like it’s raising the stakes too much and distracting from the main argument; removed.
But his analysis doesn’t say that? He considers two quantities in determining net welfare: human experience, and the experience of animals humans raise for food. Human extinction would bring both of these to zero.
I think maybe you’re thinking his analysis includes wild animal suffering?
Fair point, but I would still disagree that his analysis implies human extinction would be good. He discusses digital sentience and how, on our current trajectory, we may develop digital sentience with negative welfare. The implication isn’t necessarily that we should go extinct, but perhaps instead that we should try to alter this trajectory so that we create digital sentience that flourishes.
So it’s far too simple to say that his analysis “concludes that human extinction would be a very good thing”. It is also inaccurate because, quite literally, he doesn’t conclude that.
So I agree with your choice to remove that wording.