Love this type of research, thank you very much for doing it!
I’m confused about the following statement:
While carp and salmon have lower scores than pigs and chickens, we suspect that’s largely due to a lack of research.
Is this a species-specific suspicion? Or does a lower amount of (high-quality) research on a species generally reduce your welfare range estimate?
On average, I’d have expected the welfare range estimate to stay the same as evidence accumulates, while the level of certainty about the estimate increases.
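One minimal way to formalise that intuition (my framing, assuming the published number is roughly a posterior mean $\mathbb{E}[W \mid \text{evidence}]$ for a true welfare range $W$):

$$\mathbb{E}_{\text{evidence}}\big[\,\mathbb{E}[W \mid \text{evidence}]\,\big] = \mathbb{E}[W], \qquad \mathbb{E}_{\text{evidence}}\big[\mathrm{Var}(W \mid \text{evidence})\big] \le \mathrm{Var}(W).$$

By the laws of total expectation and total variance, more research shouldn’t move the estimate in expectation; it should only tighten it.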
If you have reason to believe that the existing research is systematically biased in a way that would lead to higher welfare range estimates with more research, do you account for this bias in your estimates?
Thank you for your response – I think you make a great case! :)
I very much agree that Pascal’s Mugging is relevant to longtermist philosophy,[1] for reasons similar to the ones you’ve stated – for example, that there is a trade-off between high existential risk and a high expected value of the future.[2]
I’m just pretty confused about whether this is the point being made by Philosophy Tube. The Pascal’s Mugging in the video has as its astronomical upside that “Super Hitler” is not born, because his birth would mean that “the future is doomed”. She doesn’t really address whether a big future is plausible or not. For me, her argument derives much of its force from the implausibly small chance of achieving that upside by preventing “Super Hitler” from being born.
And maybe I watched it too much with an eye for the relevance of Pascal’s Mugging to longtermist work on existential risk. I don’t think your version is very relevant unless existential risk work relies on astronomically large futures, which I don’t think much of it does. I think it’s quite a common-sense position that a big future is at least plausible. Perhaps not Bostromian 10^42 future lives, but the ‘more than a trillion future lives’ that Abigail Thorn uses. If we assume a long-run population of around 10 billion, then 1 trillion people would have lived after 100*80 = 8,000 years.[3] That doesn’t seem to be an absurd timeframe for humanity to reach. I think most longtermist-inspired existential risk research/efforts still work with futures that have a median outcome of only a trillion future lives.
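To spell out that back-of-the-envelope calculation (a rough sketch, assuming a steady-state population of 10 billion, an 80-year lifespan, and non-overlapping cohorts for simplicity):

$$\text{years to reach } 10^{12} \text{ lives} \approx \frac{10^{12}\ \text{lives}}{10^{10}\ \text{people alive at once}} \times 80\ \text{years per life} = 100 \times 80 = 8{,}000\ \text{years}.$$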
I omitted this from an earlier draft of the post, which in retrospect maybe wasn’t a good idea.
I’m personally confused about this trade-off. If I had a higher p(doom), I’d want more clarity about it.
I’m unsure if that’s a sensible calculation.