You refuse to commit to a belief about x, but commit to one about y and that’s inconsistent.
I would rephrase as “You say you refuse to commit to a belief about x, but seem to act as if you’ve committed to a belief about x”. Specifically, you say you have no idea about the number of future people, but it seems like you’re saying we should act as if we believe it’s not huge (in expectation). The argument for strong longtermism you’re trying to undermine (assuming we get the chance of success and the sign roughly right, which to me is more doubtful) goes through for a wide range of numbers. It seems that you’re committed to the belief that the expected number is less than 10^15, say, since you write in response “This paragraph illustrates one of the central pillars of longtermism. Without positing such large numbers of future people, the argument would not get off the ground”.
Maybe I’m misunderstanding. How would you act differently if you were confident the number was far less than 10^15 in expectation, say 10^12 (about 100 times the current population), rather than having no idea?
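To make the question concrete, here’s a minimal back-of-envelope sketch in Python. The chance of success and the near-term benchmark are hypothetical numbers I’ve picked purely for illustration, not figures from the post; the point is just that the expected-value comparison comes out the same way across a wide range of assumed future populations.

```python
# Back-of-envelope illustration only: p_success and the near-term benchmark
# are hypothetical numbers, not claims from the post being discussed.
NEAR_TERM_BENCHMARK = 5_000  # lives saved by a hypothetical near-term intervention

def expected_lives_saved(n_future_people: float, p_success: float) -> float:
    """Expected lives saved if an intervention averts extinction with
    probability p_success, assuming the sign of the effect is right."""
    return n_future_people * p_success

for n in (1e12, 1e15):  # 10^12 vs 10^15 expected future people
    ev = expected_lives_saved(n, p_success=1e-7)  # hypothetical chance of success
    comparison = ">" if ev > NEAR_TERM_BENCHMARK else "<"
    print(f"N = {n:.0e}: expected lives saved = {ev:.0e} "
          f"({comparison} the near-term benchmark of {NEAR_TERM_BENCHMARK})")
```

Under these made-up numbers the comparison comes out the same way at 10^12 as at 10^15, which is the sense in which the argument goes through for a wide range of numbers.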
I don’t think I agree—would you commit to a belief about what Genghis Khan was thinking on his 17th birthday?
(...)
… but they’d be arbitrary, so by definition don’t tell us anything about the world?
There are certainly things I would commit to believing he was not thinking about, like modern digital computers (probability > 1 − 10^−9), and I’d guess he thought about food/eating at some point during the day (probability > 0.5). Basically, either he ate that day (more likely than not) and thought about food before or while eating, or he didn’t eat and thought about food because he was hungry. Picking precise numbers would indeed be fairly arbitrary, and even these bounds are pretty arbitrary, but I think they’re useful enough to base decisions on if I had to, possibly after a sensitivity analysis.
If I were forced to bet on whether Genghis Khan thought about food on a randomly selected day during his life (randomly selected to avoid asymmetric information), I would bet yes.
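As a quick illustration of what I mean by a sensitivity analysis here, the decomposition above can be checked across ranges of inputs. The ranges below are my own illustrative guesses, nothing more:

```python
# Sensitivity check for the lower bound on P(thought about food that day),
# using the decomposition from the comment above:
#   P(food thought) = P(ate) * P(food thought | ate)
#                   + P(didn't eat) * P(food thought | hungry)
# All input ranges are illustrative guesses, not estimates from any source.
import itertools

p_ate = [0.6, 0.8, 0.95]                # he ate at some point that day
p_thought_if_ate = [0.8, 0.9, 0.99]     # thought about food before or while eating
p_thought_if_hungry = [0.7, 0.9, 0.99]  # thought about food because he was hungry

worst_case = min(
    a * t_ate + (1 - a) * t_hungry
    for a, t_ate, t_hungry in itertools.product(p_ate, p_thought_if_ate, p_thought_if_hungry)
)
print(f"Lowest P(thought about food) across these inputs: {worst_case:.2f}")
```

Even the most pessimistic combination of these guesses stays comfortably above 0.5, which is why I’d take the bet despite the arbitrariness of any single number.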
We have theories of neurophysiology, and while none of them conclusively tells us that animals definitely feel pain, I think that’s the best explanation of our current observations.
I agree, but also none of these theories tell us how much a chicken can suffer relative to humans, as far as I know, or really anything about this, which is important in deciding how much to prioritize them, if at all. There are different suggestions within the EA community for how the amount of suffering scales with brain size, and there are arguments for these, but they’re a priori and fairly weak. This is one of the most recent discussions.
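To give a sense of how much the choice of scaling rule matters, here is a rough sketch. The neuron counts are approximate published estimates, and the scaling rules are assumptions that have been proposed, not established results:

```python
# Illustration of how strongly the implied moral weight of a chicken relative
# to a human depends on the assumed scaling rule. Neuron counts are rough
# published estimates; the scaling rules are contested assumptions.
HUMAN_NEURONS = 86e9      # ~86 billion
CHICKEN_NEURONS = 220e6   # ~220 million

scaling_rules = {
    "equal weight regardless of brain size": lambda n: 1.0,
    "proportional to neuron count":          lambda n: n / HUMAN_NEURONS,
    "square root of neuron count":           lambda n: (n / HUMAN_NEURONS) ** 0.5,
}

for name, rule in scaling_rules.items():
    print(f"{name}: chicken/human weight ~ {rule(CHICKEN_NEURONS):.4f}")
```

The answers span more than two orders of magnitude, which is the sense in which the choice of scaling assumption does a lot of the work in any prioritization.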