You refuse to commit to a belief about x, but commit to one about y and that's inconsistent.
I would rephrase as "You say you refuse to commit to a belief about x, but seem to act as if you've committed to a belief about x". Specifically, you say you have no idea about the number of future people, but it seems like you're saying we should act as if we believe it's not huge (in expectation). The argument for strong longtermism you're trying to undermine (assuming we get the chance of success and sign roughly accurate, which to me is more doubtful) goes through for a wide range of numbers. It seems that you're committed to the belief that the expected number is less than 10^15, say, since you write in response "This paragraph illustrates one of the central pillars of longtermism. Without positing such large numbers of future people, the argument would not get off the ground".
Maybe I'm misunderstanding. How would you act differently if you were confident the number was far less than 10^15 in expectation, say 10^12 (about 100 times the current population), rather than having no idea?
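To make the "wide range of numbers" point concrete, here's a toy expected-value comparison. Every number in it (the budget, the risk reduction, the near-term benchmark) is hypothetical and chosen purely for illustration, not taken from your post or mine; the only point is that the comparison comes out the same way at 10^12 as at 10^15.

```python
# Toy expected-value comparison for the longtermist argument.
# Every number here is hypothetical and chosen only for illustration.

def lives_saved_longtermist(expected_future_people, risk_reduction):
    """Expected (future) lives saved by an intervention that reduces
    extinction risk by `risk_reduction` (an absolute probability)."""
    return expected_future_people * risk_reduction

def lives_saved_neartermist(budget, cost_per_life):
    """Expected lives saved by a benchmark near-term intervention."""
    return budget / cost_per_life

budget = 1e9            # $1B spent, made up
risk_reduction = 1e-6   # 1-in-a-million absolute risk reduction, made up
benchmark = lives_saved_neartermist(budget, cost_per_life=5e3)  # 2e5 lives

for n_future in (1e12, 1e15):
    longtermist = lives_saved_longtermist(n_future, risk_reduction)
    print(f"E[future people] = {n_future:.0e}: "
          f"longtermist EV = {longtermist:.0e}, benchmark = {benchmark:.0e}")

# With these made-up inputs the longtermist intervention wins at 10^15
# (10^9 vs 2*10^5 lives) and still wins at 10^12 (10^6 vs 2*10^5),
# so the conclusion doesn't hinge on the larger figure.
```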
I don't think I agree. Would you commit to a belief about what Genghis Khan was thinking on his 17th birthday?
(...)
… but they'd be arbitrary, so by definition don't tell us anything about the world?
There are certainly things I would commit to believing he was not thinking about, like modern digital computers (probability > 1 − 10^-9), and I'd guess he thought about food/eating at some point during the day (probability > 0.5). Basically, either he ate that day (more likely than not) and thought about food before or while eating, or he didn't eat and thought about food because he was hungry. Picking precise numbers would indeed be fairly arbitrary, and even my precise bounds are pretty arbitrary, but I think these bounds are useful enough to make decisions based on if I had to, possibly after a sensitivity analysis.
If I were forced to bet on whether Genghis Khan thought about food on a randomly selected day during his life (randomly selected to avoid asymmetric information), I would bet yes.
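For what it's worth, the kind of sensitivity analysis I mean is very simple; here's a sketch, where the bet terms (even odds, unit stake) are hypothetical and the only input taken from above is the lower bound of 0.5.

```python
# Sketch of a sensitivity analysis over an imprecise probability.
# The bet terms (even odds, unit stake) are hypothetical; the only input
# from the discussion above is the lower bound P(thought about food) > 0.5.

def bet_yes_ev(p_yes, stake=1.0, payout=1.0):
    """EV of betting 'yes' at even odds: win `payout` with probability
    p_yes, lose `stake` otherwise."""
    return p_yes * payout - (1 - p_yes) * stake

# Sweep plausible probabilities above the bound rather than picking one.
candidate_ps = [0.55, 0.6, 0.7, 0.8, 0.9, 0.99]
evs = [bet_yes_ev(p) for p in candidate_ps]

# The decision is robust if the EV keeps the same sign across the range.
if all(ev > 0 for ev in evs):
    print("Bet yes: positive EV everywhere above the bound.")
elif all(ev < 0 for ev in evs):
    print("Bet no: negative EV everywhere in the range.")
else:
    print("Sensitive to the precise probability; the bounds alone don't settle it.")
```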
We have theories of neurophysiology, and while none of them conclusively tells us that animals definitely feel pain, I think that's the best explanation of our current observations.
I agree, but also none of these theories tell us how much a chicken can suffer relative to humans, as far as I know, or really anything about this, which is important in deciding how much to prioritize them, if at all. There are different suggestions within the EA community for how the amount of suffering scales with brain size, and there are arguments for these, but they're a priori and fairly weak. This is one of the most recent discussions.
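To illustrate why the choice of scaling matters so much, here's a rough sketch. The neuron counts are approximate public estimates (roughly 86 billion for humans, 220 million for chickens), and the scaling functions are just commonly proposed shapes, not claims about which one is right.

```python
import math

# Relative moral weight of a chicken vs. a human under different assumed
# scalings of suffering capacity with brain size. Neuron counts are rough
# public estimates; the scaling functions are commonly proposed shapes,
# not claims about which one is correct.

HUMAN_NEURONS = 86e9     # ~86 billion
CHICKEN_NEURONS = 2.2e8  # ~220 million

scalings = {
    "equal weight per individual": lambda n: 1.0,
    "logarithmic in neuron count": lambda n: math.log(n),
    "square root of neuron count": lambda n: math.sqrt(n),
    "linear in neuron count":      lambda n: n,
}

for name, f in scalings.items():
    ratio = f(CHICKEN_NEURONS) / f(HUMAN_NEURONS)
    print(f"{name:30s} chicken/human weight ≈ {ratio:.3g}")

# The ratio runs from 1 (equal weight) through ~0.76 (log) and ~0.05 (sqrt)
# down to ~0.0026 (linear), roughly three orders of magnitude, which is
# why the choice of scaling matters so much for prioritization.
```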