It seems like you’re acting as if you’re confident that the number of people in the future is not huge, or that the interventions are otherwise not so impactful (or they do more harm than good), but I’m not sure you actually believe this. Do you?
I have no idea about the number of future people. And I think this is the only defensible position. Which interventions do you mean? My argument is that longtermism enables reasoning that de-prioritizes current problems in favour of possible, highly uncertain, future problems. Focusing on such problems prevents us from making actual progress.
It sounds like you’re skeptical of AI safety work, but what you seem to be proposing is that we should refuse to commit to beliefs on some questions (like the number of people in the future) and deprioritize longtermism as a result. But, again, doing so means acting as if we’re committed to beliefs that would make us pessimistic about longtermism.
I’m not quite sure I’m following this criticism, but I think it can be paraphrased as: You refuse to commit to a belief about x, but commit to one about y and that’s inconsistent. (Happy to revise if this is unfair.) I don’t think I agree—would you commit to a belief about what Genghis Khan was thinking on his 17th birthday? Some things are unknowable, and that’s okay. Ignorance is par for the course. We don’t need to pretend otherwise. Instead, we need a philosophy that is robust to uncertainty: one which, as I’ve argued, focuses on correcting mistakes and solving the problems in front of us.
I think you do need to entertain arbitrary probabilities
… but they’d be arbitrary, so by definition don’t tell us anything about the world?
How do we decide between human-focused charities and animal charities, given the pretty arbitrary nature of assigning consciousness probabilities to nonhuman animals and the very arbitrary nature of assigning intensities of suffering to nonhuman animals?
This is of course a difficult question. But I don’t think the answer is to assign arbitrary numbers to the consciousness of animals. We can’t pull knowledge out of a hat, even using the most complex maths possible. We have theories of neurophysiology, and while none of them conclusively tells us that animals definitely feel pain, I think that’s the best explanation of our current observations. So, acknowledging this, we are in a situation where billions of animals needlessly suffer every year according to our best theory. And that’s a massive, horrendous tragedy—one that we should be fighting hard to stop. Assigning credences to the consciousness of animals just so we can start comparing this to other cause areas is pretending to knowledge where we have none.
You refuse to commit to a belief about x, but commit to one about y and that’s inconsistent.
I would rephrase as “You say you refuse to commit to a belief about x, but seem to act as if you’ve committed to a belief about x”. Specifically, you say you have no idea about the number of future people, but it seems like you’re saying we should act as if we believe it’s not huge (in expectation). The argument for strong longtermism you’re trying to undermine (assuming we get the chance of success and the sign roughly right, which to me is more doubtful) goes through for a wide range of numbers. It seems that you’re committed to the belief that the expected number is less than 10^15, say, since you write in response “This paragraph illustrates one of the central pillars of longtermism. Without positing such large numbers of future people, the argument would not get off the ground”.
Maybe I’m misunderstanding. How would you act differently if you were confident the number was far less than 10^15 in expectation, say 10^12 (about 100 times the current population), rather than having no idea?
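To make the arithmetic concrete, here’s a rough back-of-the-envelope sketch in Python. Every number in it (the risk reduction per billion dollars, the near-term cost per life saved) is an illustrative assumption, not an estimate anyone has defended; the point is only that whether the longtermist calculation beats a near-term one depends heavily on which expected number of future people you plug in.

```python
# Purely illustrative expected-value sketch: all parameter values below are
# assumptions chosen for readability, not claims about the real world.

def longtermist_lives_per_dollar(expected_future_people,
                                 risk_reduction=1e-9,  # assumed x-risk reduction bought by `cost`
                                 cost=1e9):            # assumed cost of that reduction, in dollars
    """Expected future lives per dollar from a tiny reduction in extinction risk."""
    return expected_future_people * risk_reduction / cost

def neartermist_lives_per_dollar(cost_per_life=5e3):   # assumed cost to save one present life
    """Expected present lives per dollar from a typical near-term intervention."""
    return 1.0 / cost_per_life

for n in (1e12, 1e15, 1e18):
    lt = longtermist_lives_per_dollar(n)
    nt = neartermist_lives_per_dollar()
    print(f"expected future people = {n:.0e}: longtermist {lt:.1e} vs near-term {nt:.1e} lives/$")
```

With these made-up parameters, 10^12 expected future people leaves the near-term option well ahead, while 10^15 flips the comparison, which is why the number you act on matters even if you never state it.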
I don’t think I agree—would you commit to a belief about what Genghis Khan was thinking on his 17th birthday?
(...)
… but they’d be arbitrary, so by definition don’t tell us anything about the world?
There are certainly things I would commit to believing he was not thinking about, like modern digital computers (probability > 1 − 10^−9), and I’d guess he thought about food/eating at some point during the day (probability > 0.5). Basically, either he ate that day (more likely than not) and thought about food before or while eating, or he didn’t eat and thought about food because he was hungry. Picking precise numbers would indeed be fairly arbitrary, and even my precise bounds are pretty arbitrary, but I think these bounds are useful enough to base decisions on if I had to, possibly after a sensitivity analysis.
If I were forced to bet on whether Genghis Khan thought about food on a randomly selected day during his life (randomly selected to avoid asymmetric information), I would bet yes.
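As a toy illustration of what I mean by using bounds plus a sensitivity analysis, here’s a minimal sketch; the even-odds payoff and the credence values are assumptions for the example, nothing more.

```python
# Toy sensitivity analysis for the bet: sweep my credence over a range
# instead of committing to one precise number, and check whether the
# decision ("bet yes" at assumed even odds) changes anywhere in that range.

def expected_payoff(p_yes, stake=1.0):
    """Even-odds bet on 'yes': win +stake with probability p_yes, lose -stake otherwise."""
    return p_yes * stake - (1.0 - p_yes) * stake

for p in (0.51, 0.6, 0.75, 0.9, 0.99):
    payoff = expected_payoff(p)
    print(f"credence {p:.2f}: expected payoff {payoff:+.2f} -> {'bet yes' if payoff > 0 else 'decline'}")
```

The decision is the same for every credence above 0.5, so the imprecision in the bound doesn’t matter for this particular choice.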
We have theories of neurophysiology, and while none of them conclusively tells us that animals definitely feel pain, I think that’s the best explanation of our current observations.
I agree, but also none of these theories tell us how much a chicken can suffer relative to humans, as far as I know, or really anything about this, which is important in deciding how much to prioritize them, if at all. Within the EA community there are different suggestions for how the amount of suffering scales with brain size, and there are arguments for these, but they’re a priori and fairly weak. This is one of the most recent discussions.
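To show how much rides on the choice of scaling rule, here’s an illustrative comparison. The neuron counts are rough public estimates, and the scaling rules are just the kinds of a priori proposals I mean, not anyone’s settled view.

```python
# Illustrative only: a chicken's 'moral weight' relative to a human under a
# few a priori rules for how suffering might scale with neuron count.
import math

NEURONS = {"human": 8.6e10, "chicken": 2.2e8}  # rough estimates

def chicken_relative_weight(rule):
    h, c = NEURONS["human"], NEURONS["chicken"]
    if rule == "equal":
        return 1.0                        # suffering independent of brain size
    if rule == "logarithmic":
        return math.log(c) / math.log(h)  # scales with log of neuron count
    if rule == "square root":
        return math.sqrt(c / h)           # scales with sqrt of neuron count
    if rule == "linear":
        return c / h                      # scales directly with neuron count
    raise ValueError(rule)

for rule in ("equal", "logarithmic", "square root", "linear"):
    print(f"{rule:>11}: chicken ~ {chicken_relative_weight(rule):.4f} of a human")
```

Depending on the rule, the answer spans a factor of several hundred, which is exactly why the a priori arguments feel too weak to settle prioritization on their own.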