I share your concerns about using arbitrary numbers and your skepticism of longtermism, but I wonder if your argument here proves too much. It seems like you're acting as if you're confident that the number of people in the future is not huge, or that the interventions are otherwise not so impactful (or they do more harm than good), but I'm not sure you actually believe this. Do you?
It sounds like you're skeptical of AI safety work, but what you seem to be proposing is that we should refuse to commit to beliefs on some questions (like the number of people in the future) and then deprioritize longtermism as a result. Again, though, doing so means acting as if we're committed to beliefs that would make us pessimistic about longtermism.
I think it's fairer to say that we don't have enough reason to believe longtermist work does much good at all, or more good than harm (and, generally, to be much more skeptical of causal effects with little evidence), than it is to be extremely confident that the future won't be huge.
I think you do need to entertain arbitrary probabilities, even if you're not a longtermist, although I don't think you should commit to a single joint probability distribution. You can do a sensitivity analysis instead (see the sketch below).
Here's an example: how do we decide between human-focused charities and animal charities, given how arbitrary it is to assign consciousness probabilities to nonhuman animals, and how much more arbitrary it is to assign them intensities of suffering?
I think the analogous response to your rejection of longtermism here would be to ignore your effects on animals, not just with donations or your career but in your everyday life, too. Based on that conclusion, we could reverse-engineer what kinds of credences you would have to commit to, if you were a Bayesian, to arrive at it (and there could be multiple compatible joint distributions). And then it would turn out you're acting as if you're confident that factory-farmed chickens suffer very little (assuming you're confident in the causal effects of certain interventions/actions), and you're suggesting everyone else should act as if factory-farmed chickens suffer very little.
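To make the sensitivity-analysis point concrete, here is a minimal sketch. Every number in it (the cost figures, the credence and moral-weight ranges) is a made-up placeholder rather than an estimate anyone in this exchange is defending; the point is only that you can check whether a ranking between cause areas is robust across a range of "arbitrary" assignments instead of committing to a single one.

```python
# Illustrative sensitivity analysis: human-focused vs chicken-welfare charity.
# All figures are placeholders, not real cost-effectiveness estimates.
import itertools

COST_PER_HUMAN_DALY = 100.0   # placeholder: dollars per human DALY averted
COST_PER_CHICKEN_YEAR = 2.0   # placeholder: dollars per chicken-year of suffering averted

# Ranges over the two "arbitrary" inputs: the credence that chickens are
# conscious, and their moral weight relative to a human if they are.
consciousness_credences = [0.1, 0.3, 0.5, 0.8]
relative_moral_weights = [0.001, 0.01, 0.1, 0.5]

def human_value_per_dollar():
    # Human-DALY-equivalents averted per dollar for the human-focused charity.
    return 1.0 / COST_PER_HUMAN_DALY

def chicken_value_per_dollar(p_conscious, moral_weight):
    # Expected human-DALY-equivalents averted per dollar for the chicken charity,
    # under the stated (placeholder) assumptions.
    return (p_conscious * moral_weight) / COST_PER_CHICKEN_YEAR

for p, w in itertools.product(consciousness_credences, relative_moral_weights):
    better = "chicken" if chicken_value_per_dollar(p, w) > human_value_per_dollar() else "human"
    print(f"p(conscious)={p}, relative weight={w}: {better}-focused charity looks better")
```

If the same option wins across the whole range, the "arbitrariness" of any single number doesn't matter much for the decision; if the winner flips, that tells you exactly which assumption is doing the work.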
Hi Michael!

It seems like you're acting as if you're confident that the number of people in the future is not huge, or that the interventions are otherwise not so impactful (or they do more harm than good), but I'm not sure you actually believe this. Do you?
I have no idea about the number of future people, and I think that's the only defensible position. Which interventions do you mean? My argument is that longtermism enables reasoning that deprioritizes current problems in favour of possible, highly uncertain future problems. Focusing on such problems prevents us from making actual progress.
It sounds like you're skeptical of AI safety work, but what you seem to be proposing is that we should refuse to commit to beliefs on some questions (like the number of people in the future) and then deprioritize longtermism as a result. Again, though, doing so means acting as if we're committed to beliefs that would make us pessimistic about longtermism.
I'm not quite sure I'm following this criticism, but I think it can be paraphrased as: You refuse to commit to a belief about x, but commit to one about y and that's inconsistent. (Happy to revise if this is unfair.) I don't think I agree. Would you commit to a belief about what Genghis Khan was thinking on his 17th birthday? Some things are unknowable, and that's okay. Ignorance is par for the course. We don't need to pretend otherwise. Instead, we need a philosophy that is robust to uncertainty, which, as I've argued, is one that focuses on correcting mistakes and solving the problems in front of us.
I think you do need to entertain arbitrary probabilities
… but they'd be arbitrary, so by definition don't tell us anything about the world?
how do we decide between human-focused charities and animal charities, given how arbitrary it is to assign consciousness probabilities to nonhuman animals, and how much more arbitrary it is to assign them intensities of suffering?
This is of course a difficult question. But I don't think the answer is to assign arbitrary numbers to the consciousness of animals. We can't pull knowledge out of a hat, even using the most complex maths possible. We have theories of neurophysiology, and while none of them conclusively tells us that animals definitely feel pain, I think that's the best explanation of our current observations. So, acknowledging this, we are in a situation where billions of animals needlessly suffer every year according to our best theory. That's a massive, horrendous tragedy, and one we should be fighting hard to stop. Assigning credences to the consciousness of animals just so we can start comparing this to other cause areas is pretending to knowledge we don't have.
You refuse to commit to a belief about x, but commit to one about y and that's inconsistent.
I would rephrase it as "You say you refuse to commit to a belief about x, but seem to act as if you've committed to a belief about x." Specifically, you say you have no idea about the number of future people, but it seems like you're saying we should act as if we believe it's not huge (in expectation). The argument for strong longtermism you're trying to undermine (assuming we get the chance of success and the sign roughly right, which to me is more doubtful) goes through for a wide range of numbers. It seems that you're committed to the belief that the expected number is less than 10^15, say, since you write in response, "This paragraph illustrates one of the central pillars of longtermism. Without positing such large numbers of future people, the argument would not get off the ground."
Maybe I'm misunderstanding. How would you act differently if you were confident the number was far less than 10^15 in expectation, say 10^12 (about 100 times the current population), rather than having no idea?
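Just to illustrate the "wide range of numbers" point, here is a toy calculation. The risk-reduction-per-dollar figure is a pure placeholder (exactly the kind of arbitrary input in dispute here), but holding it fixed shows how the expected-value case scales with the assumed number of future people and why it doesn't hinge on 10^15 specifically.

```python
# Toy numbers only: how the expected-value case scales with the assumed
# number of future people. The risk-reduction figure is a placeholder.
RISK_REDUCTION_PER_BILLION_DOLLARS = 1e-6   # assumed drop in extinction probability

for future_people in (1e12, 1e15, 1e18):
    expected_lives = future_people * RISK_REDUCTION_PER_BILLION_DOLLARS
    cost_per_life = 1e9 / expected_lives
    print(f"{future_people:.0e} future people -> {expected_lives:.0e} expected lives "
          f"saved per $1B, ~${cost_per_life:,.2f} per life")
```

Under these placeholder assumptions, even 10^12 expected future people keeps the cost per expected life in competitive territory, which is why the chance of success and the sign, rather than the population figure, seem like the real crux.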
I don't think I agree. Would you commit to a belief about what Genghis Khan was thinking on his 17th birthday?
(...)
… but they'd be arbitrary, so by definition don't tell us anything about the world?
There are certainly things I would commit to believing he was not thinking about, like modern digital computers (probability > 1 - 10^-9), and I'd guess he thought about food/eating at some point during the day (probability > 0.5). Basically, either he ate that day (more likely than not) and thought about food before or while eating, or he didn't eat and thought about food because he was hungry. Picking precise numbers would indeed be fairly arbitrary, and even my bounds are pretty arbitrary, but I think they're useful enough to base decisions on if I had to, possibly after a sensitivity analysis.
If I were forced to bet on whether Genghis Khan thought about food on a randomly selected day during his life (randomly selected to avoid asymmetric information), I would bet yes.
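Here is a minimal sketch of what acting on a bound, rather than a precise credence, could look like. The even-odds bet is hypothetical; the only commitment assumed is the bound p > 0.5 from above, and the check is simply whether the decision comes out the same everywhere in that range, so no single arbitrary number has to be chosen.

```python
# Hypothetical even-odds bet: win $1 if Genghis Khan thought about food that
# day, lose $1 otherwise. Only the bound p > 0.5 is assumed; check robustness
# across the whole committed range rather than picking one precise credence.

def expected_payoff(p, win=1.0, lose=-1.0):
    return p * win + (1 - p) * lose

lower, upper = 0.5, 1.0
grid = [lower + i * (upper - lower) / 10 for i in range(11)]

worst_case = min(expected_payoff(p) for p in grid)
print(f"Worst-case expected payoff over p in [{lower}, {upper}]: {worst_case:.2f}")
print("Bet is at least break-even everywhere in the range" if worst_case >= 0
      else "Bet is not robustly good")
```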
We have theories of neurophysiology, and while none of them conclusively tells us that animals definitely feel pain, I think that's the best explanation of our current observations.
I agree, but as far as I know none of these theories tells us how much a chicken can suffer relative to a human, or really anything about this, which matters for deciding how much to prioritize them, if at all. There are different suggestions within the EA community for how the amount of suffering scales with brain size, and there are arguments for them, but they're a priori and fairly weak. This is one of the most recent discussions.
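As a rough illustration of how much the choice of scaling rule matters, here is a small sensitivity check across a few of the kinds of rules that get proposed. The neuron counts are approximate and the rules are only examples, not endorsed estimates; the spread in the implied chicken-to-human weight is the point.

```python
# Approximate neuron counts; the scaling rules below are only examples of
# proposals that get discussed, not endorsed estimates.
HUMAN_NEURONS = 8.6e10
CHICKEN_NEURONS = 2.2e8

scalings = {
    "equal weight (no scaling)": lambda n: 1.0,
    "proportional to neuron count": lambda n: n,
    "square root of neuron count": lambda n: n ** 0.5,
}

for name, f in scalings.items():
    relative_weight = f(CHICKEN_NEURONS) / f(HUMAN_NEURONS)
    print(f"{name}: chicken/human weight ~ {relative_weight:.4g}")
```

The implied weight spans roughly three orders of magnitude across these rules, which is one way of seeing why a priori arguments here leave the comparison so underdetermined.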