Because the criticism isn’t just against longtermism per se, but against longtermism in practice.
But in my original post I already acknowledged this difference. You’re repeating things I’ve already said, as if they somehow contradicted me.
Based on what are you making this claim?
Based on my general understanding of moral theory and the minimal kinds of assumptions necessary to place the highest priority on the long-run future.
Also, have you surveyed theories within virtue ethics and deontology?
I am familiar with them.
They, and many of the population ethics theories that you link, frequently still imply a greater focus on the long-term future than on other social issues.
(I don’t intend to go into more specific arguments here. If you care about this issue, go ahead and make a top-level post for it so that it can be debated in a proper context.)
At any rate, I’m not sure the number of theories is a better measure than the number of philosophers, or of ethicists specifically.
By “most” I mean the majority of theories, weighted by how popular they are. That’s what I meant by saying “across the distribution of current opinions and published literature.” That said, I don’t have a particular reason to think that support for long-term priorities comes disproportionately from popular or unpopular theories.
Are you 100% certain of a specific fully-specified ethical system? I don’t think anyone should be. If you aren’t, then shouldn’t we call that “moral uncertainty” and find ways to deal with it?
No. First, if I’m uncertain between two ethical views, I’m genuinely ambivalent about what future me should decide: there’s no ‘value of information’ here. Second, as I said in the original post, it’s a pointless and costly exercise to preemptively try to figure out a fully-specified ethical system. I think we should take the mandate that we have, to follow some kind of Effective Altruism, and then answer moral questions if and when they appear and matter in the practice of this general mandate. Moral arguments need to both be potentially convincing and carry practical ramifications for us to worry about moral uncertainty.
But in my original post I already acknowledged this difference. You’re repeating things I’ve already said, as if they somehow contradicted me.
Sorry, I should have been more explicit at the start. You responded to a few of weeatquince’s points by saying they confounded specific narrower views with longtermism as a whole, but these views are very influential within EA longtermism in practice, and the writing your OP responds to dealt with these narrower views in the first place. I don’t think weeatquince (or Phil) was confounding these narrower views with longtermism broadly understood; the point was to criticize these specific views, so longtermism being broader is beside the point. Even if they were confounding these more specific views with longtermism, that still wouldn’t invalidate the original criticisms, because these specific views do seem to get significant weight in EA longtermism in practice anyway (e.g. through 80,000 Hours).
They, and many of the population ethics theories that you link, frequently still imply a greater focus on the long-term future than on other social issues.
I don’t disagree, but the original point was about “astronomical waste-type arguments” specifically, not just priority for the long-term future or longtermism broadly understood. Maybe I’ve interpreted “astronomical waste-type arguments” more narrowly than you have: astronomical waste, to me, means roughly failing to ensure the creation of an astronomical number of happy beings. I seriously doubt that most theories or ethicists, or theories weighted by “the distribution of current opinions and published literature”, would support the astronomical waste argument, whether or not most are longtermist in some sense. Maybe most would accept Beckstead’s adjustment, but the original criticisms seemed to be pretty specific to Bostrom’s original argument, so I think that’s what you should be responding to.
I think there’s an important practical difference between longtermist views which accept the original astronomical waste argument and those that don’t: those that do take extinction to be astronomically bad, so nearer-term concerns are much more likely to be completely dominated by very small differences in extinction risk probabilities (under risk-neutral EV maximization or Maxipok, at least).
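To make the dominance point concrete, here’s a minimal sketch of the arithmetic under risk-neutral EV maximization. All of the numbers (the assumed value of the long-term future, the size of the near-term benefit, and the size of the risk reduction) are made up purely for illustration; they aren’t figures from this discussion or from Bostrom.

```python
# Minimal sketch of the "dominance" arithmetic under risk-neutral EV maximization.
# All numbers below are illustrative assumptions, not figures from this thread.

future_value = 1e35                # assumed value of the long-term future (e.g. happy lives)
near_term_benefit = 1e9            # assumed value of a large near-term intervention
extinction_risk_reduction = 1e-10  # assumed tiny reduction in extinction probability

# Expected value of the x-risk intervention: probability shift times value at stake.
ev_xrisk = extinction_risk_reduction * future_value  # 1e25
ev_near_term = near_term_benefit                     # 1e9

# Even a 1-in-10-billion shift in extinction probability swamps the near-term
# benefit by many orders of magnitude under these assumptions.
print(ev_xrisk / ev_near_term)  # 1e16
```

The specific numbers don’t matter; the point is that once the future’s value is taken to be astronomical, almost any nonzero shift in extinction probability dominates near-term considerations.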
What theories have you seen that do support the astronomical waste argument? Don’t almost all of them (weighted by popularity or not) depend on (impersonal) totalism or a slight variation of it?
Are you saying views accepting the astronomical waste argument are dominant within ethics generally?
Sorry, I should have been more explicit at the start. You responded to a few of weeatquince’s points by saying they confounded specific narrower views with longtermism as a whole, but these views are very influential within EA longtermism in practice, and the writing your OP responds to dealt with these narrower views in the first place. I don’t think weeatquince (or Phil) was confounding these narrower views with longtermism broadly understood; the point was to criticize these specific views, so longtermism being broader is beside the point. Even if they were confounding these more specific views with longtermism, that still wouldn’t invalidate the original criticisms, because these specific views do seem to get significant weight in EA longtermism in practice anyway (e.g. through 80,000 Hours).
You seem to be interpreting my post as an attempt at a comprehensive refutation, when it is not and was not presented as such. I took some arguments and explored their implications. I was quite open about the fact that some of the arguments could lead to disagreement with common Effective Altruist interpretations of long-term priorities even if they don’t refute the basic idea. I feel like you are manufacturing disagreement and I think this is a good time to end the conversation.
What theories have you seen that do support the astronomical waste argument? Don’t almost all of them (weighted by popularity or not) depend on (impersonal) totalism or a slight variation of it?
As I said previously, this should be discussed in a proper post; I don’t currently have time or inclination to go into it.
Are you saying views accepting the astronomical waste argument are dominant within ethics generally?
I answered this in previous comments.