But in my original post I already acknowledged this difference. You’re repeating things I’ve already said, as if they somehow contradicted me.
Sorry, I should have been more explicit at the start. You responded to a few of weeatquince’s points by saying they confounded specific narrower views with longtermism as a whole, but these views are very influential within EA longtermism in practice, and the writing your OP responds to dealt with these narrower views in the first place. I don’t think weeatquince (or Phil) was confounding these narrower views with longtermism broadly understood; the point was to criticize these specific views, so longtermism being broader is beside the point. Even if they were confounding these more specific views with longtermism, that wouldn’t invalidate the original criticisms, because these specific views do seem to get significant weight in EA longtermism in practice (e.g. through 80,000 Hours).
> They, and many of the population ethics theories that you link, frequently still imply a greater focus on the long-term future than on other social issues.
I don’t disagree, but the original point was about “astronomical waste-type arguments” specifically, not just priority for the long-term future or longtermism broadly understood. Maybe I’ve interpreted “astronomical waste-type arguments” more narrowly than you have: to me, astronomical waste means roughly failing to ensure the creation of an astronomical number of happy beings. I seriously doubt that most theories or ethicists, or theories weighted by “the distribution of current opinions and published literature”, would support the astronomical waste argument, whether or not most are longtermist in some sense. Maybe most would accept Beckstead’s adjustment, but the original criticisms seemed to be pretty specific to Bostrom’s original argument, so I think that’s what you should be responding to.
I think there’s an important practical difference between longtermist views that accept the original astronomical waste argument and those that don’t: the former take extinction to be astronomically bad, so nearer-term concerns are much more likely to be completely dominated by very small differences in extinction risk probabilities (under risk-neutral EV maximization or Maxipok, at least).
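To make the dominance worry concrete, here is some stylized arithmetic. The 10^38 figure echoes Bostrom’s illustrative order-of-magnitude estimate of potential future lives; the near-term intervention and the risk-reduction probability are purely hypothetical numbers chosen for illustration:

```python
# Stylized expected-value comparison under the astronomical waste argument.
# All numbers are illustrative assumptions, not endorsed estimates.

POTENTIAL_FUTURE_LIVES = 1e38  # Bostrom-style order-of-magnitude figure

def ev_of_extinction_risk_reduction(delta_p: float) -> float:
    """Expected future lives gained by reducing extinction probability by delta_p,
    under risk-neutral expected-value maximization."""
    return delta_p * POTENTIAL_FUTURE_LIVES

# A hypothetical near-term intervention that saves a million lives for certain:
near_term_value = 1e6

# Even a one-in-a-trillion reduction in extinction risk dwarfs it in EV terms:
long_term_value = ev_of_extinction_risk_reduction(1e-12)

assert long_term_value > near_term_value
```

The point of the sketch is that once astronomical stakes enter the calculation, almost any nonzero shift in extinction probability swamps near-term considerations, which is exactly the practical divergence described above.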
What theories have you seen that do support the astronomical waste argument? Don’t almost all of them (weighted by popularity or not) depend on (impersonal) totalism or a slight variation of it?
Are you saying views accepting the astronomical waste argument are dominant within ethics generally?
> Sorry, I should have been more explicit at the start. You responded to a few of weeatquince’s points by saying they confounded specific narrower views with longtermism as a whole, but these views are very influential within EA longtermism in practice, and the writing your OP responds to dealt with these narrower views in the first place. I don’t think weeatquince (or Phil) was confounding these narrower views with longtermism broadly understood; the point was to criticize these specific views, so longtermism being broader is beside the point. Even if they were confounding these more specific views with longtermism, that wouldn’t invalidate the original criticisms, because these specific views do seem to get significant weight in EA longtermism in practice (e.g. through 80,000 Hours).
You seem to be interpreting my post as an attempt at a comprehensive refutation, when it is not and was not presented as such. I took some arguments and explored their implications. I was quite open about the fact that some of the arguments could lead to disagreement with common Effective Altruist interpretations of long-term priorities even if they don’t refute the basic idea. I feel like you are manufacturing disagreement and I think this is a good time to end the conversation.
> What theories have you seen that do support the astronomical waste argument? Don’t almost all of them (weighted by popularity or not) depend on (impersonal) totalism or a slight variation of it?
As I said previously, this should be discussed in a proper post; I don’t currently have time or inclination to go into it.
> Are you saying views accepting the astronomical waste argument are dominant within ethics generally?
I answered this in previous comments.