Do you think these views dominate within EA longtermism?
I don’t, but why do you ask? I don’t see your point.
I’d guess that acceptance of the astronomical waste argument, specifically, is a minority view within ethics, both among philosophers and among published views
As I said previously, most theories imply it, but the field hasn’t caught up. Philosophers were slow to catch on to LGBT rights, animal interests, and charity against global poverty; it’s not surprising that they would repeat the same mistake and take too long to recognize priorities from a broader moral circle.
Even anti-realists can assign different weights to different views and intuitions. It can be a statement about where you expect your views to go if you heard all possible arguments for and against.
But then there is scarce reason to defer to surveys of philosophers as guidance. Moral views are largely based on differences in intuition, often determined by differences in psychology and identity. Future divergences in your moral inclinations could be a random walk from your current position, or regression to the human population mean, or regression to the Effective Altruist mean.
I don’t, but why do you ask? I don’t see your point.
Because the criticism isn’t just against longtermism per se, but longtermism in practice. In practice, I think these views are popular or at least disproportionately promoted or taken for granted at prominent EA orgs (well, 80,000 Hours, at least).
As I said previously, most theories imply it
What are you basing this claim on? I seriously doubt this, given the popularity of different person-affecting views and different approaches to aggregation. Here are some surveys of value-monistic consequentialist systems (or population axiologies), though they are by no means exhaustive, since they miss theories like leximin/maximin, Moderate Trade-off Theory, rank-discounted theories, and often specific theories with person-affecting views:
http://www.crepp.ulg.ac.be/papers/crepp-wp200303.pdf
https://www.repugnant-conclusion.com/population-ethics.pdf
http://users.ox.ac.uk/~mert2255/papers/population_axiology.pdf (this one probably gives the broadest overview, since it also covers person-affecting views; of the families of theories mentioned, I think only Totalism and (some) critical-level theories clearly support the astronomical waste argument.)
Also, have you surveyed theories within virtue ethics and deontology?
At any rate, I’m not sure the number of theories is a better measure than the number of philosophers or ethicists specifically. A lot of theories will be pretty ad hoc and receive hardly any support, sometimes not even from their own authors. Some are introduced just for the purpose of illustration (I think Moderate Trade-off Theory was one).
But then there is scarce reason to defer to surveys of philosophers as guidance. Moral views are largely based on differences in intuition, often determined by differences in psychology and identity. Future divergences in your moral inclinations could be a random walk from your current position, or regression to the human population mean, or regression to the Effective Altruist mean.
Sure, but arguments can influence beliefs.
Are you 100% certain of a specific fully-specified ethical system? I don’t think anyone should be. If you aren’t, then shouldn’t we call that “moral uncertainty” and find ways to deal with it?
Because the criticism isn’t just against longtermism per se, but longtermism in practice.
But in my original post I already acknowledged this difference. You’re repeating things I’ve already said, as if they somehow contradicted me.
What are you basing this claim on?
Based on my general understanding of moral theory and the minimal kinds of assumptions necessary to place the highest priority on the long-run future.
Also, have you surveyed theories within virtue ethics and deontology?
I am familiar with them.
They, and many of the population ethics theories that you link, frequently still imply a greater focus on the long-term future than on other social issues.
(I don’t intend to go into more specific arguments here. If you care about this issue, go ahead and make a proper top-level post for it so that it can be debated in a proper context.)
At any rate, I’m not sure the number of theories is a better measure than the number of philosophers or ethicists specifically
“Most”, i.e., the majority of theories weighted by how popular they are. That’s what I meant by saying “across the distribution of current opinions and published literature.” Though I don’t have a particular reason to think that support for long-term priorities comes disproportionately from popular or unpopular theories.
Are you 100% certain of a specific fully-specified ethical system? I don’t think anyone should be. If you aren’t, then shouldn’t we call that “moral uncertainty” and find ways to deal with it?
No. First, if I’m uncertain between two ethical views, I’m genuinely ambivalent about what future me should decide: there’s no ‘value of information’ here. Second, as I said in the original post, it’s a pointless and costly exercise to preemptively try to figure out a fully-specified ethical system. I think we should take the mandate that we have, to follow some kind of Effective Altruism, and then answer moral questions if and when they appear and matter in the practice of this general mandate. Moral arguments need both to be potentially convincing and to carry practical ramifications before we should worry about moral uncertainty.
But in my original post I already acknowledged this difference. You’re repeating things I’ve already said, as if they somehow contradicted me.
Sorry, I should have been more explicit at the start. You responded to a few of weeatquince’s points by saying they confounded specific narrower views with longtermism as a whole, but these views are very influential within EA longtermism in practice, and the writing your OP is a response to dealt with these narrower views in the first place. I don’t think weeatquince (or Phil) was confounding these narrower views with longtermism broadly understood, and the point was to criticize these specific views anyway, so longtermism being broader is beside the point. If they were confounding these more specific views with longtermism, it still wouldn’t invalidate the original criticisms, because these specific views do seem to get significant weight in EA longtermism in practice anyway (e.g. through 80,000 Hours).
They, and many of the population ethics theories that you link, frequently still imply a greater focus on the long-term future than on other social issues.
I don’t disagree, but the original point was about “astronomical waste-type arguments” specifically, not just priority for the long-term future or longtermism broadly understood. Maybe I’ve interpreted “astronomical waste-type arguments” more narrowly than you have. Astronomical waste to me means roughly failing to ensure the creation of an astronomical number of happy beings. I seriously doubt that most theories or ethicists, or theories weighted by “the distribution of current opinions and published literature”, would support the astronomical waste argument, whether or not most are longtermist in some sense. Maybe most would accept Beckstead’s adjustment, but the original criticisms seemed to be pretty specific to Bostrom’s original argument, so I think that’s what you should be responding to.
I think there’s an important practical difference between longtermist views that accept the original astronomical waste argument and those that don’t: those that do take extinction to be astronomically bad, so nearer-term concerns are much more likely to be completely dominated by very small differences in extinction risk probabilities (under risk-neutral EV maximization or Maxipok, at least).
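To illustrate with deliberately made-up numbers: suppose the accessible future contains on the order of $N = 10^{35}$ lives conditional on survival (the exact figure doesn’t matter, only that it’s astronomical). Under risk-neutral expected-value maximization, reducing extinction risk by $\Delta p$ is then worth roughly $\Delta p \cdot N$ lives in expectation, so it outweighs a near-term intervention that saves $10^{9}$ lives whenever
$$\Delta p \cdot 10^{35} > 10^{9} \iff \Delta p > 10^{-26},$$
i.e. even a one-in-$10^{26}$ reduction in extinction probability dominates.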
What theories have you seen that do support the astronomical waste argument? Don’t almost all of them (weighted by popularity or not) depend on (impersonal) totalism or a slight variation of it?
Are you saying views accepting the astronomical waste argument are dominant within ethics generally?
Sorry, I should have been more explicit at the start. You responded to a few of weeatquince’s points by saying they confounded specific narrower views with longtermism as a whole, but these views are very influential within EA longtermism in practice, and the writing your OP is a response to dealt with these narrower views in the first place. I don’t think weeatquince (or Phil) was confounding these narrower views with longtermism broadly understood, and the point was to criticize these specific views anyway, so longtermism being broader is beside the point. If they were confounding these more specific views with longtermism, it still wouldn’t invalidate the original criticisms, because these specific views do seem to get significant weight in EA longtermism in practice anyway (e.g. through 80,000 Hours).
You seem to be interpreting my post as an attempt at a comprehensive refutation, when it is not and was not presented as such. I took some arguments and explored their implications. I was quite open about the fact that some of the arguments could lead to disagreement with common Effective Altruist interpretations of long-term priorities even if they don’t refute the basic idea. I feel like you are manufacturing disagreement and I think this is a good time to end the conversation.
What theories have you seen that do support the astronomical waste argument? Don’t almost all of them (weighted by popularity or not) depend on (impersonal) totalism or a slight variation of it?
As I said previously, this should be discussed in a proper post; I don’t currently have time or inclination to go into it.
Are you saying views accepting the astronomical waste argument are dominant within ethics generally?
I answered this in previous comments.