Hi,
I downvoted this but I wanted to explain why and hopefully provide constructive feedback. Having seen the original post this is referencing, I really do not think this post did a good/fair job of representing (or steelmanning) the original arguments raised.
To try to make this feedback more useful and help the debate, here are some very quick attempts to steelman some of the original arguments:
Historically, arguments that justify horrendous activities have a high frequency of being utopia-based (appealing to possible but uncertain future utopias). The long-termist astronomical waste argument has this feature, and so we should be wary of it.
If an argument leads to some ridiculous / repugnant conclusions that most people would object to, then it is worth being wary of that argument. The philosophers who developed the long-termist astronomical waste argument openly use it to promote a range of abhorrent hawkish geopolitical responses (e.g. preemptive nuclear strikes). We should be wary of following and promoting such arguments and philosophers.
There are problems with taking a simple expected value approach to decision making under uncertainty. E.g. Pascal’s mugging problems. [For more on this look up robust decision making under deep uncertainty or Knightian uncertainty]
The astronomical waste type arguments are not robust to a range of different philosophical and non-utilitarian ethical frameworks, and (given ethical uncertainty) this makes them not great arguments.
Etc
The above are not arguments against working on x-risks etc. (and the original poster does himself work on x-risk issues) but against overly relying on, using, and promoting the astronomical waste type arguments for long-termism.
Your comment makes points that are already addressed by my original post.
Historically, arguments that justify horrendous activities have a high frequency of being utopia-based (appealing to possible but uncertain future utopias).
This is selecting on the dependent variable. Nearly every reformer and revolutionary has appealed to possible but uncertain future utopias. Also, most horrendous activities have been motivated primarily by some form of xenophobia or parochialism, which is absent here.
If an argument leads to some ridiculous / repugnant conclusions that most people would object to, then it is worth being wary of that argument.
Maybe at first glance, but it’s a good idea to replace that first glance with a more rigorous look at pros and cons, which we have been doing for years. Also, this is about consequentialist longtermism, not longtermism per se.
The philosophers who developed the long-termist astronomical waste argument openly use it to promote a range of abhorrent hawkish geopolitical responses (e.g. preemptive nuclear strikes).
I don’t find Bostrom’s argument abhorrent, especially since he didn’t actually promote preemptive nuclear strikes. And again, this confounds longtermism with act-consequentialist views.
There are problems with taking a simple expected value approach to decision making under uncertainty. E.g. Pascal’s mugging problems.
It’s inappropriate to confound longtermism with EV maximization. It’s not clear that doubting EV maximization will weaken, let alone end, the case for focusing on the long run. Loss-averse frameworks will care more about preventing existential risks and negative long-run trajectories. If you ignore tiny-probability events then you will worry less about existential risks but will still prioritize acceleration of our broad socioeconomic trajectory.
Generally speaking, EV maximization is fine and does a good job of beating its objections. Pascal’s Mugging is answered by factoring in the optimizer’s curse, and by noting that paying off the mugger incurs opportunity costs and that larger speculated benefits are less likely on priors.
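To make that last point concrete, here is a minimal sketch of the idea that a prior which shrinks faster than the promised payoff grows keeps the expected value of paying the mugger bounded. Everything in it is an illustrative assumption (the specific payoffs, the payoff⁻² prior, the opportunity-cost figure, and the mugger_ev helper), not anything from the post:

```python
# Illustrative sketch only: a prior that penalizes larger claimed payoffs
# faster than they grow keeps the mugger's expected value bounded.

def penalizing_prior(x):
    # Credence in a claimed payoff of size x falls off as x^-2 (assumed shape).
    return 1e-3 / x ** 2

def mugger_ev(payoffs, prior):
    """Expected value of paying the mugger, given a prior over the claimed payoffs."""
    return sum(prior(x) * x for x in payoffs)

claimed_payoffs = [10 ** k for k in range(1, 13)]  # from 10 up to a trillion units of value
opportunity_cost = 5.0                             # assumed value of spending the money well yourself

ev_pay = mugger_ev(claimed_payoffs, penalizing_prior)
print(f"EV of paying the mugger: {ev_pay:.5f} vs. opportunity cost: {opportunity_cost}")
# The sum converges to about 0.00011, far below the opportunity cost,
# so under these assumptions the mugging fails even on straightforward EV maximization.
```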
People should move beyond merely objecting to EV maximization, and provide preferred formal characterizations that can be examined for real-world implications. They exist in the literature but in the context of these debates people always seem shy to commit to anything.
The astronomical waste type arguments are not robust to a range of different philosophical and non-utilitarian ethical frameworks
They are robust to a pretty good range of frameworks. I would guess that perhaps three-fourths of philosophical views, across the distribution of current opinions and published literature, would back up a broadly long-term focused view (not that philosophers themselves have necessarily caught up), although they won’t necessarily be consequentialist about pursuing this priority.
(given ethical uncertainty) this makes them not great arguments
Assuming ethical uncertainty. I do not make this assumption: it requires a kind of false moral realism.
When non-Effective-Altruists extend more leeway to us on the basis of moral uncertainty, we can respond in kind, but until then, deferring to ethical uncertainty is needless disregard for other people’s well-being.
I don’t think your responses in this comment about narrower views being confounded with longtermism as a whole are fair. If your point is that longtermism is broader than weeatquince or Phil made it out to be, then I think you’re missing the point of the original criticisms, since the views being criticized are prominent in practice within EA longtermism. The response “There are other longtermist views” doesn’t help the ones being criticized.
In particular, 80,000 Hours promotes both (risk-neutral) EV maximization and astronomical waste (as I described in other replies), and consequentialism is disproportionately popular among EA survey and SSC survey respondents. It’s about half, although they don’t distinguish between act and rule consequentialism, and it’s possible longtermists are much less likely to be consequentialists, but I doubt that. To be fair to 80,000 Hours, they’ve also written against pure consequentialism, with that article linked to on their key ideas page.
80,000 Hours shapes the views and priorities of EAs, and, overall, I think the views being criticized will be made more popular by 80,000 Hours’ work.
And again, this is making the mistake of confounding longtermism with act-consequentialist views.
It’s similarly inappropriate to confound longtermism with EV maximization.
Do you think these views dominate within EA longtermism? I suspect they might, and this seems to be how 80,000 Hours (and probably by extension, CEA) thinks about longtermism, at least. (More on this below in this comment.)
They are robust to a pretty good range of frameworks. I would guess that perhaps three-fourths of philosophical views, across the distribution of current opinions and published literature, would back up a broadly long-term focused view (not that philosophers themselves have necessarily caught up), although they won’t necessarily be consequentialist about pursuing this priority.
I think this may be plausible for longtermism generally (although I’m very unsure), but not all longtermist views accept astronomical waste. I’d guess that acceptance of the astronomical waste argument, specifically, is a minority view within ethics, both among philosophers and among published views. Among population axiologies or pure consequentialist views, I’d guess most published views were designed specifically to avoid the repugnant conclusion, and many of these (maybe most, but I’m not sure) will also reject the astronomical waste argument.
Longtermism in EA seems to be dominated by views that accept the astronomical waste argument, or, at least that seems to be the case for 80,000 Hours. 80,000 Hours’ cause prioritization and problem quiz (question 4, specifically) make it clear that the absence of future generations accounts for most of their priority given to existential risks.
They also speak of preventing extinction as “saving lives” and use the expected number of lives:
Let’s explore some hypothetical numbers to illustrate the general concept. If there’s a 5% chance that civilisation lasts for ten million years, then in expectation, there are 5000 future generations. If thousands of people making a concerted effort could, with a 55% probability, reduce the risk of premature extinction by 1 percentage point, then these efforts would in expectation save 28 future generations. If each generation contains ten billion people, that would be 280 billion lives saved. If there’s a chance civilisation lasts longer than ten million years, or that there are more than ten billion people in each future generation, then the argument is strengthened even further.
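For concreteness, here is the arithmetic in that quote as a quick script. All figures come from the quote itself; the ~100-year generation length is an inferred assumption, since that is what makes ten million years at a 5% chance come out to 5000 expected generations:

```python
# Expected-value arithmetic from the quoted 80,000 Hours passage.
# All figures come from the quote; the 100-year generation length is inferred.

p_civilisation_lasts = 0.05                 # chance civilisation lasts ten million years
generations_if_it_lasts = 10_000_000 / 100  # ~100,000 generations of ~100 years each (assumed length)
expected_generations = p_civilisation_lasts * generations_if_it_lasts  # 5,000

p_effort_succeeds = 0.55                    # probability the concerted effort works
risk_reduction = 0.01                       # extinction risk cut by 1 percentage point
generations_saved = p_effort_succeeds * risk_reduction * expected_generations
# 0.55 * 0.01 * 5,000 = 27.5, which the quote rounds to 28

people_per_generation = 10_000_000_000
lives_saved = generations_saved * people_per_generation
print(f"{generations_saved:.1f} generations, about {lives_saved:,.0f} lives")
# ~27.5 generations, about 275,000,000,000 lives, i.e. roughly the quote's 280 billion
```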
Moral uncertainty isn’t a theoretically coherent idea—it assumes an objectively factual basis for people’s motivations.
I don’t think it needs to make this assumption. Even anti-realists can assign different weights to different views and intuitions. It can be a statement about where you expect your views to go if you heard all possible arguments for and against. 80,000 Hours also uses moral uncertainty, FWIW.
Philosophy should not be about fact-finding, it should be about reinforcing the mission to improve and protect all people’s lives.
Who counts (different person-affecting views, nonhuman animals, fetuses, etc.), in what ways, and what does it mean to improve and protect people’s lives? These are obviously important questions for philosophy. Are you saying we should stop thinking about them?
Do you improve and protect a person’s life by ensuring they come into existence in the first place? If not, then you should reject the astronomical waste argument.
Do you think these views dominate within EA longtermism?
I don’t, but why do you ask? I don’t see your point.
I’d guess that acceptance of the astronomical waste argument, specifically, is a minority view within ethics, both among philosophers and among published views
As I said previously, most theories imply it, but the field hasn’t caught up. They were slow to catch on to LGBT rights, animal interests, and charity against global poverty; it’s not surprising that they would repeat the same problem of taking too long to recognize priorities from a broader moral circle.
Even anti-realists can assign different weights to different views and intuitions. It can be a statement about where you expect your views to go if you heard all possible arguments for and against.
But then there is scarce reason to defer to surveys of philosophers as guidance. Moral views are largely based on differences in intuition, often determined by differences in psychology and identity. Future divergences in your moral inclinations could be a random walk from your current position, or regression to the human population mean, or regression to the Effective Altruist mean.
I don’t, but why do you ask? I don’t see your point.
Because the criticism isn’t just against longtermism per se, but longtermism in practice. In practice, I think these views are popular or at least disproportionately promoted or taken for granted at prominent EA orgs (well, 80,000 Hours, at least).
As I said previously, most theories imply it
What are you basing this claim on? I seriously doubt this, given the popularity of different person-affecting views and different approaches to aggregation. Here are some surveys of value-monistic consequentialist systems (or population axiologies), but they are by no means exhaustive, since they miss theories like leximin/maximin, Moderate Trade-off Theory, rank-discounted theories, and often specific theories with person-affecting views:
http://www.crepp.ulg.ac.be/papers/crepp-wp200303.pdf
https://www.repugnant-conclusion.com/population-ethics.pdf
http://users.ox.ac.uk/~mert2255/papers/population_axiology.pdf (this one probably gives the broadest overview since it also covers person-affecting views; of the families of theories mentioned, I think only Totalism and (some) critical-level theories clearly support the astronomical waste argument.)
Also, have you surveyed theories within virtue ethics and deontology?
At any rate, I’m not sure the number of theories is a better measure than number of philosophers or ethicists specifically. A lot of theories will be pretty ad hoc and receive hardly any support, sometimes even by their own authors. Some are introduced just for the purpose of illustration (I think Moderate Trade-off Theory was one).
But then there is scarce reason to defer to surveys of philosophers as guidance. Moral views are largely based on differences in intuition, often determined by differences in psychology and identity. Future divergences in your moral inclinations could be a random walk from your current position, or regression to the human population mean, or regression to the Effective Altruist mean.
Sure, but arguments can influence beliefs.
Are you 100% certain of a specific fully-specified ethical system? I don’t think anyone should be. If you aren’t, then shouldn’t we call that “moral uncertainty” and find ways to deal with it?
Because the criticism isn’t just against longtermism per se, but longtermism in practice.
But in my original post I already acknowledged this difference. You’re repeating things I’ve already said, as if it were somehow contradicting me.
What are you basing this claim on?
Based on my general understanding of moral theory and the minimal kinds of assumptions necessary to place the highest priority on the long-run future.
Also, have you surveyed theories within virtue ethics and deontology?
I am familiar with them.
They, and many of the population ethics theories that you link, frequently still imply a greater focus on the long-term future than on other social issues.
(I don’t intend to go into more specific arguments here. If you care about this issue, go ahead and make a proper top-level post for it so that it can be debated in a proper context.)
At any rate, I’m not sure the number of theories is a better measure than number of philosophers or ethicists specifically
“Most”, i.e. the majority of theories weighted by how popular they are. That’s what I meant by saying “across the distribution of current opinions and published literature.” Though I don’t have a particular reason to think that support for long-term priorities comes disproportionately from popular or unpopular theories.
Are you 100% certain of a specific fully-specified ethical system? I don’t think anyone should be. If you aren’t, then shouldn’t we call that “moral uncertainty” and find ways to deal with it?
No. First, if I’m uncertain between two ethical views, I’m genuinely ambivalent about what future me should decide: there’s no ‘value of information’ here. Second, as I said in the original post, it’s a pointless and costly exercise to preemptively try to figure out a fully-specified ethical system. I think we should take the mandate that we have, to follow some kind of Effective Altruism, and then answer moral questions if and when they appear and matter in the practice of this general mandate. Moral arguments need to be both potentially convincing and carry practical ramifications for us to worry about moral uncertainty.
But in my original post I already acknowledged this difference. You’re repeating things I’ve already said, as if it were somehow contradicting me.
Sorry, I should have been more explicit at the start. You responded to a few of weeatquince’s points by saying they confounded specific narrower views with longtermism as a whole, but these views are very influential within EA longtermism in practice, and the writing your OP is a response to dealt with these narrower views in the first place. I don’t think weeatquince (or Phil) was confounding these narrower views with longtermism broadly understood, and the point was to criticize these specific views, anyway, so longtermism being broader is beside the point. If they were confounding these more specific views with longtermism, it still wouldn’t invalidate the original criticisms, because these specific views do seem to get significant weight in EA longtermism in practice, anyway (e.g. through 80,000 Hours).
They, and many of the population ethics theories that you link, frequently still imply a greater focus on the long-term future than on other social issues.
I don’t disagree, but the original point was about “astronomical waste-type arguments”, specifically, not just priority for the long-term future or longtermism, broadly understood. Maybe I’ve interpreted “astronomical waste-type arguments” more narrowly than you have. Astronomical waste to me means roughly failing to ensure the creation of an astronomical number of happy beings. I seriously doubt that most theories or ethicists, or theories weighted by “the distribution of current opinions and published literature” would support the astronomical waste argument, whether or not most are longtermist in some sense. Maybe most would accept Beckstead’s adjustment, but the original criticisms seemed to be pretty specific to Bostrom’s original argument, so I think that’s what you should be responding to.
I think there’s an important practical difference between longtermist views which accept the original astronomical waste argument and those that don’t: those that do take extinction to be astronomically bad, so nearer term concerns are much more likely to be completely dominated by very small differences in extinction risk probabilities (under risk-neutral EV maximization or Maxipok, at least).
What theories have you seen that do support the astronomical waste argument? Don’t almost all of them (weighted by popularity or not) depend on (impersonal) totalism or a slight variation of it?
Are you saying views accepting the astronomical waste argument are dominant within ethics generally?
Sorry, I should have been more explicit at the start. You responded to a few of weeatquince’s points by saying they confounded specific narrower views with longtermism as a whole, but these views are very influential within EA longtermism in practice, and the writing your OP is a response to dealt with these narrower views in the first place. I don’t think weeatquince (or Phil) was confounding these narrower views with longtermism broadly understood, and the point was to criticize these specific views, anyway, so longtermism being broader is beside the point. If they were confounding these more specific views with longtermism, it still wouldn’t invalidate the original criticisms, because these specific views do seem to get significant weight in EA longtermism in practice, anyway (e.g. through 80,000 Hours).
You seem to be interpreting my post as an attempt at a comprehensive refutation, when it is not and was not presented as such. I took some arguments and explored their implications. I was quite open about the fact that some of the arguments could lead to disagreement with common Effective Altruist interpretations of long-term priorities even if they don’t refute the basic idea. I feel like you are manufacturing disagreement and I think this is a good time to end the conversation.
What theories have you seen that do support the astronomical waste argument? Don’t almost all of them (weighted by popularity or not) depend on (impersonal) totalism or a slight variation of it?
As I said previously, this should be discussed in a proper post; I don’t currently have time or inclination to go into it.
Are you saying views accepting the astronomical waste argument are dominant within ethics generally?
I answered this in previous comments.
The philosophers who developed the long-termist astronomical waste argument openly use it to promote a range of abhorrent hawkish geopolitical responses (e.g. preemptive nuclear strikes).
I find this surprising. Can you point to examples?
Section 9.3 here: https://www.nickbostrom.com/existential/risks.html
(Disclaimer: Not my own views/criticism. I am just trying to steelman a Facebook post I read. I have not looked into the wider context of these views or people’s current positions on these views.)
Note that Bostrom doesn’t advocate preemptive nuclear strikes in this essay. Rather, he says the level of force should be no greater than necessary to “reduce the threat to an acceptable level.”
When the stakes are “astronomical” and many longtermists are maximizing EV (or using Maxipok) and are consequentialists (or sufficiently consequentialist), what’s an acceptable level of threat? For them, isn’t the only acceptable level of threat the lowest possible level of threat?
Unless the probability difference is extremely small, won’t it come down to whether it increases or decreases risk in expectation, with those who would be killed effectively ignored, since they won’t make a large enough difference to change the decision?
EDIT: Ah, preemptive strikes still might not be the best uses of limited resources if they could be used another way.
EDIT2: The US already has a bunch of nukes that aren’t being used for anything else, though.
There are going to be prudential questions of governance, collateral damage, harms to norms, and similar issues which swamp very small direct differences in risk probability even if one is fixated on the very long run. Hence, an acceptable level of risk is one which is low enough that it seems equal to or smaller than these other issues.
I’m not sure why this comment was downvoted; @weeatquince was asked for information and provided it.