Hi,

I downvoted this but I wanted to explain why and hopefully provide constructive feedback. Having seen the original post this is referencing, I really do not think this post did a good/fair job of representing (or steelmanning) the original arguments raised.
To try to make this feedback more useful and help the debate, here are some very quick attempts to steelman some of the original arguments:
Historically, arguments that justify horrendous activities have frequently been utopia-based (appealing to possible but uncertain future utopias). The long-termist astronomical waste argument has this feature, and so we should be wary of it.
If an argument leads to some ridiculous/repugnant conclusions that most people would object to, then it is worth being wary of that argument. The philosophers who developed the long-termist astronomical waste argument openly use it to promote a range of abhorrent hawkish geopolitical responses (e.g. preemptive nuclear strikes). We should be wary of following and promoting such arguments and philosophers.
There are problems with taking a simple expected value approach to decision making under uncertainty, e.g. Pascal's mugging problems. [For more on this, look up robust decision making under deep uncertainty, or Knightian uncertainty.]
The astronomical waste type arguments are not robust to a range of different philosophical and non-utilitarian ethical frameworks, and (given ethical uncertainty) this makes them not great arguments.
Etc
The above are not arguments against working on x-risks etc. (and the original poster does himself work on x-risk issues), but against over-relying on, using, and promoting the astronomical waste type arguments for long-termism.
Your comment makes points that are already addressed by my original post.
Historically, arguments that justify horrendous activities have frequently been utopia-based (appealing to possible but uncertain future utopias).
This is selecting on the dependent variable. Nearly every reformer and revolutionary has appealed to possible but uncertain future utopias. Also most horrendous activities have been most greatly motivated by some form of xenophobia or parochialism, which is absent here.
If an argument leads to some ridiculous/repugnant conclusions that most people would object to, then it is worth being wary of that argument.
Maybe at first glance, but it's a good idea to replace that first glance with a more rigorous look at pros and cons, which we have been doing for years. Also, this is about consequentialist longtermism, not longtermism per se.
The philosophers who developed the long-termist astronomical waste argument openly use it to promote a range of abhorrent hawkish geopolitical responses (e.g. preemptive nuclear strikes).
I don't find Bostrom's argument abhorrent, especially since he didn't actually promote preemptive nuclear strikes. And again, this confounds longtermism with act-consequentialist views.
There are problems with taking a simple expected value approach to decision making under uncertainty, e.g. Pascal's mugging problems.
It's inappropriate to confound longtermism with EV maximization. It's not clear that doubting EV maximization will weaken, let alone end, the case for focusing on the long run. Loss-averse frameworks will care more about preventing existential risks and negative long-run trajectories. If you ignore tiny-probability events, then you will worry less about existential risks but will still prioritize acceleration of our broad socioeconomic trajectory.
Generally speaking, EV maximization is fine and does a good job of beating its objections. Pascal's Mugging is answered by factoring in the optimizer's curse, noting that paying off the mugger incurs opportunity costs, and noting that larger speculated benefits are less likely on priors.
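To make the "less likely on priors" point concrete, here is a toy numerical sketch (the 1/N² prior is purely illustrative, not a claim about anyone's actual credences): if your prior probability of a promised payoff shrinks faster than the payoff grows, the expected benefit of paying the mugger stays small no matter how large the promise.

```python
# Toy sketch: if the prior probability of a promised payoff of size N falls
# off like 1/N^2 (an illustrative choice, not a recommended prior), then the
# expected benefit of paying the mugger shrinks as the promise grows, so a
# modest opportunity cost is enough to refuse.

def expected_benefit_of_paying(promised_payoff: float, prior_scale: float = 1.0) -> float:
    """Expected benefit under the toy prior P(payoff = N) ~ prior_scale / N^2."""
    prior_probability = prior_scale / promised_payoff ** 2
    return prior_probability * promised_payoff  # = prior_scale / N

opportunity_cost = 5.0  # what the money handed to the mugger could have done instead

for promised in (10.0, 1e6, 1e12, 3.0 ** 100):
    benefit = expected_benefit_of_paying(promised)
    print(f"promised {promised:.3g}: expected benefit {benefit:.3g}, "
          f"pay the mugger: {benefit > opportunity_cost}")
```

This is just one way of cashing out the reply above, not anyone's actual decision procedure.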
People should move beyond merely objecting to EV maximization, and provide preferred formal characterizations that can be examined for real-world implications. They exist in the literature but in the context of these debates people always seem shy to commit to anything.
The astronomical waste type arguments are not robust to a range of different philosophical and non-utilitarian ethical frameworks
They are robust to a pretty good range of frameworks. I would guess that perhaps three-fourths of philosophical views, across the distribution of current opinions and published literature, would back up a broadly long-term focused view (not that philosophers themselves have necessarily caught up), although they won't necessarily be consequentialist about pursuing this priority.
(given ethical uncertainty) this makes them not great arguments
Assuming ethical uncertainty. I do not make this assumption: it requires a kind of false moral realism.
When non-Effective-Altruists give us more leeway on the basis of moral uncertainty, we can respond in kind, but until then, deferring to ethical uncertainty is needless disregard for other people's well-being.
I don't think your responses in this comment about narrower views being confounded with longtermism as a whole are fair. If your point is that longtermism is broader than weeatquince or Phil made it out to be, then I think you're missing the point of the original criticisms, since the views being criticized are prominent in practice within EA longtermism. The response "There are other longtermist views" doesn't help the ones being criticized.
In particular, 80,000 Hours promotes both (risk-neutral) EV maximization and astronomical waste (as I described in other replies), and consequentialism is disproportionately popular among EA survey and SSC survey respondents. It's about half, although the surveys don't distinguish between act and rule consequentialism, and it's possible longtermists are much less likely to be consequentialists, but I doubt that. To be fair to 80,000 Hours, they've also written against pure consequentialism, with that article linked on their key ideas page.
80,000 Hours shapes the views and priorities of EAs, and, overall, I think the views being criticized will be made more popular by 80,000 Hours' work.
And again, this is making the mistake of confounding longtermism with act-consequentialist views.
It's similarly inappropriate to confound longtermism with EV maximization.
Do you think these views dominate within EA longtermism? I suspect they might, and this seems to be how 80,000 Hours (and probably by extension, CEA) thinks about longtermism, at least. (More on this below in this comment.)
They are robust to a pretty good range of frameworks. I would guess that perhaps three-fourths of philosophical views, across the distribution of current opinions and published literature, would back up a broadly long-term focused view (not that philosophers themselves have necessarily caught up), although they won't necessarily be consequentialist about pursuing this priority.
I think this may be plausible for longtermism generally (although I'm very unsure), but not all longtermist views accept astronomical waste. I'd guess that acceptance of the astronomical waste argument, specifically, is a minority view within ethics, both among philosophers and among published views. Among population axiologies or pure consequentialist views, I'd guess most published views were designed specifically to avoid the repugnant conclusion, and a great deal of these (maybe most, but I'm not sure) will also reject the astronomical waste argument.
Longtermism in EA seems to be dominated by views that accept the astronomical waste argument, or at least that seems to be the case for 80,000 Hours. 80,000 Hours' cause prioritization and problem quiz (question 4, specifically) make it clear that the absence of future generations accounts for most of the priority they give to existential risks.
They also speak of preventing extinction as "saving lives" and use the expected number of lives:
Let's explore some hypothetical numbers to illustrate the general concept. If there's a 5% chance that civilisation lasts for ten million years, then in expectation, there are 5000 future generations. If thousands of people making a concerted effort could, with a 55% probability, reduce the risk of premature extinction by 1 percentage point, then these efforts would in expectation save 28 future generations. If each generation contains ten billion people, that would be 280 billion lives saved. If there's a chance civilisation lasts longer than ten million years, or that there are more than ten billion people in each future generation, then the argument is strengthened even further.
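For reference, the arithmetic in that passage is easy to reproduce; here is a minimal sketch (the 100-year generation length is an assumption on my part, since the excerpt doesn't state it, but it is what recovers the quoted figures):

```python
# Reproducing the arithmetic in the quoted 80,000 Hours passage.
# Assumption: a generation lasts 100 years (not stated in the excerpt,
# but it recovers the quoted figures).

p_civilisation_survives = 0.05            # "a 5% chance"
years_civilisation_lasts = 10_000_000     # "ten million years"
years_per_generation = 100                # assumed

expected_generations = p_civilisation_survives * years_civilisation_lasts / years_per_generation
# 0.05 * 100,000 = 5,000 future generations in expectation

p_effort_succeeds = 0.55                  # "with a 55% probability"
risk_reduction = 0.01                     # "1 percentage point"
generations_saved = expected_generations * p_effort_succeeds * risk_reduction
# 5,000 * 0.55 * 0.01 = 27.5, which the passage rounds to 28

people_per_generation = 10_000_000_000    # "ten billion people"
lives_saved = generations_saved * people_per_generation
# 27.5 * 10 billion = 275 billion, quoted as roughly 280 billion

print(f"{expected_generations:,.0f} generations, {generations_saved:.1f} saved, {lives_saved:,.0f} lives")
```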
Moral uncertainty isn't a theoretically coherent idea; it assumes an objectively factual basis for people's motivations.
I don't think it needs to make this assumption. Even anti-realists can assign different weights to different views and intuitions. It can be a statement about where you expect your views to go if you heard all possible arguments for and against. 80,000 Hours also uses moral uncertainty, FWIW.
Philosophy should not be about fact-finding, it should be about reinforcing the mission to improve and protect all people's lives.
Who counts (different person-affecting views, nonhuman animals, fetuses, etc.), in what ways, and what does it mean to improve and protect people's lives? These are obviously important questions for philosophy. Are you saying we should stop thinking about them?
Do you improve and protect a person's life by ensuring they come into existence in the first place? If not, then you should reject the astronomical waste argument.
Do you think these views dominate within EA longtermism?
I don't, but why do you ask? I don't see your point.
I'd guess that acceptance of the astronomical waste argument, specifically, is a minority view within ethics, both among philosophers and among published views
As I said previously, most theories imply it, but the field hasn't caught up. They were slow to catch on to LGBT rights, animal interests, and charity against global poverty; it's not surprising that they would repeat the same problem of taking too long to recognize priorities from a broader moral circle.
Even anti-realists can assign different weights to different views and intuitions. It can be a statement about where you expect your views to go if you heard all possible arguments for and against.
But then there is scarce reason to defer to surveys of philosophers as guidance. Moral views are largely based on differences in intuition, often determined by differences in psychology and identity. Future divergences in your moral inclinations could be a random walk from your current position, or regression to the human population mean, or regression to the Effective Altruist mean.
I don't, but why do you ask? I don't see your point.
Because the criticism isn't just against longtermism per se, but longtermism in practice. In practice, I think these views are popular or at least disproportionately promoted or taken for granted at prominent EA orgs (well, 80,000 Hours, at least).
As I said previously, most theories imply it
Based on what are you making this claim? I seriously doubt this, given the popularity of different person-affecting views and different approaches to aggregation. Here are some surveys of value-monistic consequentialist systems (or population axiologies), but they are by no means exhaustive, since they miss theories like leximin/maximin, Moderate Trade-off Theory, rank-discounted theories, and often specific theories with person-affecting views:
http://www.crepp.ulg.ac.be/papers/crepp-wp200303.pdf
https://www.repugnant-conclusion.com/population-ethics.pdf
http://users.ox.ac.uk/~mert2255/papers/population_axiology.pdf (this one probably gives the broadest overview, since it also covers person-affecting views; of the families of theories mentioned, I think only Totalism and (some) critical-level theories clearly support the astronomical waste argument.)
Also, have you surveyed theories within virtue ethics and deontology?
At any rate, I'm not sure the number of theories is a better measure than the number of philosophers or ethicists specifically. A lot of theories will be pretty ad hoc and receive hardly any support, sometimes even from their own authors. Some are introduced just for the purpose of illustration (I think Moderate Trade-off Theory was one).
But then there is scarce reason to defer to surveys of philosophers as guidance. Moral views are largely based on differences in intuition, often determined by differences in psychology and identity. Future divergences in your moral inclinations could be a random walk from your current position, or regression to the human population mean, or regression to the Effective Altruist mean.
Sure, but arguments can influence beliefs.
Are you 100% certain of a specific fully-specified ethical system? I don't think anyone should be. If you aren't, then shouldn't we call that "moral uncertainty" and find ways to deal with it?
Because the criticism isn't just against longtermism per se, but longtermism in practice.
But in my original post I already acknowledged this difference. You're repeating things I've already said, as if it were somehow contradicting me.
Based on what are you making this claim?
Based on my general understanding of moral theory and the minimal kinds of assumptions necessary to place the highest priority on the long-run future.
Also, have you surveyed theories within virtue ethics and deontology?
I am familiar with them.
They, and many of the population ethics theories that you link, frequently still imply a greater focus on the long-term future than on other social issues.
(I don't intend to go into more specific arguments here. If you care about this issue, go ahead and make a proper top-level post for it so that it can be debated in a proper context.)
At any rate, I'm not sure the number of theories is a better measure than the number of philosophers or ethicists specifically
"Most", i.e. the majority of theories weighted by how popular they are. That's what I meant by saying "across the distribution of current opinions and published literature." Though I don't have a particular reason to think that support for long-term priorities comes disproportionately from popular or unpopular theories.
Are you 100% certain of a specific fully-specified ethical system? I don't think anyone should be. If you aren't, then shouldn't we call that "moral uncertainty" and find ways to deal with it?
No. First, if I'm uncertain between two ethical views, I'm genuinely ambivalent about what future me should decide: there's no "value of information" here. Second, as I said in the original post, it's a pointless and costly exercise to preemptively try to figure out a fully-specified ethical system. I think we should take the mandate that we have, to follow some kind of Effective Altruism, and then answer moral questions if and when they appear and matter in the practice of this general mandate. Moral arguments need to be both potentially convincing and carry practical ramifications for us to worry about moral uncertainty.
But in my original post I already acknowledged this difference. You're repeating things I've already said, as if it were somehow contradicting me.
Sorry, I should have been more explicit at the start. You responded to a few of weeatquince's points by saying they confounded specific narrower views with longtermism as a whole, but these views are very influential within EA longtermism in practice, and the writing your OP is a response to dealt with these narrower views in the first place. I don't think weeatquince (or Phil) was confounding these narrower views with longtermism broadly understood, and the point was to criticize these specific views anyway, so longtermism being broader is beside the point. If they were confounding these more specific views with longtermism, it still wouldn't invalidate the original criticisms, because these specific views do seem to get significant weight in EA longtermism in practice anyway (e.g. through 80,000 Hours).
They, and many of the population ethics theories that you link, frequently still imply a greater focus on the long-term future than on other social issues.
I don't disagree, but the original point was about "astronomical waste-type arguments", specifically, not just priority for the long-term future or longtermism, broadly understood. Maybe I've interpreted "astronomical waste-type arguments" more narrowly than you have. Astronomical waste to me means roughly failing to ensure the creation of an astronomical number of happy beings. I seriously doubt that most theories or ethicists, or theories weighted by "the distribution of current opinions and published literature", would support the astronomical waste argument, whether or not most are longtermist in some sense. Maybe most would accept Beckstead's adjustment, but the original criticisms seemed to be pretty specific to Bostrom's original argument, so I think that's what you should be responding to.
I think there's an important practical difference between longtermist views which accept the original astronomical waste argument and those that don't: those that do take extinction to be astronomically bad, so nearer-term concerns are much more likely to be completely dominated by very small differences in extinction risk probabilities (under risk-neutral EV maximization or Maxipok, at least).
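To illustrate the practical difference I mean, here is a toy comparison under risk-neutral EV maximization (every number is a purely illustrative stand-in, not an estimate from any particular source):

```python
# Toy comparison: once extinction is treated as astronomically bad, a tiny
# absolute reduction in extinction probability can dominate a very large
# near-term benefit under risk-neutral EV maximization. All numbers are
# illustrative only.

future_lives_at_stake = 1e30       # stand-in for an "astronomical" future
extinction_risk_reduction = 1e-10  # a one-in-ten-billion absolute reduction
near_term_lives_saved = 1e9        # a billion present-day lives, for contrast

ev_extinction_intervention = extinction_risk_reduction * future_lives_at_stake  # 1e20
ev_near_term_intervention = near_term_lives_saved                               # 1e9

print(f"extinction-risk EV: {ev_extinction_intervention:.1e}")
print(f"near-term EV:       {ev_near_term_intervention:.1e}")
print("extinction risk dominates:", ev_extinction_intervention > ev_near_term_intervention)
```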
What theories have you seen that do support the astronomical waste argument? Don't almost all of them (weighted by popularity or not) depend on (impersonal) totalism or a slight variation of it?
Are you saying views accepting the astronomical waste argument are dominant within ethics generally?
Sorry, I should have been more explicit at the start. You responded to a few of weeatquince's points by saying they confounded specific narrower views with longtermism as a whole, but these views are very influential within EA longtermism in practice, and the writing your OP is a response to dealt with these narrower views in the first place. I don't think weeatquince (or Phil) was confounding these narrower views with longtermism broadly understood, and the point was to criticize these specific views anyway, so longtermism being broader is beside the point. If they were confounding these more specific views with longtermism, it still wouldn't invalidate the original criticisms, because these specific views do seem to get significant weight in EA longtermism in practice anyway (e.g. through 80,000 Hours).
You seem to be interpreting my post as an attempt at a comprehensive refutation, when it is not and was not presented as such. I took some arguments and explored their implications. I was quite open about the fact that some of the arguments could lead to disagreement with common Effective Altruist interpretations of long-term priorities even if they don't refute the basic idea. I feel like you are manufacturing disagreement, and I think this is a good time to end the conversation.
What theories have you seen that do support the astronomical waste argument? Don't almost all of them (weighted by popularity or not) depend on (impersonal) totalism or a slight variation of it?
As I said previously, this should be discussed in a proper post; I don't currently have the time or inclination to go into it.
Are you saying views accepting the astronomical waste argument are dominant within ethics generally?
I answered this in previous comments.
The philosophers who developed the long-termist astronomical waste argument openly use it to promote a range of abhorrent hawkish geopolitical responses (e.g. preemptive nuclear strikes).
I find this surprising. Can you point to examples?
Section 9.3 here: https://www.nickbostrom.com/existential/risks.html
(Disclaimer: Not my own views/criticism. I am just trying to steelman a Facebook post I read. I have not looked into the wider context of these views or people's current positions on these views.)
Note that Bostrom doesn't advocate preemptive nuclear strikes in this essay. Rather, he says the level of force should be no greater than necessary to "reduce the threat to an acceptable level."
When the stakes are "astronomical" and many longtermists are maximizing EV (or using Maxipok) and are consequentialists (or sufficiently consequentialist), what's an acceptable level of threat? For them, isn't the only acceptable level of threat the lowest possible level of threat?
Unless the probability difference is extremely small, won't it come down to whether it increases or decreases risk in expectation, with those who would be killed effectively ignored, since they won't make a large enough difference to change the decision?
EDIT: Ah, preemptive strikes still might not be the best use of limited resources if they could be used another way.
EDIT 2: The US already has a bunch of nukes that aren't being used for anything else, though.
There are going to be prudential questions of governance, collateral damage, harms to norms, and similar issues which swamp very small direct differences in risk probability, even if one is fixated on the very long run. Hence, an acceptable level of risk is one which is low enough that it seems equal to or smaller than these other issues.
I'm not sure why this comment was downvoted; @weeatquince was asked for information and provided it.