I think that humans are the only chance for all other animal species to survive beyond their normal path to extinction. Most species exist for around 4 million years, and all life on Earth will die off within the next 1 billion years, or earlier, because of the Sun’s rising luminosity. But humans have started to resurrect extinct species, and will save animal life if humanity is able to colonise the Galaxy. That is why doing good for humans and preventing x-risks is the best way we could help animals.
We don’t want to rewild or spread wild animals to other realms.
http://reducing-suffering.org/will-space-colonization-multiply-wild-animal-suffering/
http://reducing-suffering.org/applied-welfare-biology-wild-animal-advocates-focus-spreading-nature/
Who said we will preserve wild nature in its present form? We will re-engineer it to eliminate animal suffering while enhancing positive animal experience and wild nature’s aesthetic appeal.
The number of people who want to re-engineer nature is currently much, much smaller than the number of dedicated conservationists. It is a fringe view that basically only effective altruists support, and not even all EAs. I see no reason to believe that humans will ever modify wild animals to be happier. Humans might eventually destroy most habitats, however.
This perspective is widely restated, but I’m not sure it is supportable by argument:
Isn’t it almost certain that humans will eventually destroy most existing habitats? We’ve already destroyed something in the vicinity of half by proportion of land, right?
Most social change is a fringe interest initially. If we have good reasons to care about animal welfare in the abstract, then interest in this may continue to increase. If one does not have confidence in these arguments, then mightn’t one instead want to take moral uncertainty or moral pluralism more seriously?
Creating animal environments on new planets, where those animals would not naturally live, will involve a significantly different discussion from the treatment of existing wild animals.
In general, are you trying to generalise from humans’ treatment of animals in the 21st century, in which modification of animals is very difficult, to an environment over the next thousands of years, in which animals may be simulated, grown in vitro, genetically engineered, et cetera, and in which modification may be far less difficult? If this is the generalisation you are trying to make, then more thorough argumentation is needed. Obviously I’m also trying to generalise to the future, but current naturalistic biases aren’t an obviously relevant factor once the future situation is fleshed out more concretely.
Although most people currently don’t want to alter nature, in any of the circumstances we worry about, people will have different capacities, which will shape different views, about something that would no longer be aptly called “nature”, and so we need to reason differently about what to expect.
Most people will help a wild animal if they see it in trouble now. In any case, I think that we could create nanoimplants which would be able to prevent the suffering of wild animals by blocking excessive pain in case of death or injury. These implants would not change the ways of their ordinary life, so natural life would look almost the same. I would also vote for the resurrection of all sentient life, starting with humans but also including animals, from the most complex to the less complex. Probably a future AI could do it.
Most people I know don’t really care about wild animals. The ones who do tend to be environmentalists who care more about preserving nature as it is than maximizing welfare. Do you have any evidence that most people actually would support reducing wild animal suffering?
I don’t see what the purpose of that would be. If you’re a classical total utilitarian, you should just determine what the happiest species is and make more of that (or make utilitronium). If you’re not a classic total utilitarian, then why would creating new beings help?
In an intergalactic civilisation, why would one expect un-augmented wild animals to represent any significant fraction of all life? There are strong incentives to use resources for human flourishing. Not a rhetorical question, but I can’t think of any reason more compelling than the economic incentives. Notwithstanding language used in past pieces by pessimists, they don’t tend to contest this.
There are some arguments for potential future suffering here:
http://foundational-research.org/risks-of-astronomical-future-suffering/
That would increase animal suffering. We want to decrease it.
Most people don’t generally just want to reduce animal suffering without regard to the preferences and happiness of animals. Nor is it reasonable to want purely that, when people have good arguments for other moral perspectives.
It’s also generally agreed that there’s not much reason to expect there to be a lot of animal suffering in the long run, compared to the amount of animal flourishing, if we’re able to travel to new planets, make synthetic meats, and create awesome entertainment and scientific experiments without the use of animals, et cetera.
If terraforming other planets involves spreading nature, then it would be very bad according to any nonspeciesist utilitarian framework. So bad that it makes increasing x-risk look like a good idea.
I’m not convinced that we are morally obligated to create beings who will experience happiness/preference satisfaction. That just seems absurd, because nonexistence doesn’t deprive anyone of anything. On the other hand, creating beings who experience suffering definitely seems bad.
More intense lives will be able to be engineered, on expectation, for a longer time period, at a higher density, and across a larger space, through biological augmentation or virtual reality, than through nature. So terraforming is a red herring here, because most (approximately all) human and animal experience will be engineered by biotech in the long run.
Arguments not downvotes, please!
You’re making a very strong claim about something that will happen in the future that has never happened in the past based on speculation about what’s technologically feasible and on what the people with power will want to do. Maybe you’re right but you seem really overconfident here.
I mean “on expectation” as in it’s at least slightly more likely than not, based on what little we currently know, but I’m still very interested in new evidence.
Do you think it is likely that humans will run sentient simulations in the future? It could be that wild animal brain simulations and “suffering subroutines” dominate future expected utility calculations.
Sure, any subroutines could exist in the future. In artificial worlds, the range of possible experience should be much larger than at present. The incentives for researching and developing entertainment should be much larger than for engineering psychological harm. Generalisations from the natural world wouldn’t necessarily carry over to simulations, but on the inside view, net flourishing is expected.
It looks like you’re subscribing to a person-affecting philosophy, whereby you say potential future humans aren’t worthy of moral consideration because they’re not being deprived, but bringing them into existence would be bad because they would (could) suffer.
I think this is arbitrarily asymmetrical, and not really compatible with a total utilitarian framework. I would suggest reading the relevant chapter in Nick Beckstead’s thesis ‘On the overwhelming importance of shaping the far future’, where I think he does a pretty good job at showing just this.
Another tack: You don’t have to include the creation of new beings in the calculation. There are plenty who already exist. How many orders of magnitude do you expect between the intensity/energy-density/number of natural experiences compared to expected synthetic ones? There seem to be strong arguments that make the natural experiences irrelevant.
But the majority of beings that already exist are wild animals with negative lives. I’m not sure what you’re trying to argue here. Do you mean something like “already will exist”?
Even if there’s only a small chance of people achieving life extension, the humans currently alive have a pretty long expected life, big enough to make wild animals currently alive less relevant (though conceivably there could be other cohorts even more important).
Not that attempts to make a principled distinction between the moral treatment of ‘deprivation’ and ‘fulfilment’, or ‘exist’ and ‘already will exist’, seem particularly philosophically satisfying, or seem to go much beyond restating a personal intuition, anyway, insofar as I can tell.
Do you seriously believe that there is a non-negligible chance that any human alive today will be alive in, let’s say, 2000 years? That sounds like wishful thinking to me.
I don’t think classical total utilitarianism is the correct theory of population ethics. If you do, I suppose breeding and wireheading a bunch of rats is a great way to help the world, but that just seems silly.
Lol, I’m glad I was a salient example of someone with silly beliefs =P. Just doing my part to push the Overton window.
Strictly speaking, I expect there could be beings that are a lot happier than rats (or any other current living thing), so we should really breed those instead.
When you’re advocating a reductio ad absurdum, I do wonder if that pushes the Overton window backwards.
Bob: “Ouch, my stomach hurts.”
Classical total utilitarian: “Don’t worry! Wait while I create more happy people to make up for it.”
Average utilitarian: “Never fear! Let me create more people with only mild stomach aches to improve the average.”
Egalitarian: “I’m sorry to hear that. Here, let me give everyone else awful stomach aches too.”
...
Negative utilitarian: “Here, take this medicine to make your stomach feel better.”
The medicine is a lethal dose of sedatives.
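For concreteness, here is a minimal, purely illustrative Python sketch of the aggregation rules behind the joke above. All the welfare numbers are made up; the point is only that each move raises its own theory’s score relative to the status quo without actually addressing Bob’s ache.

```python
def total_utility(pop):
    return sum(pop)

def average_utility(pop):
    return sum(pop) / len(pop) if pop else 0.0

def negative_utility(pop):
    # Only suffering counts; zero is the best achievable score.
    return sum(min(w, 0) for w in pop)

bob = [-2.0]  # Bob with his stomach ache (hypothetical welfare level)

scenarios = {
    "status quo":                    bob,
    "total util: add happy people":  bob + [5.0, 5.0, 5.0],
    "average util: add mild aches":  bob + [-1.0, -1.0, -1.0],
    "negative util: 'cure' Bob":     [],      # Bob no longer exists
    "actually give Bob medicine":    [3.0],   # Bob, ache gone
}

for label, pop in scenarios.items():
    print(f"{label:32s} total={total_utility(pop):6.1f} "
          f"avg={average_utility(pop):6.2f} neg={negative_utility(pop):6.1f}")
```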
Negative preference utilitarianism avoids that problem.
No, the negative preference utilitarian version wants to prevent Bob from surviving to have children and numerous future generations (even if their lives are all great overall), so strongly as to overwhelm any horrible thing that happens to Bob, including painfully killing him (or torturing him for thousands of years, if that was instrumentally useful) to prevent his procreation.
This is why we need to implement my own theory, “Negative-Leaning Average Preference Prioritarianism with Time-Discounted Utility for Future Generations and with Extra Points Awarded for Minimizing the Variance of Utilities Among the Present Generation.”
We’ll call it NLAPPTDUFGEPAMVUAPG.
As a classical total utilitarian, I think we should both give medicine to make your stomach feel better AND create more happy people.
Or ideally, we should create people who don’t get stomach aches in the first place.
To the extent this example has force, it seems to push towards prioritarianism rather than negative utilitarianism.
In this particular example, Rawlsianism will also point to making the least well-off better off before creating new lives or killing existing ones (though it’s quite possible your stomach wouldn’t leave you as badly off as the kids with schistosomiasis).
Wishful thinking? Hardly. A 1% chance of being alive in 2000 years is too unlikely to be psychologically useful to me, but even a smaller chance of being around forever is mathematically decisive for the specific stated purpose of making utility calculations.
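A minimal sketch of that expected-value point, with assumed (not asserted) numbers:

```python
# Rough, purely illustrative expected-value sketch; the probability and
# lifespans below are assumptions for the sake of the example, not estimates
# anyone in this thread has endorsed.

p_radical_life_extension = 0.001      # hypothetical 0.1% chance it works
normal_remaining_years   = 40         # hypothetical remaining lifespan
extended_years           = 1_000_000  # stand-in for "around (nearly) forever"

expected_years = (p_radical_life_extension * extended_years
                  + (1 - p_radical_life_extension) * normal_remaining_years)

print(f"Expected remaining life-years: {expected_years:,.0f}")
# ~1,040: the low-probability, huge-payoff branch dominates the sum, which is
# the sense in which a small chance can be "mathematically decisive" for
# utility calculations even while being psychologically negligible.
```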
Alternative approaches to population ethics are even worse, because instead of the so-called “repugnant conclusion” (which can be debated) you get the “sadistic conclusion”. Alternatives are not only less principled, but worse (1). None can ever be satisfactory (2).
If you dispense with utilitarian approaches (or at least allow alternatives to contribute), then terraforming or the lack of it is less of a focus.
(1) Interactive guide to population ethics, Ben West.
(2) http://people.su.se/~guarr/Texter/The%20Impossibility%20of%20a%20Satisfactory%20Population%20Ethics%20in%20Descriptive%20and%20Normative%20Approaches%20to%20Human%20Behavior%202011.pdf
This seems like a textbook case of a Pascal’s mugging.
I would describe my ethical view as negative-leaning (or perhaps asymmetric), but still broadly utilitarian.
A Pascal’s mugging is an intentional move by an actor where they are probably deceiving you. The fact that something has a low probability of huge payoff doesn’t make it a mugging and doesn’t imply that we should ignore it.
How do you involve moral uncertainty or moral pluralism?
How do you set the scale for happiness and suffering on which moral value is supposed to slope unevenly? (1)
http://www.amirrorclear.net/academic/ideas/negative-utilitarianism/images/graph.png
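For concreteness, a minimal sketch of the difference between a straight-line value function and the kinked, negative-leaning shape in the linked graph; the weight k = 3 is an arbitrary assumption for illustration.

```python
def symmetric_value(experience):
    # Classical/symmetric: moral value is directly proportional to experience.
    return experience

def negative_leaning_value(experience, k=3.0):
    # Suffering counts k times as much as an equal amount of happiness.
    return experience if experience >= 0 else k * experience

for e in [-10, -1, 0, 1, 10]:
    print(f"experience={e:4d}  symmetric={symmetric_value(e):6.1f}  "
          f"negative-leaning={negative_leaning_value(e):6.1f}")
```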
I don’t think moral uncertainty is a real problem. The slope isn’t uneven; I just believe that suffering is worse than most EAs do, but it would still be a straight line. I also do not support creating new happy beings instead of helping those who already exist and are suffering.
Are you completely certain that you should act according to your moral perspective?
I don’t think “moral uncertainty” is something that can be solved, or even a legitimate meta-ethical problem. You can’t compare how bad something is across multiple ethical theories. Is 1 violation of rights = 1 utilon? There’s also the possibility that the correct ethical theory hasn’t even been discovered yet, and we don’t have any idea what it would say.
Cool. Interestingly, twice you’ve surprised me by endorsing a position that I thought you were repudiating. A straight line in terms of experience and value is exactly what I think of as symmetric utilitarianism, just as puzzling over this question is exactly what I imagine by thinking moral uncertainty is a problem. The idea that the correct ethical theory hasn’t been discovered yet, if there is such a thing, seems to me the most important source of uncertainty of all, though it is rarely discussed.
I believe in a symmetry for people who already exist, but I also think empirically that many common sources of suffering are far worse than the common sources of happiness are good. For people who don’t exist, I don’t see how creating more happy people is good. The absence of happiness is not bad. This is where I think there is an asymmetry.
I don’t even understand what it would mean for an ethical theory to be correct. Does that mean it is hardwired into the physical constants of the universe? I guess I’m sort of a non-cognitivist.
Right, but is that for sources of happiness and suffering that are common among all people who will exist across all time? Because almost all of the people who will exist (irrespective of your actions) don’t currently exist.
There’s a difficulty that I guess you’d be sensitive to, in that it’s hard to distinguish the absence of happiness from the presence of suffering and vice versa. The difference between the two is not hardwired into the physical constants of the universe, if that is a phrasing that you might be sympathetic to, though no snark is intended.
If you’re non-cognitivist, then you could ask whether you “should” (even rationally or egoistically) act according to your moral perspective. If you choose to live out your values by some description, for some reason, then they’re not going to be purely represented by any ethical theory anyway, and it’d be unclear to me why you’d want to simplify your intuitions in that way.
If you don’t have a child, you are not decreasing your nonexistent offspring’s welfare/preference satisfaction. Beings who do not exist do not have preferences and cannot suffer. Once they exist (and become sentient), their preferences and welfare matter. This may not be hardcoded into the universe, but it’s not hard to distinguish between having a child and not having one.
I meant within one person. If you believe that there is a fundamental difference between intrapersonal and interpersonal comparisons, then you’re going to run into a wall trying to define persons… It doesn’t seem to me that this really checks out, putting aside the question of why one would want simple answers here as a non-cognitivist.
That there won’t be much animal suffering in the long run is not a consensus view. Most people don’t care about suffering in nature or in equivalent ecosystems. There is also no consensus that we should outlaw animal use even if we invent fully functional substitutes.
I personally think it’s naive to expect more flourishing than suffering even in humans. Just because a culture is technologically advanced doesn’t mean they won’t torture the defenseless on a large scale. I expect this to happen to humans and posthumans frequently. There is nothing in the universe that will prevent this.
It’s not a consensus, but none of the authors/researchers who I would expect to argue this actually expect the amount of animal suffering to outweigh the amount of animal flourishing in the long run. On the other hand, dozens of prominent researchers including Shulman, Beckstead, Wiblin and Bostrom, many of whom are hard-nosed utilitarians, have come to the opposite conclusion.
What I’m looking for in a credible assessment of this question is for people to think about what kinds of worlds we might see. Then the trick is to focus not on the worlds with a particular salient scenario in them, but on the ones that are most durable, with large scope and large population. Such worlds will be outliers in the sense that they are not natural anymore; we might live much longer; there may not be a meaningful category of “human” anymore. There may no longer be multiple living entities, but perhaps just one. Or there may be much better ways to understand the experiences of other beings. We may have a very different approach to morality. It may be possible to create accurate models of a being without simulating their emotions. We may have a better understanding of the mechanics of emotions. Et cetera, et cetera.
That kind of thinking, sharpened with an empirical approach that takes note of past improvements in technology and welfare, is needed to thoroughly investigate this issue, not a “single issue” presumption that ties the topic to one’s personal interests, however interesting that topic may seem.
Nick Bostrom et al. could be affected by confirmation bias and optimism bias.
The fact that their analyses include a wide range of topics, rather than focusing on confirming and emphasising specific hypotheses, is encouraging, and the fact that a large number of credible people have arrived at similar conclusions from widely varying perspectives is the best possible sign.