An Argument for Why the Future May Be Good
In late 2014, I ate lunch with an EA who prefers to remain anonymous. I had originally been of the opinion that, should humans survive, the future is likely to be bad. He convinced me to change my mind about this.
I haven't seen this argument written up anywhere and so, with his permission, I'm attempting to put it online for discussion.
A sketch of the argument is:
1. Humans are generally not evil, just lazy
2. Therefore, we should expect there to only be suffering in the future if that suffering enables people to be lazier
3. The most efficient solutions to problems don't seem like they involve suffering
4. Therefore, as technology progresses, we will move more towards solutions which don't involve suffering
5. Furthermore, people are generally willing to exert some (small) amount of effort to reduce suffering
6. As technology progresses, the amount of effort required to reduce suffering will go down
7. Therefore, the future will contain less net suffering
8. Therefore, the future will be good
My Original Theory for Why the Future Might Be Bad
There are about ten billion farmed land animals killed for food every year in the US, which has a population of ~320 million humans.
These farmed animals overwhelmingly live in factory farming conditions, which involve enormous cruelty, and they probably have lives which are not worth living. Since (a) farmed animals so completely outnumber humans, (b) humans are the cause of their cruelty, and (c) humans haven't caused an equal or higher number of beings to lead happy lives, human existence is plausibly bad on net.
Furthermore, technology seems to have instigated this problem. Animal agriculture has never been great for the animals being slaughtered, but there was historically some modicum of welfare. For example, chickens had to be let outside at least some of the time, because otherwise they would develop vitamin D deficiencies. But with the discovery of vitamins and methods for synthesizing them, chickens could be kept indoors for their entire lives. Other scientific advancements like antibiotics enabled them to be packed densely, so that now the average chicken has 67 square inches of space (about two thirds the size of a sheet of paper).
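As a rough check on these figures, here is a back-of-the-envelope calculation using only the numbers quoted above (the sheet of paper is assumed to be a standard 8.5 in × 11 in letter sheet):

$$\frac{10 \times 10^{9}\ \text{farmed land animals per year}}{3.2 \times 10^{8}\ \text{humans}} \approx 31\ \text{animals killed per American per year}$$

$$\frac{67\ \text{in}^2}{8.5\ \text{in} \times 11\ \text{in}} = \frac{67}{93.5} \approx 0.72 \approx \text{two thirds of a sheet of paper}$$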
It's very hard to predict the future, but one reasonable thing you can do is guess that current trends will continue. Even if you don't believe society is currently net negative, it seems fairly clear that the trend has been getting worse (e.g. the number of suffering farmed animals grew much more rapidly than the [presumably happy] human population over the last century), and therefore we should predict that the future will be bad.
His Response
Technology is neither good nor bad; it's merely a tool which enables the people who use it to do good or bad things. In the case of factory farming, it seemed to me (Ben) that people overwhelmingly wanted to do bad things, and therefore technological progress was bad. Technological progress will presumably continue, and therefore we might expect this ethical trend to continue and the future to be even worse than today.
He pointed out that this wasn't an entirely accurate way of viewing things: people don't actively want to cause suffering, they are just lazy, and it turns out that the lazy solution in this case causes more suffering.
So the key question is: when we look at problems that the future will have, will the lazy solution be the morally worse one?
It seems like the answer is plausibly "no". To give some examples:
- Factory farming exists because the easiest way to get food which tastes good and meets various social goals people have causes cruelty. Once we become more scientifically advanced, though, it will presumably be even more efficient to produce food without any conscious experience on the part of the animals (i.e. clean meat); at that point, the lazy solution is also the more ethical one.
  - (This is arguably what happened with domestic work animals on farms: we now have cars and trucks which replaced horses and mules, making even the phrase "beat like a rented mule" seem appalling.)
- Slavery exists because there is currently no way to get labor from people without them having conscious experience. Again, though, this is due to a lack of scientific knowledge: there is no obvious reason why conscious experience is required for plowing a field or harvesting cocoa, and therefore the more efficient solution is to simply have nonconscious robots do these tasks.
  - (This is arguably what happened with human slavery in the US: industrialization meant that slavery wasn't required to create wealth in a large chunk of the US, and therefore slavery was outlawed.)
Of course, this is not a definitive proof that the future will be good. One can imagine the anti-GMO lobby morphing into an anti-clean meat lobby as part of some misguided appeal to nature, for example.
But this does give us hope that the lazy (and therefore default) position on issues will generally be the more ethical one, and therefore people would need to actively work against the grain in order to make the world less ethical.
If anything, we might have some hope for the opposite: a small but nontrivial fraction of people are currently vegan, and a larger number spend extra money to buy animal products which (they believe) are less inhumane. I am not aware of any large group which does the opposite (goes out of its way to cause more cruelty to farmed animals). Therefore, we might guess that the average person is slightly ethical, and so people would not only be vegan if that were the cheaper option, but would also be willing to pay a small amount of money to live more ethically.
The same thing goes for slavery: a small fraction of consumers go out of their way to buy slave-free chocolate, and there is no corresponding group who go out of their way to buy chocolate produced with slavery. Once machines come close to human cocoa-growing abilities, we would expect chocolate-industry slavery to die off.
Summary
If the default course of humanity is to be ethical, our prior should be that the future will be good, and the burden of proof shifts to those who believe that the future will be bad.
I do not believe this provides a knockdown counterargument to concerns about s-risks, but I hope that publishing it encourages more discussion of the topic and offers a viewpoint some readers have not considered before.
This post represents a combination of my and the anonymous EA's views. Any errors are mine. I would like to thank Gina Stuessy and this EA for proofreading a draft of this post, and for talking about this and many other important ideas about the far future with me.
What lazy solutions will look like seems unpredictable to me. Suppose someone in the future wants to realistically roleplay a historical or fantasy character. The lazy solution might be to simulate a game world with conscious NPCs. The universe contains so much potential for computing power (which presumably can be turned into conscious experiences) that even if a very small fraction of people do this (or other things whose lazy solutions happen to involve suffering), it could create an astronomical amount of suffering.
Lazy solutions to problems of motivating, punishing, and experimenting on digital sentiences could also involve astronomical suffering.
Yes, I agree. More generally: the more things consciousness (and particularly suffering) is useful for, the less reasonable point (3) above is.
One concern might be not malevolence, but misguided benevolence. For just one example, spreading wild animals to other planets could potentially involve at least some otherwise avoidable suffering (within at least some of the species), but might be done anyway out of misguided versions of "conservationist" or "nature-favoring" views.
I'm curious if you think that the "reflective equilibrium" position of the average person is net negative?
E.g. many people who would describe themselves as "conservationists" probably also think that suffering is bad. If they moved into reflective equilibrium, would they give up the conservation or the anti-suffering principles (where these conflict)?
I don't know, but I would guess that people would give up conservation under reflective equilibrium (assuming and insofar as conservation is, in fact, net negative).
This is what I am most concerned about. It is likely that there will be less suffering in those areas where humans are the direct cause or recipient of suffering (e.g. farmed animals, global poverty). I think it is less likely that there will be a reduction in suffering in areas where we are not the clear cause of the suffering.
Because of the above, I don't think wild-animal suffering will be solved at some point along the line of our technological progress. That said, I do think the continued existence of humans is a good thing, because I'm fairly confident that without humans, the world existing is a net negative.
Yeah, I think the point I'm trying to make is that it would require effort for things to go badly. This is, of course, importantly different from saying that things can't go badly.
Thanks for writing this up! I agree that this is a relevant argument, even though many steps of the argument are (as you say yourself) not airtight. For example, consciousness or suffering may be related to learning, in which case point 3) is much less clear.
Also, the future may contain vastly larger populations (e.g. because of space colonization), which, all else being equal, may imply (vastly) more suffering. Even if your argument is valid and the fraction of suffering decreases, it's not clear whether the absolute amount will be higher or lower (as you claim in point 7).
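To illustrate with purely made-up numbers (not taken from the post or this comment): write total suffering as $S = f \times N$, where $f$ is the fraction of the population that suffers and $N$ is the population size. If $f$ falls by a factor of 10 while $N$ grows by a factor of 1000, then

$$S_{\text{future}} = \left(\tfrac{1}{10} f_{\text{today}}\right) \times \left(1000\, N_{\text{today}}\right) = 100 \times S_{\text{today}},$$

i.e. the absolute amount of suffering is a hundred times higher even though the fraction has decreased.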
Finally, I would argue we should focus on the bad scenarios anyway (given sufficient uncertainty), because there's not much to do if the future will "automatically" be good. If s-risks are likely, my actions matter much more.
(This is from a suffering-focused perspective. Other value systems may arrive at different conclusions.)
Thanks for the response!
It would be surprising to me if learning required suffering, but I agree that if it does then point (3) is less clear.
Good point! I rewrote it to clarify that there is less net suffering.
Where I disagree with you most is with your statement that "there's not much to do if the future will 'automatically' be good." Most obviously, we have the difficult (and perhaps impossible) task of ensuring the future exists at all (maxipok).
The Foundational Research Institute site in the links above seems to have a wealth of writing about the far future!
Thanks for the post! If lazy solutions reduce suffering by reducing consciousness, they also reduce happiness. So, for example, a future civilization optimizing for very alien values relative to what humans care about might not have much suffering or happiness (if you don't think consciousness is useful for many things; I think it is), and the net balance of welfare would be unclear (even relative to a typical classical-utilitarian evaluation of net welfare).
Personally I find it very likely that the long-run future of Earth-originating intelligence will optimize for values relatively alien to human values. This has been the historical trend whenever one dominant life form replaces another. (Human values are relatively alien to those of our fish ancestors, for example.) The main way out of this conclusion is if humans' abilities for self-understanding and cooperation make our own future evolution an exception to the general trend.
Thanks Brian!
I think you are describing two scenarios:
1. Post-humans will become something completely alien to us (e.g. mindless outsourcers). In this case, arguments that these post-humans will not have negative states equally imply that they won't have positive states. Therefore, we might expect some (perhaps very strong) regression towards neutral moral value.
2. Post-humans will have some sort of abilities which are influenced by current humans' values. In this case, it seems like these post-humans will have good lives (at least as measured by our current values).
This still seems to me to be asymmetric: as long as you have some positive probability on scenario (2), isn't the expected value greater than zero?
I think maybe what I had in mind with my original comment was something like: "There's a high probability (maybe >80%?) that the future will be very alien relative to our values, and it's pretty unclear whether alien futures will be net positive or negative (say 50% for each), so there's a moderate probability that the future will be net negative: namely, at least 80% * 50%." This is a statement about P(future is positive), but probably what you had in mind was the expected value of the future, counting the IMO unlikely scenarios where human-like values persist. Relative to the values of many people on this forum, that expected value does seem plausibly positive, though there are many scenarios where the future could be strongly and not just weakly negative. (Relative to my values, almost any scenario where space is colonized is likely negative.)
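Spelled out, the arithmetic behind that rough lower bound is:

$$P(\text{future is net negative}) \geq P(\text{alien values}) \times P(\text{net negative} \mid \text{alien values}) \approx 0.8 \times 0.5 = 0.4$$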
I think one important reason for optimism that you didn't explicitly mention is the expanding circle of moral concern, a la Peter Singer. Sure, people's behaviors are strongly influenced by laziness/convenience/self-interest, but they are also influenced by their own ethical principles, which in a society-wide sense have generally grown better and more sophisticated over time. For the two examples that you give, factory farming and slavery, your view seems to be that (and correct me if I'm wrong) in the future, people will look for more efficient ways to extract food/labor, and those more efficient ways will happen to involve less suffering; therefore, suffering will decrease in the future. In my head it's the other way around: people are first motivated by their moral concerns, which may then spur them to find efficient technological solutions to these problems. For example, I don't think the cultured meat movement has its roots in trying to find a more cost-effective way to make meat; I think it started off with people genuinely concerned about the suffering of factory-farmed animals. Same with the abolitionist movement to abolish slavery in the US; I don't think industrialization had as much to do with it as people's changing views on ethics.
We reach the same conclusion (that the future is likely to be good), but I think for slightly different reasons.
The change in ethical views seems very slow and patchy, though: there are something like 30 million slaves in the world today, compared to 3 million in the US at its peak. (I don't know how worldwide numbers have changed over time.)
Human history has many examples of systematic unnecessary sadism, such as torture for religious reasons. Modern Western moral values are an anomaly.
Thanks for the response! But is that true? The examples I can think of seem better explained by a desire for power etc. than by suffering as an end goal in itself.
Here is another argument for why the future with humanity is likely better than the future without it. Possibly, there are many things of moral weight that are independent of humanity's survival. And if you think that humanity cares about moral outcomes more than zero, then it might be better to have humanity around.
For example, in many scenarios of human extinction, wild animals would continue existing. In your post you assigned farmed animals enough moral weight to determine the moral value of the future, and wild animals should probably have even more moral weight. There are 10x more wild birds than farmed birds and 100-1000x more wild mammals than farmed animals (and of course many, many more fish or even invertebrates). I am not convinced that wild animals' lives are on average not worth living (i.e. that they contain more suffering than happiness), but even without that, there surely is a huge amount of suffering. If you believe that humanity will have the potential to prevent or alleviate that suffering some time in the future, that seems pretty important.
The same goes for unknown unknowns. I think we know extremely little about what is morally good or bad, and maybe our views will fundamentally change in the (far) future. Maybe there are suffering non-intelligent extraterrestrials, maybe bacteria suffer, maybe there is moral weight in places where we would not have expected it (http://reducing-suffering.org/is-there-suffering-in-fundamental-physics/), maybe something completely different.
Let's see what the future brings, but it might be better to have an intelligent and at least slightly utility-concerned species around, as compared to no intelligent species.
For those with a strong suffering focus, there are reasons to worry about an intelligent future even if you think suffering in fundamental physics dominates, because intelligent agents seem to me more likely to want to increase the size or vivacity of physics rather than decrease it, given generally pro-life, pro-sentience sentiments (or, if paperclip maximizers control the future, to increase the number of quasi-paperclips that exist).
There is now a more detailed analysis of a similar topic: The expected value of extinction risk reduction is positive
Not sure if "lazy" is quite the right word. For example, it took work to rebuild chicken housing so that each chicken got even less space. I think "greedy" is a more accurate word.
By the way, does the vegan movement talk about running non-factory farms that sell animal products which are subsidized so that they are priced competitively with factory-farm products? If farming animals ethically costs a premium, then from a purely consequentialist perspective it doesn't seem like it should matter whether the premium is paid by the customer or by some random person who wants to convert dollars into reduced suffering.
BTW I think this is pretty relevant to the Moloch line of thinking.
I would guess it'd be much less cost-effective than lobbying for welfare reforms and such.
If the altruist spends her money on this, she has less left over to spend on other things. In contrast, most consumers won't spend their savings on highly altruistic causes.
I suppose this cost-effectiveness difference could be seen as a crude way to measure how close we are to the pure Moloch-type scenario?
I agree my proposal would probably not make sense for anyone reading this forum. It was more of theoretical interest. It's not clear whether equivalent actions exist for other Moloch-type scenarios.
A complication: whole-brain emulation seeks to instantiate human minds, which are conscious by default, in virtual worlds. Any suffering involved in that can presumably be edited away, if I go by what Robin Hanson wrote in Age of Em. Hanson also thinks that this might be a more likely first route to HLAI, which suggests it may be the "lazy solution" compared to mathematically-based AGI. However, in the S-risks talk at EAG Boston, an example of an s-risk was something like this.
Analogizing like this isn't my idea of a first-principles argument, and therefore what I'm saying is not airtight either, considering the levels of uncertainty about paths to AGI.
Could this be rewritten as "8. Therefore, the future will be better than the present" or would that change its meaning?
If it would change the meaning, then what do you mean by "good"? (Note: If you're confused about why I'm confused about this, then note that it seems to me that 8 does not follow from 7 for the meaning of "good" I usually hear from EAs (something like "net positive utility").)
Yeah, it would change the meaning.
My assumption was that, if things monotonically improve, then in the long run (perhaps the very, very long run) we will get to net positive. You are proposing that we might instead asymptote at some negative value, even though we are still always improving?
I wasn't proposing that (I in fact think the present is already good), but rather was just trying to better understand what you meant.
Your comment clarified my understanding.
On premise 1, a related but stronger claim is that humans tend to shape the universe to their values much more strongly than do blind natural forces. This allows for a simpler but weaker argument than yours: it follows that, should humans survive, the universe is likely to be better (according to those values) than it otherwise would be.
I think a good definition of suffering is also required for this. Are we talking only about human suffering? And if so, in what sense? Momentary suffering, chronic suffering, extremity of suffering?