Note: as a result of the discussions I’ve had here in the comment section and elsewhere, my views have changed since I made this post. I no longer think permanently stalling technological progress is a realistic option, and am questioning whether a long-term AI development pause is even feasible. (H.F., Jan 15, 2024)
———
By this, I mean a world in which:
Humans remain the dominant intelligent, technological species on Earth’s landmasses for a long period of time (> ~10,000 years).
AGI is never developed, or it is banned/limited in the interests of human safety. AI never has much social or economic impact.
Narrow AI never advances much beyond where it is today, or it is banned/limited in the interests of human safety.
Mind uploading is impossible or never pursued.
Life extension (beyond modest gains due to modern medicine) isn’t possible, or is never pursued.
Any form of transhumanist initiatives are impossible or never pursued.
No contact is made with alien species or extraterrestrial AIs, no greater-than-human intelligences are discovered anywhere in the universe.
Every human grows, peaks, ages, and passes away within ~100 years of their birth, and this continues for the remainder of the human species’ lifetime.
Most other EAs I’ve talked to have indicated that this sort of future is suboptimal, undesirable, or best avoided, and this seems to be a widespread position among AI researchers as well (1). Even MIRI founder Eliezer Yudkowsky, perhaps the most well-known AI abolitionist outside of EA circles, wouldn’t go so far as to say that AGI should never be developed, or that transhumanist projects should never be pursued (2). And he isn’t alone—there are many, many researchers both within and outside of the EA community with similar views on P(extinction) and P(societal collapse), and they still wouldn’t accept the idea that the human condition should never be altered via technological means.
My question is: why can’t we just accept the human condition as it existed before smarter-than-human AI (and fundamental alterations to our nature) were considered to be more than pure fantasy? After all, the best way to stop a hostile, unaligned AI is to never invent it in the first place. The best way to avoid the destruction of future value by smarter-than-human artificial intelligence is to avoid obsession with present utility and convenience.
So why aren’t more EA-aligned organizations and initiatives (other than MIRI) presenting global, strictly enforced bans on advanced AI training as a solution to AI-generated x-risk? Why isn’t there more discussion of acceptance (of the traditional human condition) as an antidote to the risks of AGI, rather than relying solely on alignment research and safety practices to provide a safe path forward for AI (I’m not convinced such a path exists)?
Let’s leave out the considerations of whether AI development can be practically stopped at this stage, and just focus more on the philosophical issues here.
References:
Grace, K. [KatjaGrace] (2024, January 5). Survey of 2,778 AI authors: six parts in pictures. EA Forum.
Yudkowsky, E. S. (2023, March 29). The only way to deal with the threat from AI? Shut it down. Time. https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/
I think the simplest answer is not that such a world would be terrible (except for factory farming and wild animal welfare, which are major concerns), but that a world with all these transhumanist initiatives would be much better.
Thanks for pointing that out. Just to elaborate a little, the table below from Newberry 2021 has some estimates of how valuable the future can be. Even if one does not endorse the total view, person-affecting views may be dominated by possibilities of large future populations of necessary people.
I’m a technoskeptic because I’m a longtermist. I don’t want AI to destroy the potential of the future persons you describe (whose numbers are vast, as you linked) to exist and find happiness and fulfillment.
Note only the 4 smallest estimates would apply if humans continued to exist as in 2010.
True, but they are still vastly large numbers—and they are all biological, Earth-based beings given we continue to exist as in 2010. I think that is far more valuable than transforming the affectable universe for the benefit of “digital persons” (who aren’t actual persons, since to be a person is to be both sentient and biological).
I also don’t really buy population ethics. It is the quality of life, not the duration of an individual’s life or the sheer number of lives that determines value. My ethics are utilitarian but definitely lean more toward the suffering-avoidance end of things—and lower populations have lower potential for suffering (at least in aggregate).
Just to clarify, population ethics “deals with the moral problems that arise when our actions affect who and how many people are born and at what quality of life”. You can reject the total view, and at the same time engage with population ethics.
Since the industrial revolution, increases in quality of life (welfare per person per year) have gone hand in hand with increases in both population and life expectancy. So a priori opposing the latter may hinder the former.
Lower populations have lower potential for suffering, but, at least based on the last few hundred years, they may also have greater potential for suffering per person per year. I wonder whether you mostly care about minimising total or average suffering. If total, I can see how maintaining 2010 would be good. If average, as you seemed to suggest in your comment, technological progress still looks very good to me.
I’ve had to sit with this comment for a bit, both to make sure I didn’t misunderstand your perspective and that I was conveying my views accurately.
I agree that population ethics can still be relevant to the conversation even if its full conclusion isn’t accepted. Moral problems can arise from, for instance, a one-child policy, and this is in the purview of population ethics without requiring the acceptance of some kind of population-maximizing hedonic system (which some population-ethics proponents seem to support).
As for suffering—it is important to remember what it actually is. It is the pain of wanting to survive but being unable to escape disease, predators, war, poverty, violence, or myriad other horrors. It’s the gazelle’s agony at the lion’s bite, the starving child’s cry for sustenance, and the dispossessed worker’s sigh of despair. It’s easy (at least for me) to lose sight of this, of what “suffering” actually is, and so it’s important for me to state this flat out.
So, being reminded of what suffering is, let’s think about the kind of world where it can flourish. More population = more beings capable of suffering = more suffering in existence, for all instantiations of reality that are not literally perfect (since any non-perfect reality would contain some suffering, and this would scale up linearly with population). So lower populations are better from a moral perspective, because they have lower potential for suffering.
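To put that scaling claim in rough symbols (a simplification, using my own placeholder notation):

$$S_{\text{total}} \approx N \times \bar{s}, \qquad \bar{s} > 0 \text{ in any non-perfect world},$$

so as long as the average suffering per sentient being $\bar{s}$ stays above zero, total suffering $S_{\text{total}}$ grows with the population $N$.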
Most people I’ve seen espouse a pro-tech view seem to think (properly aligned) smarter-than-human AI will bring a utopia, similar to the paradises of many myths and faiths. Unless it can actually do that (and I have no reason to believe it will), suffering-absence (and therefore moral good, in my perspective) will always be associated with lower populations of sentient beings.
Thanks for following up.
You seem to be supporting the reduction of total suffering. Which of the following would you pick:
A: your perfect utopia forever plus a very tiny amount of suffering (e.g. the mildest of headaches) for 1 second.
B: nothing forever (e.g. suffering-free collapse of the whole universe forever).
I think A is way way better than B, even though B has less suffering. If you prefer A to B, I think you would be putting some value on happiness (which I think is totally reasonable!). So the possibility of a much happier future if technological progress continues should be given significant consideration?
In this (purely hypothetical, functionally impossible) scenario, I would choose option B—not because of the mild, transient suffering in scenario A, but because of the possibility of serious suffering emerging in the future (which doesn’t exist in B).
Happiness is also extremely subjective, and therefore can’t be meaningfully quantified, while the things that cause suffering tend to be remarkably consistent across times, places, and even species. So basing a moral system on happiness (rather than suffering-reduction) seems to make no sense to me.
Scenario A assumed “your perfect utopia forever”, so there would be no chance for serious suffering to emerge.
Then that would make Scenario A much more attractive to me (not necessarily from a moral perspective), and I apologize for misunderstanding your hypothetical. To be honest, with the caveat of forever, I’m not sure which scenario I’d prefer. A is certainly much more interesting to me, but my moral calculus pushes me to conclude B is more rational.
I also get that it’s an analogy to get me thinking about the deeper issues here, and I understand. My perspective is just that, while I certainly find the philosophy behind this interesting, the issue of AI permanently limiting human potential isn’t hypothetical anymore.
It’s likely to happen in a very short period of time (relative to the total lifespan of the species), unless indefinite delay of AI development really is socio-politically possible (and from what evidence I’ve seen recently, it doesn’t seem to be). [epistemic certainty: relatively low, 60%]
How could AI stop factory farms (aside from making humans extinct)? I’m honestly interested in the connection there. If you’re referring to cellular agriculture, I’m not sure why any form of AI would be needed to accomplish that.
To clarify: the point of this parenthetical was to state reasons why a world without transhumanist progress may be terrible. I don’t think animal welfare concerns disappear or are even remedied much with transhumanism in the picture. As long as animal welfare concerns don’t get much worse, however, transhumanism changes the world either from good to amazing (if we figure out animal welfare) or from terrible to good (if we don’t). Assuming AI doesn’t kill us, obviously.
The universe can probably support a lot more sentient life if we convert everything that we can into computronium (optimized computing substrate) and use it to run digital/artificial/simulated lives, instead of just colonizing the universe with biological humans. To conclude that such a future doesn’t have much more potential value than your 2010 world, we would have to assign zero value to such non-biological lives, or value each of them much less than a biological human, or make other very questionable assumptions. The Newberry 2021 paper that Vasco Grilo linked to has a section about this.
Such lives wouldn’t be human or even “lives” in any real, biological sense, and so yes, I consider them to be of low value compared to biological sentient life (humans, other animals, even aliens should they exist). These “digital persons” would be AIs, machines with some heritage from humanity, yes, but let’s be clear: they aren’t us. To be human is to be biological, mortal, and Earthbound—those three things are essential traits of Homo sapiens. If those traits aren’t there, one isn’t human but something else, even if one was once human. “Digitizing” humanity (or even the entire universe, as suggested in the Newberry paper) would be destroying it, even if it is an evolution of sorts.
If there’s one issue with the EA movement that I see, it’s that our dreams are far too big. We are rationalists, but our ultimate vision for the future of humanity is no less esoteric than the visions of Heavens and Buddha fields written by the mystics—it is no less a fundamental shift in consciousness, identity, and mode of existence.
Am I wrong for being wary of this on a more than instrumental level (I would argue that even Yudkowsky’s objections are merely instrumental, centered on x- and s-risk alone)? I mean, what would be suboptimal about a sustainable, Earthen existence for us and our descendants? Is it just the numbers (can the value of human lives necessarily be measured mathematically, much less in numbers)?
I think dying is bad.
Also, I’m not sure why “no life extension” and “no AGI” have to be linked. We could do life extension without AGI, it’d just be harder.
I think dying is bad too, and that’s why I want to abolish AI. It’s an existential risk to humanity and other sentient species on Earth, and anywhere close enough to be reached via interstellar travel at any point in the future.
“No life extension” and “no AGI” aren’t inherently linked, but they are practically linked in some important ways. These are:
1. Human intelligence may not be enough to solve the hard problems of aging and cancer, meaning we may never develop meaningful life extension tech.
2. Humanity may not have enough time or cultural stability to commit to a megaproject like this (which will likely take centuries at purely human scales of research) before climate change, economic inequality, and other x- and s-risks greatly weaken our species.
3. Both AGI and life extension come from the same philosophical place: the rejection of our natural limits as biological animals (put there by Nature via billions of years of natural selection). I think this is extremely dangerous, as it encourages us to seek out new x-risks to find a solution to our mortality (AGI being the most obvious, but large-scale gene editing and brain augmentation carry a high extinction chance as well).
Basically, my argument is that any attempt to escape our mortality is likely to cause more death and suffering than it prevents. In light of this, we should accept our mortality and try to optimize society for 70-80 years of individual life and health. We should train future generations to continue human progress. In other words, we should stick to the model we’ve used for all of human history before 2020.
The number of non-human animals being tortured is one reason. But that doesn’t (yet) justify accelerating AGI.
I agree that the current state of non-human animal treatment by humans is atrocious, and animal welfare is my primary cause area because, from a moral perspective, I cannot abide by the way this society treats animals. With that said, I don’t see how accelerating AGI would stop animal torture (unless you’re referring to human extinction itself, but I’m not convinced AGI would be any better than humans in its treatment of non-human sentient beings).
I agree. I just think there is some chance that AGI would wipe all of us out in an instant. And I don’t trust humans to improve the lives of non-human animals any time soon.
I was surprised to see the comments on this post, which mostly provide arguments in favor of pursuing technological progress, even if this might lead to a higher risk of catastrophes.
I would like to chip in the following:
Preferences regarding the human condition are largely irrelevant to technological progress in the areas you mention. Technological progress is driven by a large number of individuals seeking prestige and money. There is simply consumer demand for AI and for technologies that may alter the human condition. Thus, technological progress happens irrespective of whether it is considered good or bad.
Further reading:
The philosophical debate you are referring to is sometimes discussed as the scenario “1984”, e.g. in Max Tegmark’s “Life 3.0”. He also provides reasons to believe that this scenario is not satisfying, given better alternatives.
Thanks for your response! I did mean to limit my post by saying that I wasn’t intending to discuss the practical feasibility of permanently stopping AI progress in the actual world, only the moral desirability of doing so. With that said, I don’t think postmodern Western capitalism is the final word on what is possible in either the economic or moral realms. More imagination is needed, I think.
Thanks for the further reading suggestion—adding it to my list.
Great question which merits more discussion. I’m sure there is an interesting argument to be made about how we should settle for “good enough” if it helps us avoid extinction risks.
One argument for continued technological progress is that our current civilization is not particularly stable or sustainable. One of the lessons from history is that seemingly stable empires such as the Roman or Chinese empires eventually collapse after a few hundred years. If there isn’t more technological progress so that our civilization reaches a stable and sustainable state, I think our current civilization will eventually collapse because of climate change, nuclear war, resource exhaustion, political extremism, or some other cause.
I agree that our civilization is unstable, and climate change, nuclear war, and resource exhaustion are certainly important risks to be considered and mitigated.
With that said, societal collapse—while certainly bad—is not extinction. Resource exhaustion and nuclear war won’t drive us to extinction, and even climate change would have a hard time killing us all (in the absence of other catastrophes, which is certainly not guaranteed).
Humans have recovered from societal collapses several times in the past, so you would have to make some argument as to why this couldn’t happen again should the same thing happen to Western techno-capitalist society.
As an example, even if P(collapse) conditional on AGI never being achieved were 1, that would still be a preferable outcome to pursue versus creating AGI with a P(extinction) of > .05 (the probability cited in the recent AI expert survey). I’m willing to accept a very high level of s-risk(s) to avoid an x-risk with a sufficiently high probability of occurrence, because extinction would be a uniquely tragic event.
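To make the comparison explicit (the 5% figure is from the survey; the value terms are schematic placeholders of my own, not anyone’s actual estimates):

$$\underbrace{1.0 \times V_{\text{collapse}}}_{\text{never build AGI}} \quad \text{vs.} \quad \underbrace{0.95 \times V_{\text{flourishing}} + 0.05 \times V_{\text{extinction}}}_{\text{build AGI}}$$

Since collapse leaves open the possibility of recovery while extinction forecloses all future value, I treat $V_{\text{extinction}}$ as far worse than any survivable outcome, which is why the first option wins for me even at certainty of collapse.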
Nuclear war is inevitable on the scale of decades to centuries (see these: one and two).
I’m not familiar enough with the arguments around this to comment on it intelligently. With that said, nuclear war is not necessarily an extinction event—it is likely that even with a full-scale nuclear exchange between the US, China, and Russia, some small breeding populations of humans would survive somewhere on Earth (source). Hostile AI takeover would likely kill every last human, however.
Well, nuclear weapons already exist (that risk is not conditional on future developments), and how many nuclear wars can we survive? One, two, how many?
There is a nuclear-weaponized “human alignment” problem. Without a clear road to utopia, how can we avoid, I don’t know, a nuclear war every 200 years? A catastrophe of geological magnitude recurring on a historical timescale…
Still, I would say my two posts linked above are not so difficult to read.
I think a strong argument would be the use of AI to eliminate large sectors of work in society, and therefore enable UBI or a similar system. I don’t see how this is possible using 2010’s or even 2024’s AI technology. Furthermore, by allowing humans to have more free time and increased QALYs (from, say, AI-related medical advances), people may become more sensitive to animal welfare concerns. Even without the second part of the argument, I think freeing people from having to work, especially in agriculture, manual labor, sweatshops, cashier jobs, etc., is perhaps a compelling reason to advocate against your proposal.
If anyone has any specific recommendations of works on this topic, do let me know!
Thanks for your response, Alexa! I’d recommend reading anything by Eliezer Yudkowsky (the founder of the Machine Intelligence Research Institute and one of the world’s most well-known AI safety advocates), especially his open letter (linked here). This journal article by Joe Carlsmith (who is an EA and I believe a Forum participant as well) gives a more technical case for AI x-risk, and does it from a less pessimistic perspective than Yudkowsky.
Thank you for writing this up, Hayven! I think there are multiple reasons why it will be very difficult for humans to settle for less. Primarily, I suspect this is the case because a large part of our human nature is to strive to maximize resources and to consistently improve the conditions of life. There are clear evolutionary advantages to having this ingrained in a species. This tendency to want more took us from picking berries and hunting mammoths to living in houses with heating, connecting with our loved ones via video calls, and benefiting from better healthcare. In other words, I don’t think the human condition was different in 2010; it was pretty much exactly the same as it is now, just as it was 20,000 years ago. “Bigger, better, faster.”
This human tendency, combined with our short-sightedness, is a perfect recipe for human extinction. If we want to overcome the Great Filter, I think the only realistic way we will accomplish this is by figuring out how to combine this desire for more with greater wisdom and better coordination. It seems that we are far from that point, unfortunately.
A key takeaway for me is the increased likelihood of success with interventions that guide, rather than restrict, human consumption and development. These strategies seem more feasible as they align with, rather than oppose, human tendencies towards growth and improvement. That does not mean that they should be favoured though, only that they will be more likely to succeed. I would be glad to get pushback here.
I can highly recommend the book The Molecule of More to read more about this perspective (especially Chapter 6).
Thank you so much for this comment, Johan! It is really insightful. I agree that working with our evolutionary tendencies, instead of against them, would be the best option. The hard problem, as you mentioned, is how do we do that?
(I’ll give the chapter a read today—if my power manages to stay on! [there’s a Nor’easter hitting where I live]).
I think what has changed since 2010 has been general awareness of transcending human limits as a realistic possibility. Outside of ML researchers, MIRI and the rationality community, who back then considered AGI reshaping society in our lifetimes a serious possibility?
There is a very real psychological difference between the way the average human sees “sci-fi risks” (alien invasion, asteroids, Cthulhu rising) vs. realistic ones (war, poverty, recession, climate change). In 2010 AI was a sci-fi risk, in 2024 it is a realistic one. Most humans are still struggling with that transition, and we are getting technically closer to reaching AGI. This is extremely dangerous.
I hope you are okay with the storm! Good luck there. And indeed, figuring out how to work with one’s evolutionary tendencies is not always straightforward. For many personal decisions this is easier, such as recognising that sitting 10 hours a day at the desk is not what our bodies have evolved for. “So let’s go for a run!” When it comes to large-scale coordination, however, things get trickier...
“I think what has changed since 2010 has been general awareness of transcending human limits as a realistic possibility.” → I agree with this and your following points.