Longtermism and Uncertainty
One of the brilliant innovations of EA was the idea that a donor wishing to give to a charity should focus on the outcomes the contribution produces, specifically by seeking measurements of its impact on the target recipients, ideally through A/B testing or other investigative methods commonly employed in scientific studies. While this idea seems obvious in retrospect, and while a few philanthropists (like Bill Gates) have pursued it for many years, it is a radical shift from how giving to charity has typically been done.
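To make concrete what such an outcome measurement might look like, here is a minimal sketch in Python. The scenario and every number are invented for illustration; real charity evaluations (e.g., randomized controlled trials) are far more involved.

```python
# Minimal sketch: comparing an outcome measure between recipients of an
# intervention (treatment) and a comparable group that did not receive
# it (control). All numbers are hypothetical.
from math import sqrt
from statistics import mean, stdev

# Hypothetical outcome, e.g., monthly household income after a cash transfer.
treatment = [212, 250, 198, 240, 231, 205, 260, 244]
control = [190, 201, 185, 210, 195, 188, 207, 199]

effect = mean(treatment) - mean(control)
# Standard error of the difference in means (Welch-style).
se = sqrt(stdev(treatment) ** 2 / len(treatment)
          + stdev(control) ** 2 / len(control))

print(f"Estimated effect: {effect:.1f} +/- {1.96 * se:.1f} (rough 95% CI)")
```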
Longtermism attempts to anticipate long-term future goods or ills in order to determine the moral value of pursuing them. However, in general, the further in the future a potential issue lies, the less certainty we can have about it.
Let us imagine a person in England in the year 1500 trying to think about how she could make the world a better place 500 years in the future. I will not dwell on the issues of the time, such as the enclosure of land for sheep, inflation, the growing disparity between rich and poor, and changes in farming techniques. It seems clear that any ideas this person had about what she could do to make the world a better place in the year 2000 would likely be far removed from the reality of what was to come, and any efforts based on those ideas, while they might improve the well-being of humanity for years or decades, could not realistically be expected to still be improving humanity’s well-being in the year 2000. Well-meaning efforts to improve the world (such as the crusades to the Holy Land between 1095 and 1291) could easily have had long-term negative effects rather than positive ones. Morality in Europe at that time was based entirely on Christian ideas and ideals, which is no longer the case in Europe today.
Since 1500, the rate of change in the world has greatly increased, and the world seems increasingly unstable, in part because of our growing ability to affect it. So it seems likely that we are even less able to predict the situation the world will face 500 years from now, and more unlikely still that we can choose an intervention today, say a donation to a charity, a political group, or a group of scientists, with any level of confidence that it will produce a beneficial (or harmful) outcome that far in the future.
Based on this, it seems clear that it is hubris to plan some intervention for the good of populations that may or may not live thousands of years from now. My observation is that we greatly overrate our ability to predict the future. From time to time, Scientific American and other magazines run articles listing predictions from the past. Even though they were made by the experts of their day, these predictions tend to contain at least as many misses as hits. (See “The History of Predicting the Future” for a discussion of some of the issues.)
Now admittedly, if some action carries a significant risk of wiping out all of humanity within the next 50 or 100 years, that may be a realistic threat worth trying to prevent. But even there, the likelihood of humanity being driven extinct by any near-term human action seems small.
Humans are scattered far and wide over the surface of the earth. Many live in large cities, some are farmers, while others are hunter-gatherers in indigenous tribes. Humans are remarkably adaptable, living everywhere from the tropics to the Arctic and from below sea level to miles above it. So even the most devastating human-caused event we can imagine, from nuclear winter to an AGI set (for some odd reason) on eliminating humans, is unlikely to get rid of all of us. A malevolent, human-developed pathogen might kill most humans, but there would doubtless be some who are immune, or some islands the infection never reached.
In addition, unlike the dinosaurs, a great many human beings would be alerted to a potential extinction event in time to take precautions. The world is large, and no human-caused event could kill all humans immediately. We were able to develop a vaccine against Covid within a year; a century or two from now, we might be able to develop one in days or weeks.
As a result, efforts in longtermism that attempt to calculate the benefit of preventing a future that leads to human extinction are, in my opinion, fundamentally flawed. Take, for example, the malevolent AGI. First, estimating the probability that we will develop an AGI that is overall more intelligent than humans amounts to throwing darts at a dartboard. Second, any timeline for when that would occur is just a wild guess. Third, the likelihood that such an AGI might undertake activities inimical to humanity is a complete unknown. Fourth, the chances that such activities could lead to the extinction of humanity (before humanity is made extinct for some other reason) seem very small.
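To illustrate why chaining these unknowns together is so fragile, here is a small sketch. The factor names mirror the four steps above, but every interval bound is an invented placeholder, not anyone’s published estimate:

```python
# Hypothetical illustration of how uncertainty compounds when four highly
# uncertain probabilities are multiplied together. All bounds are made up.
factors = {
    "superhuman AGI is developed":    (0.10, 0.90),
    "it arrives within the horizon":  (0.05, 0.80),
    "it acts against humanity":       (0.01, 0.50),
    "those actions cause extinction": (0.001, 0.10),
}

low = high = 1.0
for name, (lo, hi) in factors.items():
    low *= lo
    high *= hi

print(f"Combined probability: {low:.1e} to {high:.1e}")
# Prints a range from 5.0e-08 to 3.6e-02: roughly six orders of magnitude,
# so any single point estimate built this way is close to meaningless.
```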
My overall conclusion is that longtermism, at least as I find it discussed in the EA community, is not effective. We cannot run A/B trials of the impact of efforts on the far future. We greatly overestimate our ability to anticipate what the future will bring, or what the efforts we make to alter it might actually accomplish. For this reason, I feel that concern for issues like an AGI leading to human extinction should be given far less weight than either human health in impoverished areas or topics like the climate crisis, which is already hurting the lives of millions and which science is fairly certain will get much worse over the coming decades.
Strong agree. I think the EA community far overestimates its ability to predictably affect the future, particularly the far future.
I enjoyed reading this post! It is well-written. I agree with the general sentiment that prediction becomes more difficult the longer the horizon. Though my reasons are slightly different, like you, I remain unconvinced that humanity can do anything in the present day to reduce risk from AGI. I was looking forward to seeing others’ responses, but so far this topic hasn’t gotten much attention.
I’m fairly new to EA, but I imagine this is not the first time such a topic has come up. Maybe that’s the reason nobody has responded yet? If any veterans of this forum are aware of previous iterations of similar questions and their responses, please share!