The Credibility of Apocalyptic Claims: A Critique of Techno-Futurism within Existential Risk

TLDR: Predictions about apocalyptic AI parallel both historical and contemporary Christian apocalyptic claims (which I take to be untrustworthy). Apocalyptic climate change is different: it is not analogous to such religious apocalyptic claims. Therefore we should treat the apocalyptic claims of climate scientists as more credible than those of AI researchers, and as a result, EA should place climate change as a higher priority than AI alignment. This is not to say that AI isn’t a risk, nor that alignment shouldn’t be a priority.

Acknowledgments: My thanks to the 80,000 Hours podcast for sending me a copy of Toby Ord’s The Precipice, and to my religious studies professor for giving me feedback on this paper.

Epistemic Transparency: I am an undergraduate student (going into my final year) studying philosophy and religious studies. I have done an independent study and have just completed a summer research project on Existential Risk. This paper was originally written for one of my religious studies classes and was edited for submission to this contest.

I am extremely confident (95%) that predictions of apocalyptic AI parallel religious narratives, and that this should, at least to some degree, negatively affect the credibility of such claims. I am uncertain as to how much it should affect their credibility. I am personally extremely distrustful of anything that looks like a religious narrative; however, this is due to my own philosophical beliefs, and those with different views on the nature of religion are likely to have different opinions.

Introduction:

Everyone who has ever predicted that the world would end in some year before the current one has been wrong. No one can ever look back on a history in which humanity has gone extinct; such an event would make the existence of such a person impossible. As a result, apocalyptic claims, i.e., claims about the nature, likelihood, and timeframe of the end of the world, have a unique epistemic status. Such claims are unverifiable, but not in the way that, for example, moral claims are. Rather, it is because we are human beings that information about the nature, likelihood, and timeframe of human extinction is unverifiable: such an event would prevent us from reflecting upon it. This is a massive problem for Existential Risk Studies, and indeed for anyone who wishes to reduce the risk of human extinction, as an accurate risk assessment is necessary if organizing bodies are to effectively allocate resources to addressing threats to humanity. This is one of the many theses defended in Toby Ord’s book The Precipice, in which he gives his subjective probabilities for the chance of any given risk causing an existential catastrophe in the next 100 years. However, although I agree that a well-grounded risk assessment is necessary, I worry about the implicit assumptions that might bias such an assessment. Specifically, in the case of apocalyptic AI, it seems that implicit religious narratives might warp our understanding. In my view, this results in Ord overemphasizing its danger relative to other risks, such as climate change. In another sphere, many Christian evangelicals predict that the world will soon come to an end and, as a result, are unconcerned with climate change. In this paper, I will compare the apocalyptic claims made by three groups: evangelical Christians, climate scientists, and AI researchers.

This paper will begin by discussing both the debate in existential risk literature regarding techno-utopianism in the field and the debate in religious studies regarding whether transhumanism is or is not a religious movement. Section 1 will focus on showing 1) that the apocalyptic claims made by evangelical Christians are untrustworthy, while the apocalyptic claims made by climate scientists are trustworthy, and 2) that the apocalyptic claims made by AI researchers are more analogous to claims made by evangelicals than they are to the claims made by climate scientists. Section 2 will argue that the religious context in which claims about apocalyptic AI exist explains why predictions regarding the future of artificial intelligence have been consistently incorrect. Together these sections attempt to show that Ord’s apocalyptic claims about AI are more analogous to the apocalyptic claims made by some evangelicals than to the apocalyptic claims made by climate scientists. Finally, section 3 will provide a process-oriented account of apocalyptic AI and discuss how the broad adoption of artificial intelligence might make climate change more difficult to address. This is done in order to show that 1) discussion of apocalyptic AI can be disentangled from religious narratives and 2) that despite these narratives AI does pose a very real risk to humanity.

Literature Review:

Existential Risk (also referred to as Global Catastrophic Risk) is the focus of a collection of nonprofits, think tanks, and research initiatives that aim to gain an accurate understanding of risks to humanity in order to prevent them from manifesting. These organizations are a natural extension of the Effective Altruism movement, and the two groups share many of the same assumptions and thinkers. Both Ord and Bostrom are central figures within both Effective Altruism and Existential Risk.

This paper attempts to bridge the gap between critiques of Existential Risk Studies and the discourse around treating transhumanism as a religious movement. It is both a critique and a defense of Bostrom and Ord. While many critiques of the techno-utopian elements of the field target their entire conceptual framework, i.e., the combination of utilitarianism, long-termism, and transhumanism, I aim to offer a critique that shows these issues are rooted in transhumanism rather than in the broader methodological assumptions.

There is significant disagreement within the literature as to how existential risk assessment should be done and how much weight its risk assessments should be given. On one side of the argument, techno-utopians such as Nick Bostrom and Toby Ord are focused on the threat of future technologies. While their techno-utopian approach is currently dominant within Existential Risk Studies, this could change, as the techno-utopian elements of the field have recently come under scrutiny. Some critics, such as Carla Zoe Cremer and Luke Kemp in their paper “Democratising Risk: In Search of a Methodology to Study Existential Risk,” focus on how the dominance of utilitarianism, a non-representative moral view, might unduly bias the analysis of Existential Risk Studies and, as a result, make its risk assessments untrustworthy. Additionally, there is a concern about the influence that the disproportionate prominence of those with hegemonic identities might have on the field. In their paper “Worlding beyond ‘the’ ‘End’ of ‘the World,’” Audra Mitchell and Aadita Chaudhury discuss how such biases might undermine the otherwise benevolent intentions of the field. Many of these critiques seek to attack the entire framework employed by the techno-utopian elements of the field, and while they offer substantive critiques, it is unclear how the field could properly operate without its consequentialist assumptions.

A separate but tangentially related discourse surrounds whether or not transhumanism should be considered a religious movement. Authors like Robert M. Geraci argue that transhumanism should be understood as a religious movement due to its historical connections to apocalyptic Christianity. On the other side of this debate, figures like Nick Bostrom argue that the scientific or philosophical expertise of many transhumanists means that it, by definition, isn’t a religious movement.

Section 1: How to Assess Apocalyptic Claims

In The Precipice, Ord gives an outline of the risk landscape and rough estimates of what he takes to be the likelihood of each risk leading to an existential catastrophe in the next 100 years. This begins in chapter three, where he discusses the natural risks that could lead to human extinction. Natural risks have existed for a long time, so one can study the frequency at which they have historically occurred. For example, the fossil record can be used to create a rough estimate of the frequency at which supervolcanic eruptions capable of causing a mass extinction event have occurred. I do not take this part of the text to be particularly contentious because 1) there is significantly more scientific consensus regarding how likely these risks are, 2) these risks are not particularly likely when compared to anthropogenic risks (i.e., risks of human origin), and 3) these risks are of little relevance to what actions we should take because, at the moment, we cannot do much to meaningfully affect them. Ord’s assessment becomes much more uncertain when he begins his discussion of anthropogenic risks. These risks differ from natural risks because 1) they are relatively new, and as such, there is little relevant data that can be used to assess them, and 2) unlike assessments of natural risks, assessments of anthropogenic risks are of massive political importance, as our political and economic systems are both the initial cause of these risks and a necessary part of any path toward lowering them. As we get into Ord’s discussion of anthropogenic risk, his claims become increasingly contentious.
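To see how such frequency-based estimates work, here is a minimal sketch of the underlying arithmetic (my own illustration; the recurrence time is a placeholder, not a figure from Ord): if the geological record suggests an event recurs on average once every T years, and occurrences are modeled as a Poisson process, then

```latex
% Poisson model: events occur at rate \lambda = 1/T per year.
P(\text{at least one event in } t \text{ years}) \;=\; 1 - e^{-t/T}
% Illustrative placeholder, T = 100{,}000 years and t = 100 years:
1 - e^{-100/100000} \;\approx\; 0.1\%
```

The point is simply that a long historical record lets us estimate the rate directly, which is exactly what anthropogenic risks lack.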

Ord focuses his discussion on the risks that are likely to lead to human extinction in the next 100 years. This approach is not without its merits. As Ord says, “risks that strike later can be dealt with later, while those striking sooner cannot” (Ord 306). This matches a common-sense approach to risk management; it lines up, for example, with the type of decision one might expect people to make about their health. (The comedian Gabriel Iglesias, discussing switching to a diet higher in cholesterol after developing type-2 diabetes, said that “Cholesterol’s gonna take 10 years to kill me, while diabetes is gonna kill me in 2. Right now I’m winning by 8.”) However, to be effectively applied, this principle requires an accurate understanding of the particular risks as well as of how soon they might strike. This is particularly difficult for risks with delayed effects, such as climate change, as it is difficult to know exactly what constitutes a point of no return (at which point humanity’s extinction is guaranteed), and how close such a point might be (Ord 280). Ord sets his time horizon to the next 100 years largely because of the threat of unaligned artificial intelligence, which he gives a 10% chance of causing an existential catastrophe in the next century. In the scenarios Ord imagines, the AI’s superior intellect results in humanity losing control of our destiny and becoming just like any other animal (239). Accordingly, Ord views AI as significantly more dangerous than climate change, which he believes has about a 0.1% chance of causing an existential catastrophe in the next century (279). These theoretical risks ultimately play the role of downplaying the climate crisis: despite Ord saying that all risks are deserving of attention, artificial intelligence, being on his estimates by far the greatest threat to humanity, takes center stage.

Ord is not alone in downplaying the climate crisis because of belief in a sooner, more likely end-of-the-world scenario. A 2014 poll showed that only 28% of white evangelicals believed that human activity was responsible for climate change, as opposed to 50% of the general US population (Pew). While not all evangelicals are apocalyptic, for those who are, apocalypticism justifies a lack of concern regarding the climate crisis. As Hillary Scanlon puts it, believing “that the end times are near” (Scanlon 11), “evangelicals have shorter sociotropic time horizons, which makes them less likely to demonstrate concern for the environment and for policies that would address climate change” (Scanlon 12). In both Ord’s case and that of white American evangelicals, the position is internally consistent: there is little reason to care about historical processes that will become irrelevant in the next 100 years.

Predictions that the world will soon come to an end, taken as a general category, necessarily have a poor track record. As a result, unless a particular group can 1) draw a clear distinction between their prediction and other predictions of the end times and 2) show why this distinction gives them the epistemic credibility necessary to make an apocalyptic claim, we should treat them as epistemically untrustworthy. The apocalyptic claims made by some Christian evangelicals are not meaningfully distinct from the long list of apocalyptic claims that have yielded false predictions. Such claims are generally made on biblical evidence, which lacks precedent for providing accurate predictions (Scanlon 13). On the other hand, claims made by climate scientists regarding the current climate crisis are both credible and apocalyptic. When the UN Intergovernmental Panel on Climate Change warned that “if human-caused global warming isn’t limited to just another couple tenths of a degree, an Earth now struck regularly by deadly heat, fires, floods and drought in future decades will degrade in 127 ways,” with some changes being “potentially irreversible” (Borenstein), this claim should be treated as trustworthy. Climate scientists have made falsifiable predictions that have been consistently verified. An assessment of peer-reviewed climate models published between 1970 and 2000 notes that “all of the 17 models correctly projected global warming (as opposed to either no warming or even cooling)” and that most of the model projections (10 out of 17) produced global average surface warming projections that were quantitatively consistent with the observed warming rate (Drake). Additionally, climate scientists do not predict a fundamental break from currently observable historical processes. While they do predict a coming set of cataclysms that will reshape the entire world, such cataclysms are the direct result of the political structures of the current world rather than being fundamentally separate from them, and our actions in the current world are relevant to preventing, mitigating, or preparing for these cataclysms. I will now attempt to show that belief in apocalyptic AI is more analogous to the apocalyptic claims of evangelicals than to the apocalyptic claims of climate scientists, and that, as a result, Ord is unjustified in ranking the risk from unaligned artificial intelligence as high as he does.

While it is concerning that a focus on immediate threats undermines our ability to fight long-term risks such as climate change, this doesn’t necessarily make such a time horizon unjustifiable. If the development of an unaligned artificial intelligence in the next 100 years is highly probable, and its development would result in either the extinction of humanity or a new utopian society in which climate change would be immediately “fixed,” then there would be little reason to concern ourselves with fighting climate change. Ord’s main argument for placing AI as the greatest existential threat to humanity is based on the opinions of AI researchers, resting on polling showing that when “asked when an AI system would be ‘able to accomplish every task better and more cheaply than human workers,’ on average they estimated a 50 percent chance of this happening by 2061” (Ord 141). Ord then argues that if a general artificial intelligence significantly more intelligent than humans is developed, it will take over the world. I will focus on disputing the legitimacy of Ord’s appeal to expertise by showing that the predictions of AI researchers have been particularly inaccurate.

The development of AI technology has a long history of failing to live up to its hype. Since its inception, AI research has fallen into a pattern of making overly ambitious promises that it fails to keep. Professor Lighthill’s report on AI research in the U.K., published in 1973, notes that “Most workers in AI research and in related fields confess to a pronounced feeling of disappointment in what has been achieved in the past twenty-five years. Workers entered the field around 1950, and even around 1960, with high hopes that are very far from having been realised in 1972” (Lighthill). While this report is specific to AI research in the U.K., it also marks a broader trend of state and economic bodies becoming disillusioned with AI research during this period. Artificial intelligence would eventually recover from this ‘AI winter’; however, after recovering, the field continued to make overly ambitious claims. For example, as HP Newquist notes in The Brain Makers: Genius, Ego, and Greed in the Quest for Machines That Think, “On June 1, 1992, The Fifth Generation Project”–a much-hyped AI project, launched in 1981, into which the Japanese Ministry of International Trade and Industry poured $850 million–“ended not with a successful roar, but with a whimper.” The bursting of the Fifth Generation Project bubble led to another AI winter in the early 1990s.

The predictions of apocalyptic AI researchers are neither based on any concrete data set nor backed by a history of accuracy. As a result, they are less credible than predictions of apocalyptic climate change, which are both grounded in concrete data sets and supported by a long record of accurate forecasts. This is not to say that the failures of AI research to live up to its hype imply that AI will not dramatically change the world we live in. However, in the face of climate catastrophe, considering unaligned artificial intelligence to be the largest risk to humanity is unjustified.

Section 2: Religious Narratives in Apocalyptic AI

Why do AI researchers keep making overly ambitious promises about the future of artificial intelligence? In general, one might expect that researchers in any given field will, on average, have an inflated sense of their own importance; yet the tendency of AI researchers to make specifically apocalyptic predictions is distinctive, and this general tendency does not fully explain the phenomenon. To properly understand the claims of AI apocalypticism, this tradition must be placed within the broader historical context of the apocalyptic religious movements that have influenced it. As Robert M. Geraci notes in his paper “Apocalyptic AI: Religion and the Promise of Artificial Intelligence”,

Early Jewish and Christian apocalyptic traditions share several basic characteristics which also appear in the twentieth-century popular science books on robotics and AI. Ancient Jews and Christians, caught in alienating circumstances, eagerly anticipated God’s intervention in history. After the end of history, God will create a new world and resurrect humanity in glorified new bodies to eternally enjoy that world. Apocalyptic AI advocates cannot rely upon divine forces to guarantee the coming Kingdom, so they turn to evolution as a transcendent guarantee for the new world. Even without God, evolution guarantees the coming of the Kingdom. Apocalyptic AI looks forward to a mechanical future in which human beings will upload their minds into machines and enjoy a virtual reality paradise in perfect virtual bodies. (140)

Advocates of apocalyptic AI are likely to resist this characterization by claiming that, unlike other apocalyptic claims, theirs are based on science. Yet this defense is ultimately unconvincing because it treats religion and science as necessarily mutually exclusive. Geraci notes that “[w]e commonly speak of science and religion as though they are two separate endeavors but, while they do have important distinctions that make such everyday usage possible, they are neither clearly nor permanently demarcated; the line separating the two changes from era to era and from individual to individual” (159). It is important to note that, while discussions of the singularity are currently conceived of as a topic of “scientific” interest, such theories are necessarily unfalsifiable and therefore cannot be developed through the scientific method. In his essay “Why I Want to be a Posthuman When I Grow Up,” Nick Bostrom, one of the founders of Existential Risk Studies, considers how desirable it would be to be a posthuman. Much of the essay compares human lives to what posthuman lives would be like. He remarks, “It seems to me fairly obvious why one might have reason to desire to become a posthuman in the sense of having a greatly enhanced capacity to stay alive and stay healthy” (6). At first, this seems sensible, as it matches up with what I imagine a posthuman future might be like. Given, however, that Bostrom himself begins the paper by “setting aside issues of feasibility, costs, risks, side-effects, and social consequences” (Bostrom 3), it is difficult to see how his assessment can be meaningfully distinguished from an enjoyable example of speculative science fiction. Posthumans do not exist, and so their average lifespan is necessarily unknown. Bostrom’s conception of posthumanity parallels how early apocalyptics believed that God would give them immortal bodies after the end of the world (Geraci 145). Perhaps, one day, posthumanity will come to exist, and at that point the average lifespan of posthumans could be compared to the average lifespan of humans; but, until such things come to pass, promises of a posthuman future will remain reminiscent of apocalyptic Christians explaining the benefits of angelic bodies. Even if such theories are in some cultural sense scientific, they are directly analogous to claims that are unanimously agreed to be religious. (This is similar to how belief in alien encounters is often not classified as a religious belief despite aliens being phenomenologically similar to angelic encounters.) The historic ties that apocalyptic AI has to apocalyptic religious movements undermine the view that its apocalyptic claims are meaningfully distinct from the consistently false predictions of apocalyptic religious movements.

Section 3: An Artificial Polytheism

One of the main differences between the apocalyptic claims made by climate scientists and those made by both evangelicals and AI researchers is how these groups understand the historical process. In the case of climate science, there is respect for causal relationships. It is not as if climate change suddenly “happens” once CO2 reaches some threshold of X parts per million. Rather, the greenhouse effect is a causal relationship between the amount of greenhouse gasses in the atmosphere and the average global temperature, and the apocalyptic claim of climate scientists rests only on the assumption that historically observable causal relationships won’t vanish. On the other hand, evangelicals and advocates of apocalyptic AI claim that there will be a sudden, unprecedented breakdown of the historical process in the next 100 years. This is, however, not the only possible interpretation of AI that maintains its status as a threat to humanity. In this section, I will attempt to historicize AI within the broader context of the development of capitalist systems and argue that the current risks posed by AI are the same risks posed by any system that seeks the maximization of a given value at any cost.
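To make the contrast concrete, the causal relationship in question can actually be written down. Below is the standard simplified expression for CO2 radiative forcing (after Myhre et al. 1998; I add this reference for illustration, it is not among the paper’s sources):

```latex
% Simplified expression for CO2 radiative forcing (after Myhre et al. 1998):
\Delta F \;=\; 5.35 \,\ln\!\left(\frac{C}{C_0}\right)\ \mathrm{W\,m^{-2}},
% where C is the CO2 concentration and C_0 a pre-industrial reference.
% Equilibrium warming then scales with a climate sensitivity parameter \lambda:
\Delta T \;\approx\; \lambda\,\Delta F
```

There is no parts-per-million value at which the warming jumps discontinuously; the apocalyptic conclusion follows from extrapolating a smooth, observed relationship.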

Treating AI as either the ultimate risk or the only hope for salvation is highly suspect. In both cases, unfounded religious narratives sneak their way into risk assessment. However, while I think the risk of the singularity is massively overblown, this is not to say that AI doesn’t keep me up at night. Rather, a focus on the singularity blinds us to the more mundane risks posed by artificial intelligence. To address this blind spot, I will use the final section of this paper to discuss how Ord frames AI risk and to offer a different framing.

Ord begins his discussion of AI by asking us to consider

“What would happen if sometime this century researchers created an artificial general intelligence surpassing human abilities in almost every domain? In this act of creation, we would cede our status as the most intelligent entities on Earth. So without a very good plan to keep control, we should also expect to cede our status as the most powerful species, and the one that controls its own destiny.” (240)

In this passage, humanity is treated as a single actor that currently possesses full control over its destiny, and the focus falls on a moment when artificial intelligence would, all at once, take that control away. This framing obfuscates more than it illuminates for two reasons: 1) “humanity” has never existed as a unified body and has never spoken in a unified voice, and 2) power is disproportionately distributed, such that some have more control over humanity’s collective destiny than others. A weaker version of Ord’s claim might simply be that a group of humans currently makes all decisions relevant to humanity’s destiny. Yet this too is an overstatement, as it ignores the rapidly increasing prominence of machines making decisions on behalf of humans. In 2018, 80% of the daily decisions made in the US stock market were made by machines (Amaro); and this reliance on machine intelligence is not limited to the stock market: the research firm IDC predicts that by 2024, 80% of Global 2000 companies will hire, fire, and train workers with automated systems (IDC). Ord imagines a scenario in which a single super-intelligent machine takes control in an instant, and so misses that, over the past few decades, those with the power to do so have slowly handed control over our economy to a plethora of “intelligent” machines. As a result, rather than asking how “we” might maintain control of our collective destiny, readers might be better off asking how they, or groups that they are a part of (necessarily including, but not limited to, humanity), could take back control over their destiny.

However, just because machine learning is exerting an increasing degree of control doesn’t necessarily make it an existential risk to humanity. This will necessarily depend on the real effects of these algorithms, and whether or not the values of these algorithms are aligned with so-called “human values”. In his essay “Ethical Issues in Advanced Artificial Intelligence,” Nick Bostrom discusses how artificial intelligence seeking an arbitrary goal might cause a global catastrophe.

“This could result, to return to the earlier example, in a super-intelligence whose top goal is the manufacturing of paperclips, with the consequence that it starts transforming first all of earth and then increasing portions of space into paperclip manufacturing facilities. More subtly, it could result in a superintelligence realizing a state of affairs that we might now judge as desirable but which in fact turns out to be a false utopia, in which things essential to human flourishing have been irreversibly lost. We need to be careful about what we wish for from a superintelligence because we might get it.” (Bostrom)

Our immediate circumstance is, of course, non-analogous to Bostrom’s example: our economy is not at the moment controlled by a single superintelligence; rather, it is increasingly controlled by a plethora of artificial intelligences of varying degrees of sophistication. Yet the concern remains: are their values conducive to human flourishing? The answer is, of course, no. Corporations switch to AI management in order to maximize their profit, and the unbounded maximization of this single metric conflicts with human flourishing, and perhaps even with human survival. So while the world might not be traded for paperclips, the algorithms will happily trade the world for whatever makes money. Because such programs are built merely to maximize profit, they cannot ensure that a world remains in which such profit has value. This type of maximization is not unique to AI-run systems, as most corporations have an explicit obligation to maximize shareholder profit over all else. AI management is not a break from the previous historical process of capitalist profit extraction but rather a further streamlining of that process. As a result, while prophecies of a single apocalyptic AI seem to lack credibility, the prevalence of automated systems in the management of the global economy may make it more difficult to properly address the climate crisis.
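To make this failure mode concrete, here is a deliberately toy sketch, entirely my own illustration (the action names and numbers are hypothetical and come from neither Bostrom nor Ord), of an optimizer that maximizes a single profit metric while an unmeasured side effect erodes the conditions that make profit valuable:

```python
# Toy illustration (hypothetical numbers): a greedy optimizer that picks whichever
# action maximizes profit, while a side effect it never measures degrades the world.

actions = {
    # action: (profit, environmental_damage)
    "expand_extraction": (100, 30),
    "invest_in_mitigation": (20, -10),
    "do_nothing": (0, 0),
}

environment_health = 100  # the unmodeled precondition for profit having any value

for quarter in range(8):
    # The objective function sees only the first element: profit.
    choice = max(actions, key=lambda a: actions[a][0])
    profit, damage = actions[choice]
    environment_health -= damage
    print(f"Q{quarter + 1}: chose {choice!r}, profit={profit}, "
          f"environment_health={environment_health}")

# The optimizer never chooses mitigation, because environmental health
# appears nowhere in the quantity it maximizes.
```

The structural point is that whatever the objective omits, the optimizer treats as free; this holds whether the maximizer is one superintelligence or thousands of mundane management algorithms.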

Conclusion:

In this essay, I discussed three different groups that make apocalyptic claims: Christian evangelicals, climate scientists, and AI researchers. I argued that the apocalyptic claims made by a subset of evangelicals are untrustworthy because 1) these claims are not based on a methodology that is generally truth-seeking, and 2) apocalyptic claims made by evangelicals have historically failed to come true. On the other hand, I argued that the apocalyptic claims made by climate scientists are trustworthy because 1) these claims are based on a methodology that is generally truth-seeking, and 2) predictions made by climate scientists have historically come true. I then argued that the apocalyptic claims made by AI researchers are more analogous to the claims of evangelicals because 1) these claims are not based on a methodology that is generally truth-seeking, 2) apocalyptic claims made by AI researchers have historically failed to come true, and 3) the narratives that underlie apocalyptic AI share a history with the apocalyptic claims made by evangelical Christians. As a result, these claims are untrustworthy, and Ord is unjustified in ranking the risk from unaligned artificial intelligence as high as he does. However, it is important not to write off the dangers of artificial intelligence: while it seems improbable that AI will make previous historical processes irrelevant, the development of this technology is part of a historical process that is directly at odds with human flourishing.

Bibliography:

Amaro, Silvia. “Sell-Offs Could Be down to Machines That Control 80% of the US Stock Market, Fund Manager Says.” CNBC, 5 Dec. 2018, https://www.cnbc.com/2018/12/05/sell-offs-could-be-down-to-machines-that-control-80percent-of-us-stocks-fund-manager-says.html.

Borenstein, Seth. “UN Climate Report: ‘Atlas of Human Suffering’ Worse, Bigger.” AP News, Associated Press, 28 Feb. 2022, https://apnews.com/article/climate-science-europe-united-nations-weather-8d5e277660f7125ffdab7a833d9856a3.

Bostrom, Nick. “Ethical Issues in Advanced Artificial Intelligence.” 2003, https://nickbostrom.com/ethics/ai.html.

Bostrom, Nick. “Why I Want to Be a Posthuman When I Grow Up.” Nickbostrom.com, 2006, https://nickbostrom.com/posthuman.pdf.

Cremer, Carla Zoe, and Luke Kemp. “Democratising Risk: In Search of a Methodology to Study Existential Risk.” Future of Humanity Institute & Centre for the Study of Existential Risk, 2021, https://arxiv.org/pdf/2201.11214.pdf.

Drake, Henri. “Historical Climate Models Accurately Projected Global Warming.” MIT Department of Earth, Atmospheric and Planetary Sciences, 10 Dec. 2019, https://eapsweb.mit.edu/news/2019/historical-climate-models-accurately-projected-global-warming.

Geraci, Robert M. “Apocalyptic AI: Religion and the Promise of Artificial Intelligence.” Journal of the American Academy of Religion, vol. 76, no. 1, 2008, pp. 138–66, http://www.jstor.org/stable/40006028. Accessed 14 Apr. 2022.

“IDC FutureScape: Top 10 Predictions for the Future of Work.” IDC, 18 Nov. 2018, https://www.idc.com/getdoc.jsp?containerId=prUS48395221.

Lighthill, James. “Artificial Intelligence: A General Survey.” Lighthill Report, 1973, http://www.chilton-computing.org.uk/inf/literature/reports/lighthill_report/p001.htm.

Mitchell, Audra, and Aadita Chaudhury. “Worlding beyond ‘the’ ‘End’ of ‘the World’: White Apocalyptic Visions and BIPOC Futurisms.” International Relations, 2020, https://journals.sagepub.com/doi/pdf/10.1177/0047117820948936.

Newquist, Harvey P. The Brain Makers: Genius, Ego, and Greed in the Quest for Machines That Think. 1994.

Ord, Toby. The Precipice: Existential Risk and the Future of Humanity. Bloomsbury Publishing, 2021.

Pew Research Center. “Religion and Views on Climate and Energy Issues.” Pew Research Center Science & Society, 22 Oct. 2015, https://www.pewresearch.org/science/2015/10/22/religion-and-views-on-climate-and-energy-issues/.

Scanlon, Hillary. “Evangelicals and Climate Change.” Religion in Environmental and Climate Change: Suffering, Values, Lifestyles, Sept. 2020, https://doi.org/10.5040/9781472549266.ch-007.

Soper, Spencer. “Fired by Bot: Amazon Turns to Machine Managers, and Workers Are Losing Out.” Bloomberg, 28 June 2021, https://www.bloomberg.com/news/features/2021-06-28/fired-by-bot-amazon-turns-to-machine-managers-and-workers-are-losing-out.