Thanks Pablo and Ben. I already have tags below each argument for what I think it is arguing against. I do not plan on doing two separate posts, as some arguments count both against longtermism and against the longtermist case for working to reduce existential risk. Each argument and its response are presented comprehensively, so the amount of space dedicated to each is based mostly on the amount of existing literature. And as noted in my comment above, I am excerpting responses to the arguments presented.
As an update, I am working on a full post that will excerpt 20 arguments against working to improve the long-term future and/or working to reduce existential risk, as well as responses to those arguments. The post itself is currently at 26,000 words, and there are six planned comments (one of which will add 10 additional arguments) that together are currently at 11,000 words. There have been various delays in my writing process, but I now think that is for the best, because several new and important arguments have been developed in the past year. My goal is to begin circulating the draft for feedback within three months.
For those who are curious, in April 2017 GiveWell had 18 full-time staff, while 80,000 Hours currently has a CEO, a president, 11 core team members, and two freelancers, and works with four CEA staff.
Hi Ben,
Thank you to you and the 80,000 Hours team for the excellent content. One issue I’ve noticed is that a relatively large number of pages (including several important ones) state that they are out of date. This makes me wonder why 80,000 Hours does not have substantially more employees. I’m aware that there are issues with hiring too quickly, but GiveWell was able to expand from 18 full-time staff (8 in research roles) in April 2017 to 37 staff today (13 in research roles and 5 in content roles). Is the reason that 80,000 Hours cannot grow as rapidly that its research is more subjective in nature, making good judgment more important, and that good judgment is quite difficult to assess?
It seems to me that there are two separate frameworks:
1) the informal Importance, Neglectedness, Tractability framework best suited to ruling out causes (i.e. this cause isn’t among the highest priority because it’s not [insert one or more of the three]); and
2) the formal 80,000 Hours Scale, Crowdedness, Solvability framework best used for quantitative comparison (by scoring causes on each of the three factors and then comparing the total).
Treating the second as merely a formalization of the first can be unhelpful when thinking through them. For example, even though the 80,000 Hours framework does not account for diminishing marginal returns, it justifies including the crowdedness factor on the basis of diminishing marginal returns.
Notably, EA Concepts has separate pages for the informal INT framework and the 80,000 Hours framework.
In his blog post “Why Might the Future Be Good,” Paul Christiano writes:
What natural selection selects for is patience. In a thousand years, given efficient natural selection, the most influential people will be those who today cared what happens in a thousand years. Preferences about what happens to me (at least for a narrow conception of personal identity) will eventually die off, dominated by preferences about what society looks like on the longest timescales.
(Please read all of “How Much Altruism Do We Expect?” for the full context.)
Thanks Lucy! Readers should note that Elie’s answer likely responds in part to Lucy’s question.
What are your thoughts on the argument that the track record of robustly good actions is much better than that of actions contingent on high-uncertainty arguments? (See here and here at 34:38 for pushback.)
Should non-suffering focused altruists cooperate with suffering-focused altruists by giving more weight to suffering than they otherwise would given their worldview (or given their worldview adjusted for moral uncertainty)?
Has your thinking about donor coordination evolved since 2016, and if so, how? (My main motivation for asking is that this issue is the focus of a chapter in a recent book on philosophical issues in effective altruism though the chapter appears to be premised on this blog post, which has an update clarifying that it has not represented GiveWell’s approach since 2016.)
How confident are you that the solution to infinite ethics is not discounting? How confident are you that the solution to the possibility of an infinitely positive/infinitely negative world automatically taking priority is not capping the amount of value we care about at a level low enough to undermine longtermism? If you’re pretty confident about both of these, do you think additional research on infinities is relatively low priority?
What do you think is the strongest argument against working to improve the long-term future? What do you think is the strongest argument against working to reduce existential risk?
(This comment assumes GiveWell would broadly agree with a characterization of its worldview as consequentialist.) Do you agree with the view that, given moral uncertainty, consequentialists should give some weight to non-consequentialist values? If so, do you think GiveWell should give explicit weight to the intrinsic value of gender equality apart from its instrumental value? And if so, do you think that, in considering the moral views of the communities that GiveWell operates in, it would make sense to give substantially more weight to the views of women than of men on the value of gender equality?
There are many ways that technological development and economic growth could potentially affect the long-term future, including:
- Hastening the development of technologies that create existential risk (see here)
- Hastening the development of technologies that mitigate existential risk (see here)
- Broadly empowering humanity (see here)
- Reducing the chance of international armed conflict (see here)
- Improving international cooperation (see the climate change mitigation debate)
- Shifting the growth curve forward (see here)
- Hastening the colonization of the accessible universe (see here and here)
What do you think is the overall sign of economic growth? Is it different for developing and developed countries?
Note: The fifth bullet point was added after Toby recorded his answers.
Do you think that “a panel of superforecasters, after being exposed to all the arguments [about existential risk], would be closer to [MacAskill’s] view [about the level of risk this century] than to the median FHI view”? If so, should we defer to such a panel out of epistemic modesty?
How much uncertainty is there in your case for existential risk? What would you put as the probability that, in 2100, the expected value of a substantial reduction in existential risk over the course of this century will be viewed by EA-minded people as highly positive? Do you think we can predict what direction future crucial considerations will point based on what direction past crucial considerations have pointed?
What do you think of applying Open Phil’s outlier opportunities principle to an individual EA? Do you think that, even in the absence of instrumental considerations, an early career EA who thinks longtermism is probably correct but possibly wrong should choose a substantial chance of making a major contribution to increasing access to pain relief in the developing world over a small chance of making a major contribution to reducing global catastrophic biological risks (GCBRs)?
Is the cause area of reducing great power conflict still entirely in the research stage or is there anything that people can concretely do? (Brian Tse’s EA Global talk seemed to mostly call for more research.) What do you think of greater transparency about military capabilities (click here and go to 24:13 for context) or promoting a more positive view of China (same link at 25:38 for context)? Do you think EAs should refrain from criticizing China on human rights issues (click here and search the transcript for “I noticed that over the last few weeks” for context)?
In an 80,000 Hours interview, Tyler Cowen states:
[44:06]
I don’t think we’ll ever leave the galaxy or maybe not even the solar system.
. . .
[44:27]
I see the recurrence of war in human history so frequently, and I’m not completely convinced by Steven Pinker [author of the book The Better Angels of Our Nature, which argues that human violence is declining]. I agree with Steven Pinker, that the chance of a very violent war indeed has gone down and is going down, maybe every year, but the tail risk is still there. And if you let the clock tick out for a long enough period of time, at some point it will happen.
Powerful abilities to manipulate energy also mean powerful weapons, eventually powerful weapons in decentralized hands. I don’t think we know how stable that process is, but again, let the clock tick out, and you should be very worried.
How likely do you think it is that humans (or post-humans) will get to a point where existential risk becomes extremely low? Have you looked into the question of whether interstellar colonization will be possible in the future, and if so, do you broadly agree with Nick Beckstead’s conclusion in this piece? Do you think Cowen’s argument should push EAs towards forms of existential risk reduction (referenced by you in your recent 80,000 Hours interview) that are “not just dealing with today’s threats, [but] actually fundamentally enhancing our ability to understand and manage this risk”? Does positively shaping the development of artificial intelligence fall into that category?
Edit (likely after Toby recorded his answer): This comment from Pablo Stafforini also mentions the idea of “reduc[ing] the risk of extinction for all future generations.”
Thanks Ben. There is actually at least one argument in the draft for each alternative you named. To be honest, I don’t think you can get a good sense of my 26,000-word draft from my 570-word comment from two years ago. I’ll send you my draft when I’m done, but until then, I don’t think it’s productive for us to go back and forth like this.