As an update, I am working on a full post that will excerpt 20 arguments against working to improve the long-term future and/or working to reduce existential risk, as well as responses to those arguments. The post itself is currently at 26,000 words, and there are six planned comments (one of which will add 10 additional arguments) that together are currently at 11,000 words. There have been various delays in my writing process, but I now think that's for the best, because several new and important arguments have been developed in the past year. My goal is to begin circulating the draft for feedback within three months.
RandomEA
Hi Arden and the 80,000 Hours team,
Thank you for the excellent content that you produce for the EA community, especially the podcasts.
There is one issue that I want to raise. I gave serious thought to raising this via your survey, but I think it is better raised publicly.
In your article “The case for reducing extinction risk” (which is linked to in your “Key ideas” article), you write:
Here are some very rough and simplified figures to show how this could be possible. It seems plausible to us that $100 billion spent on reducing extinction risk could reduce it by over 1% over the next century. A one percentage point reduction in the risk would be expected to save about 100 million lives among the present generation (1% of about 10 billion people alive today). This would mean the investment would save lives for only $1000 per person.
At the top of the page, it says the article was published in October 2017 and last updated in October 2017. There are no footnotes indicating any changes were made to that section.
However, an archived copy of the article from June 2018 shows that, at the time, the article read:
We roughly estimate that if $10 billion were spent intelligently on reducing these risks, it could reduce the chance of extinction by 1 percentage point over the century. In other words, if the risk is 4% now, it could be reduced to 3%.
A one percentage point reduction in the risk would be expected to save about 100 million lives (1% of 10 billion). This would mean it saves lives for only $100 each.
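The arithmetic in the two versions of the passage can be checked with a quick sketch (all figures are taken directly from the quotes; this is just the quoted calculation made explicit):

```python
# Cost per expected life saved, given total spending, an extinction
# risk reduction in percentage points, and the affected population.
def cost_per_life(spending, risk_reduction_pp, population):
    lives_saved = (risk_reduction_pp / 100) * population
    return spending / lives_saved

# June 2018 version: $10 billion for a 1 percentage point reduction
old = cost_per_life(10e9, 1, 10e9)    # $100 per life

# Current version: $100 billion for the same reduction
new = cost_per_life(100e9, 1, 10e9)   # $1,000 per life

print(old, new, new / old)  # the tenfold shift discussed below
```

This makes the order-of-magnitude change between the two versions easy to see: the spending figure moved by a factor of ten while the risk reduction and population stayed fixed.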
I think it would be helpful to members of the community to indicate when and how an article has been substantively updated. There are many ways this can be done, including:
an article explaining how and why your views have changed (e.g. here, here/here, and here);
linking to an archived version of the article (as you do here) ideally with a change log; and
a footnote in the section indicating what it previously said and why your views have changed.
I understand that you have a large amount of content and limited staff capacity to review all of your old content. But what I’m talking about here is limited to changes you choose to make.
I’m sure it was just an oversight on the part of whoever made the change. You all have a lot on your plate, and it’s most convenient for an article to just present your current views on the subject.
But when it comes to something as important as the effectiveness of spending to reduce existential risk and something as major as a shift of an order of magnitude, I really think it’d be helpful to note and explain any change in your thinking.
Thank you for reading, and keep up the good work.
It seems to me there’s a fourth key premise:
0. Comparability: It is possible to make meaningful comparisons between very different kinds of contributions to the common good.
In an 80,000 Hours interview, Tyler Cowen states:
[44:06]
I don’t think we’ll ever leave the galaxy or maybe not even the solar system.
. . .
[44:27]
I see the recurrence of war in human history so frequently, and I’m not completely convinced by Steven Pinker [author of the book Better Angels of Our Nature, which argues that human violence is declining]. I agree with Steven Pinker, that the chance of a very violent war indeed has gone down and is going down, maybe every year, but the tail risk is still there. And if you let the clock tick out for a long enough period of time, at some point it will happen.
Powerful abilities to manipulate energy also mean powerful weapons, eventually powerful weapons in decentralized hands. I don’t think we know how stable that process is, but again, let the clock tick out, and you should be very worried.
How likely do you think it is that humans (or post-humans) will get to a point where existential risk becomes extremely low? Have you looked into the question of whether interstellar colonization will be possible in the future, and if so, do you broadly agree with Nick Beckstead’s conclusion in this piece? Do you think Cowen’s argument should push EAs towards forms of existential risk reduction (referenced by you in your recent 80,000 Hours interview) that are “not just dealing with today’s threats, [but] actually fundamentally enhancing our ability to understand and manage this risk”? Does positively shaping the development of artificial intelligence fall into that category?
Edit (likely after Toby recorded his answer): This comment from Pablo Stafforini also mentions the idea of “reduc[ing] the risk of extinction for all future generations.”
Do you think that “a panel of superforecasters, after being exposed to all the arguments [about existential risk], would be closer to [MacAskill’s] view [about the level of risk this century] than to the median FHI view”? If so, should we defer to such a panel out of epistemic modesty?
There are many ways that technological development and economic growth could potentially affect the long-term future, including:
Hastening the development of technologies that create existential risk (see here)
Hastening the development of technologies that mitigate existential risk (see here)
Broadly empowering humanity (see here)
Reducing the chance of international armed conflict (see here)
Improving international cooperation (see the climate change mitigation debate)
Shifting the growth curve forward (see here)
Hastening the colonization of the accessible universe (see here and here)
What do you think is the overall sign of economic growth? Is it different for developing and developed countries?
Note: The fifth bullet point was added after Toby recorded his answers.
How much further does your dollar go overseas?
Should there be an EA crowdfunding platform?
Hi Ben,
Thank you to you and the 80,000 Hours team for the excellent content. One issue that I’ve noticed is that a relatively large number of pages state that they are out of date (including several important ones). This makes me wonder why it is that 80,000 Hours does not have substantially more employees. I’m aware that there are issues with hiring too quickly, but GiveWell was able to expand from 18 full-time staff (8 in research roles) in April 2017 to 37 staff today (13 in research roles and 5 in content roles). Is the reason that 80,000 Hours cannot grow as rapidly that its research is more subjective in nature, making good judgment more important, and that judgment is quite difficult to assess?
With respect to the necessity of a constitutional amendment, I agree with you on presidential elections but respectfully disagree as to congressional elections.
For presidential elections, the proposal with the most traction is the National Popular Vote Interstate Compact, which requires compacting states to give their electoral votes to the presidential ticket with a plurality of votes nationwide but only takes effect after states collectively possessing a majority of all electoral votes join the compact. Proponents argue that it is constitutional (with many believing it can be done without congressional consent), while opponents say that it is unconstitutional and in any case would require congressional consent. See pages 21-30 of this Congressional Research Service report for a summary of the legal issues. Regardless of which side has the better argument, it’s unlikely that an interstate compact would be used to adopt instant runoff voting or approval voting for presidential elections because i) absent a law from Congress, it would be up to non-compacting states whether to switch from plurality voting in their own state (which could mean voters in some states would be limited to choosing one ticket) and ii) it is questionable whether Congress has the power to require non-compacting states to switch (though see pages 16-17 of this article arguing that it does).
As for congressional elections, it’s worth noting that the U.S. Constitution does not require plurality voting and does not even require single member districts. Indeed, ranked choice voting was used in Maine for congressional elections in 2018, and a federal judge rejected the argument that it is unconstitutional due to being contrary to historical practice. And while single member districts have been used uniformly for nearly two centuries, it was not the only method in use at the founding and courts tend to give special weight to founding era practice (see e.g. Evenwel v. Abbott for an example related to elections), which makes me think that FairVote’s single transferable vote proposal is on solid constitutional footing.
Thanks Howie.
Something else I hope you’ll update is the claim in that section that GiveWell estimates that it costs the Against Malaria Foundation $7,500 to save a life.
The archived version of the GiveWell page you cite does not support that claim; it states the cost per life saved of AMF is $5,500. (It looks like earlier archives of that same page do state $7,500 (e.g. here), so that number may have been current while the piece was being drafted.)
Additionally, the $5,500 number, which is based on GiveWell’s Aug. 2017 estimates (click here and see B84), is unusually high. Here are GiveWell’s estimates by year:
2017 (final version): $3,280 (click here and see B91)
2018 (final version): $4,104 (click here and see R109)
2019 (final version): $2,331 (click here and see B162) (downside adjustments seem to cancel with excluded effects)
2020 (Sep. 11th version): $4,450 (click here and see B219)
Once the AMF number is updated, the near-term existential risk number is less than five times as good as the AMF number. And if the existential risk number is adjusted for uncertainty (see here and here), then it could end up worse than the AMF number. That’s why I assumed the change on the page represented a shift in your views rather than an illustration. It puts the numbers so close to each other that it’s not obvious that the near-term existential risk number is better and it also makes it easier for factors like personal fit to outweigh the difference in impact.
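As a rough check of the "less than five times as good" claim, here is a simple comparison using the article's $1,000-per-life existential risk figure against GiveWell's AMF estimates listed above. This treats cost per life saved as the sole comparison metric, which is a simplification (it ignores uncertainty adjustments and non-mortality effects):

```python
# Article's illustrative near-term existential risk figure ($ per life saved)
xrisk_cost = 1_000

# GiveWell's AMF cost-per-life-saved estimates by year ($), from above
amf_estimates = {2017: 3_280, 2018: 4_104, 2019: 2_331, 2020: 4_450}

for year, amf_cost in amf_estimates.items():
    ratio = amf_cost / xrisk_cost
    print(f"{year}: x-risk figure is {ratio:.2f}x as cost-effective as AMF")
```

Every ratio comes out under 5, consistent with the point that once the AMF number is updated, the gap is small enough for uncertainty adjustments or personal fit to matter.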
What do you think is the strongest argument against working to improve the long-term future? What do you think is the strongest argument against working to reduce existential risk?
What percent of those who drifted from the 50% category ended up in the 10% category instead of out of the movement entirely?
And would the graph of the number of people remaining in the 50% category over time look roughly linear or was drifting concentrated at the beginning or near the end? What about for the 10% category?
I was planning to give some feedback on the 2017 survey instrument after the last post in that series, which I had assumed would finish before the 2018 survey was released. Since my assumption was wrong (sorry!), I’ll just post my feedback here to be considered for the 2019 survey:
One major aspect of EA is the regularly produced online content on this forum and elsewhere. It might be useful to ask about the average number of hours a week people spend reading EA content as that could help people evaluate the value of producing online content.
You could also ask people whether they’ve attended an EA Global conference. The responses could be used as a proxy to distinguish more involved and less involved EAs, which could be used in analyzing other issues like cause area preferences.
For the question about career path, you could add advocacy as a fourth option. (80,000 Hours treats it as one of the four broad options.)
For the same reasons that race was included in the 2017 survey, it could be useful to ask about parental education (as a proxy for socioeconomic background).
You could ask people how many of their acquaintances they have seriously attempted to persuade to join EA and how many of those did join. This could provide useful data on the effectiveness of personal outreach.
Another question that may be worth asking: “Have you ever seriously considered leaving EA?” For those that answer yes, you could ask them for their reasons.
I think it could be useful to have data on the percent of EAs who are living organ donors and the percent of EAs who intend to become living organ donors. The major downside is that it may cause people to think that being a living organ donor is part of EA.
Borrowing from Peter Singer, I propose asking: “Has effective altruism given you a greater sense of meaning and purpose in your life?”
You could also ask about systemic change: “How much do you think the EA community currently focuses on systemic change (on a scale of 1 to 10)?” and “How much do you think the EA community should focus on systemic change (on a scale of 1 to 10)?” You could include a box for people to explain their answers.
Lastly, you could ask questions about values:

A) “Do you believe that preventing the suffering of a person living in your own country is more important than preventing an equal amount of suffering of a person living in a different country? Assume that there is no instrumental value to preventing the suffering of either and that in both cases the suffering is being prevented by means other than preventing existence or causing death.”

B) “Do you believe that preventing the suffering of a human is more important than preventing an equal amount of suffering of a non-human animal? Assume that there is no instrumental value to preventing the suffering of either and that in both cases the suffering is being prevented by means other than preventing existence or causing death.”

C) “Do you believe that preventing the suffering of a person living in the present is more important than preventing an equal amount of suffering of a person living several centuries from now? Assume that there is no instrumental value to preventing the suffering of either and that in both cases the suffering is being prevented by means other than preventing existence or causing death.”

D) “Do you believe that it is bad if a person who would live a happy life is not brought into existence?”
Here’s what Lewis Bollard had to say about the talent vs. funding issue when asked about it on the 80,000 Hours podcast (in September 2017):
Robert Wiblin: My impression is that animal welfare organisations, at least the ones that I’m aware of that are associated with Effective Altruism, are often among the most funding constrained. They often feel like they’re most limited by access to money. Does this suggest that people who are concerned with animal welfare should be more inclined to do earning to give and, perhaps, rather than work in the area, instead make money and give it away?
Lewis Bollard: I don’t think so. I think that was true until two years ago, or eighteen months ago, when we started grantmaking in this field. I think the situation has dramatically improved in terms of funding, largely because of Open Phil entering this field, but also because there are a number of other very generous donors who’ve either entered the field or significantly increased their giving in the last two years.
Right now I think there is a bigger talent gap than financial gap for farm animal welfare groups. That’s not to say it will always be that way, and I certainly do think that for someone whose aptitude or inclination is heavily toward earning to give, it could still well make sense. If someone has great quantitative skills and enjoys working at a hedge fund, then I would say earn to give. That could still be a really powerful way, and we will need more and more funders over time to continue scaling up the movement. But all things equal, I would encourage someone to focus more on the talent piece now, because I do think that things have really flipped in the last few years, and I’m pretty optimistic that the funding will continue to grow in this space for animal welfare.
Robert Wiblin: What makes you confident about that? You don’t expect to be fired in the next few years?
Lewis Bollard: First, I hope I won’t be fired, but I think there’s a deep commitment from the Open Philanthropy Project to continue strong funding in this space, to continue funding on at least the level we’re funding currently and hopefully more.
I’ve also just seen a number of new large-ish funders coming online. Just in the last two years I would say the number of funders giving more than two hundred thousand dollars a year has doubled, and I’ve started to see real interest from some other major potential funders.
I think it’s natural that, as this issue has gained public prominence, a lot of potential donors, or people who have great wealth, have realised that this is something important and something where they can make a great difference.
Which five books would you recommend to an 18 year old?
Should non-suffering focused altruists cooperate with suffering-focused altruists by giving more weight to suffering than they otherwise would given their worldview (or given their worldview adjusted for moral uncertainty)?
Do you think there are any actions that would obviously decrease existential risk? (I took this question from here.) If not, does this significantly reduce the expected value of work to reduce existential risk or is it just something that people have to be careful about (similar to limited feedback loops, information hazards, unilateralist’s curse etc.)?
This makes me feel more strongly that there should be a separate career advice organization focused on near term causes. (See here for my original comment proposing this idea.)
A near term career advice organization could do the following:
Write in-depth problem profiles on causes that could be considered to be among the most pressing from a near term perspective but that are not considered to be among the most pressing from a long term perspective (e.g. U.S. criminal justice reform, developing country mental health, policy approaches to global poverty, food innovation approaches to animal suffering, biomedical research focused on aging)
Write in-depth career reviews of careers that could be considered to be among the highest impact from a near term perspective but that are not considered to be among the highest impact from a long term perspective (e.g. careers that correspond with the problems listed in the previous bullet point, specific options in the global poverty space, specific options in the animal suffering space)
Produce a podcast that focuses on interviewing people working on issues that could be considered to be among the most pressing from a near term perspective but that are not considered to be among the most pressing from a long term perspective
Become deeply familiar with the global poverty space, the animal suffering space, and other cause areas that are much more likely to be prioritized by near term people and form close connections to organizations working in such cause areas
Provide job postings, career coaching, and referrals based on the information gained through the previous bullet point
I think the proposed organization would actually complement 80,000 Hours by expanding the number of cause areas for which there’s in-depth career advice and coaching; the two organizations could even establish a partnership where they refer people to each other as appropriate.
(As noted in my original comment, I think it’s better to have a separate organization do this since a long-term focused organization understandably wants to focus its efforts on causes that are effective from its perspective.)
This approach could have various benefits including:
directly increasing impact by providing better advice to individual EAs who are unable to contribute to causes that are considered to be among the most pressing from a long term perspective
benefiting the long-term space by keeping individuals who have the potential to contribute to the long term space involved with EA while they gain more skills and experience
benefiting the long-term space by increasing the number of people who are able to benefit from EA career advice and thus the number of people who will refer others to 80,000 Hours (directly or through this proposed organization)
benefiting the long-term space through the various benefits of worldview diversification (learning from feedback loops, community image, option value)
benefiting individual EAs by helping them find a more fulfilling career (their utility counts too!)
Here are ten reasons you might choose to work on near-term causes. The first five are reasons you might think near term work is more important, while the latter five are why you might work on near term causes even if you think long term future work is more important.
You might think the future is likely to be net negative. Click here for why one person initially thought this and here for why another person would be reluctant to support existential risk work (it makes space colonization more likely, which could increase future suffering).
Your view of population ethics might cause you to think existential risks are relatively unimportant. Of course, if your view was merely a standard person affecting view, it would be subject to the response that work on existential risk is high value even if only the present generation is considered. However, you might go further and adopt an Epicurean view under which it is not bad for a person to die a premature death (meaning that death is only bad to the extent it inflicts suffering on oneself or others).
You might have a methodological objection to applying expected value to cases where the probability is small. While the author attributes this view to Holden Karnofsky, Karnofsky now puts much more weight on the view that improving the long term future is valuable.
You might think it’s hard to predict how the future will unfold and what impact our actions will have. (Note that the post is from five years ago and may no longer reflect the views of the author.)
You might think that AI is unlikely to be a concern for at least 50 years (perhaps based on your conversations with people in the field). Given that ongoing suffering can only be alleviated in the present, you might think it’s better to focus on that for now.
You might think that when there is an opportunity to have an unusually large impact in the present, you should take it even if the impact is smaller than the expected impact of spending that money on long term future causes.
You might think that the shorter feedback loops of near term causes allow us to learn lessons that may help with the long term future. For example, Animal Charity Evaluators may help us get a better sense of how to estimate cost-effectiveness with relatively weak empirical evidence, Wild Animal Suffering Research may help us learn how to build a new academic field, and the Good Food Institute may help us gain valuable experience influencing major economic and political actors.
You might feel like you are a bad fit for long term future causes because they require more technical expertise (making it hard to contribute directly) and are less funding constrained (making it hard to contribute financially).
You might feel a spiritual need to work on near term causes. Relatedly, you might feel like you’re more likely to do direct work long term if you can feel motivated by videos of animal suffering (similar to how you might donate a smaller portion of your income because you think it’s more likely to result in you giving long term).
As you noted, you might think there are public image or recruitment benefits to near term work.
Note: I do not necessarily agree with any of the above.