Thank you to you and the 80,000 Hours team for the excellent content. One issue that I’ve noticed is that a relatively large number of pages state that they are out of date (including several important ones). This makes me wonder why 80,000 Hours does not have substantially more employees. I’m aware that there are issues with hiring too quickly, but GiveWell was able to expand from 18 full-time staff (8 in research roles) in April 2017 to 37 staff today (13 in research roles and 5 in content roles). Is the reason that 80,000 Hours cannot grow as rapidly that its research is more subjective in nature, making good judgment more important, and that judgment is quite difficult to assess?
For those who are curious, in April 2015, GiveWell had 18 full-time staff, while 80,000 Hours currently has a CEO, a president, 11 core team members, and two freelancers, and works with four CEA staff.
It seems to me that there are two separate frameworks:
1) the informal Importance, Neglectedness, Tractability framework best suited to ruling out causes (i.e. this cause isn’t among the highest priority because it’s not [insert one or more of the three]); and
2) the formal 80,000 Hours Scale, Crowdedness, Solvability framework best used for quantitative comparison (by scoring causes on each of the three factors and then comparing the total).
Treating the second as merely a formalization of the first can be unhelpful when thinking through them. For example, the 80,000 Hours framework does not itself account for diminishing marginal returns, yet it justifies including the crowdedness factor on the basis of diminishing marginal returns.
Notably, EA Concepts has separate pages for the informal INT framework and the 80,000 Hours framework.
In his blog post “Why Might the Future Be Good?”, Paul Christiano writes:
What natural selection selects for is patience. In a thousand years, given efficient natural selection, the most influential people will be those who today cared what happens in a thousand years. Preferences about what happens to me (at least for a narrow conception of personal identity) will eventually die off, dominated by preferences about what society looks like on the longest timescales.
(Please read all of “How Much Altruism Do We Expect?” for the full context.)
Thanks Lucy! Readers should note that Elie’s answer is likely partly a response to Lucy’s question.
What are your thoughts on the argument that the track record of robustly good actions is much better than that of actions contingent on high-uncertainty arguments? (See here and here at 34:38 for pushback.)
Should non-suffering focused altruists cooperate with suffering-focused altruists by giving more weight to suffering than they otherwise would given their worldview (or given their worldview adjusted for moral uncertainty)?
Has your thinking about donor coordination evolved since 2016, and if so, how? (My main motivation for asking is that this issue is the focus of a chapter in a recent book on philosophical issues in effective altruism, though the chapter appears to be premised on this blog post, which has an update clarifying that it has not represented GiveWell’s approach since 2016.)
How confident are you that the solution to infinite ethics is not discounting? How confident are you that the solution to the possibility of an infinitely positive/infinitely negative world automatically taking priority is not capping the amount of value we care about at a level low enough to undermine longtermism? If you’re pretty confident about both of these, do you think additional research on infinities is relatively low priority?
What do you think is the strongest argument against working to improve the long-term future? What do you think is the strongest argument against working to reduce existential risk?
(This comment assumes GiveWell would broadly agree with a characterization of its worldview as consequentialist.) Do you agree with the view that, given moral uncertainty, consequentialists should give some weight to non-consequentialist values? If so, do you think GiveWell should give explicit weight to the intrinsic value of gender equality apart from its instrumental value? And if yes, do you think that, in considering the moral views of the communities that GiveWell operates in, it would make sense to give substantially more weight to the views of women than of men on the value of gender equality?
There are many ways that technological development and economic growth could potentially affect the long-term future, including:
Hastening the development of technologies that create existential risk (see here)
Hastening the development of technologies that mitigate existential risk (see here)
Broadly empowering humanity (see here)
Improving human values (see here and here)
Reducing the chance of international armed conflict (see here)
Improving international cooperation (see the climate change mitigation debate)
Shifting the growth curve forward (see here)
Hastening the colonization of the accessible universe (see here and here)
What do you think is the overall sign of economic growth? Is it different for developing and developed countries?
Note: The fifth bullet point was added after Toby recorded his answers.
Do you think that “a panel of superforecasters, after being exposed to all the arguments [about existential risk], would be closer to [MacAskill’s] view [about the level of risk this century] than to the median FHI view”? If so, should we defer to such a panel out of epistemic modesty?
How much uncertainty is there in your case for existential risk? What would you put as the probability that, in 2100, the expected value of a substantial reduction in existential risk over the course of this century will be viewed by EA-minded people as highly positive? Do you think we can predict what direction future crucial considerations will point based on what direction past crucial considerations have pointed?
What do you think of applying Open Phil’s outlier opportunities principle to an individual EA? Do you think that, even in the absence of instrumental considerations, an early career EA who thinks longtermism is probably correct but possibly wrong should choose a substantial chance of making a major contribution to increasing access to pain relief in the developing world over a small chance of making a major contribution to reducing GCBRs?
Is the cause area of reducing great power conflict still entirely in the research stage or is there anything that people can concretely do? (Brian Tse’s EA Global talk seemed to mostly call for more research.) What do you think of greater transparency about military capabilities (click here and go to 24:13 for context) or promoting a more positive view of China (same link at 25:38 for context)? Do you think EAs should refrain from criticizing China on human rights issues (click here and search the transcript for “I noticed that over the last few weeks” for context)?
In an 80,000 Hours interview, Tyler Cowen states:
I don’t think we’ll ever leave the galaxy or maybe not even the solar system.
. . .
I see the recurrence of war in human history so frequently, and I’m not completely convinced by Steven Pinker [author of the book The Better Angels of Our Nature, which argues that human violence is declining]. I agree with Steven Pinker, that the chance of a very violent war indeed has gone down and is going down, maybe every year, but the tail risk is still there. And if you let the clock tick out for a long enough period of time, at some point it will happen.
Powerful abilities to manipulate energy also mean powerful weapons, eventually powerful weapons in decentralized hands. I don’t think we know how stable that process is, but again, let the clock tick out, and you should be very worried.
How likely do you think it is that humans (or post-humans) will get to a point where existential risk becomes extremely low? Have you looked into the question of whether interstellar colonization will be possible in the future, and if so, do you broadly agree with Nick Beckstead’s conclusion in this piece? Do you think Cowen’s argument should push EAs towards forms of existential risk reduction (referenced by you in your recent 80,000 Hours interview) that are “not just dealing with today’s threats, [but] actually fundamentally enhancing our ability to understand and manage this risk”? Does positively shaping the development of artificial intelligence fall into that category?
Edit (likely after Toby recorded his answer): This comment from Pablo Stafforini also mentions the idea of “reduc[ing] the risk of extinction for all future generations.”
What are your thoughts on these questions from page 20 of the Global Priorities Institute research agenda?
How likely is it that civilisation will converge on the correct moral theory given enough time? What implications does this have for cause prioritisation in the nearer term?
How likely is it that the correct moral theory is a ‘Theory X’, a theory radically different from any yet proposed? If likely, how likely is it that civilisation will discover it, and converge on it, given enough time? While it remains unknown, how can we properly hedge against the associated moral risk?
How important do you think those questions are for the value of existential risk reduction vs. (other) trajectory change work? (The idea for this question comes from the informal piece listed after each of the above two paragraphs in the research agenda.)
Edited to add: What is your credence in there being a correct moral theory? Conditional on there being no correct moral theory, how likely do you think it is that current humans, after reflection, would approve of the values of our descendants far in the future?
Do you think there are any actions that would obviously decrease existential risk? (I took this question from here.) If not, does this significantly reduce the expected value of work to reduce existential risk, or is it just something that people have to be careful about (similar to limited feedback loops, information hazards, the unilateralist’s curse, etc.)?
In the new 80,000 Hours interview of Toby Ord, Arden Koehler asks:
Arden Koehler: So I’m curious about this second stage: the long reflection. It felt, in the book, like this was basically sitting around and doing moral philosophy. Maybe lots of science and other things and calmly figuring out, how can we most flourish in the future? I’m wondering whether it’s more likely to just look like politics? So you might think if we come to have this big general conversation about how the world should be, our most big general public conversation right now is a political conversation that has a lot of problems. People become very tribal and it’s just not an ideal discourse, let’s say. How likely is it do you think that the long reflection will end up looking more like that? And is that okay? What do you think about that?
Ord then gives a lengthy answer, with the following portion the most directly responsive:
Toby Ord: . . . I think that the political discourse these days is very poor and definitely doesn’t live up to the kinds of standards that I loftily suggest it would need to live up to, trying to actually track the truth and to reach a consensus that stands the test of time that’s not just a political battle between people based on the current levels of power today, at the point where they’ll stop fighting, but rather the kind of thing that you expect people in a thousand years to agree with. I think there’s a very high standard and I think that we’d have [to] try very hard to have a good public conversation about it.