William MacAskill is an associate professor in philosophy at Oxford University. He was educated at Cambridge, Princeton, and Oxford, and is one of the progenitors of the effective altruism movement. His book on the topic, Doing Good Better, was published by Penguin Random House in 2015. He is the co-founder of three non-profits in the effective altruism movement: Giving What We Can, 80,000 Hours, and the Centre for Effective Altruism, and is also a research fellow at the Global Priorities Institute.
We’ve lightly edited this Q&A for clarity. You can also watch it on YouTube and read it on effectivealtruism.org.
The Talk
Habiba Islam (Moderator): Hello, and welcome to this live Q&A with Professor Will MacAskill at EAGxVirtual.
I’m Habiba Islam. I’ll be emceeing this session. I’ll start with a brief intro, and then we’ll dive straight into questions.
Will is an associate professor in philosophy at Oxford University and the author of Doing Good Better. He was educated at Cambridge, Princeton, and Oxford, and is one of the co-founders of the effective altruism movement. In fact, he has co-founded three nonprofits based on effective altruist principles: Giving What We Can, 80,000 Hours, and the Centre for Effective Altruism. He’s also a senior research fellow at the Global Priorities Institute and a director of the Forethought Foundation. [...]
I want to start off, Will, by just talking a bit about what you’re currently working on. I understand that you’re working on a book about longtermism. Would you like to tell us a bit about what that will cover?
Will: Terrific. This is what I’m spending almost all my time on at the moment. It presents the case for longtermism — the moral reasons why we should care about, and how we can influence, the very long-run future. We shouldn’t discount future generations; it’s of tremendous moral importance that we try to make the long-run future better.
Then, [the book will provide] a long exploration answering the question “Given [the case for longtermism], what follows? What should we do?” There will be chapters on values changing over time, AI, the idea of extinction and civilizational collapse, and the idea of economic stagnation (i.e. perhaps we never reach a very technologically advanced state, even though there’s no major catastrophe).
I also have chapters on the value of the future, population ethics, whether we should be investing now or moving resources later, what longtermist society looks like, and how best you, as an individual, can make sure the long-run future goes well.
Habiba: When are you expecting that book to come out? And who’s the key audience?
Will: So it will probably be a while. The initial deadline I had in my head for submitting the manuscript was March 20, 2021. Then I did a bit of reference class forecasting, and I think that I’ll more likely finish it sometime in 2022, and then it would come out six to 12 months after that. So you’re going to have to hold on a little while.
In the meantime, read Toby Ord’s book, The Precipice.
Habiba: I’m also getting a lot of advance tastes of the content of the book, because you did a tour and have been testing the ideas as you go.
Will: Exactly. And if you go on the Global Priorities Institute website, you can see a talk I gave for Steven Pinker’s class, which is kind of a teaser for the content the book will cover.
Habiba: So how are you finding the process of writing the book this time different from writing Doing Good Better?
Will: It’s really different in a lot of ways. My guess is it’s something like 10 to 20 times the work. And that’s for a few reasons. First, it’s just going to be a physically bigger book. There’s no way of getting around that.
Also, you asked about the audience: I’m aiming for it to be among the small number of books that are both widely cited academically and accessible enough to be widely read. I think Animal Liberation, The Better Angels of Our Nature, and Guns, Germs, and Steel are in this category.
And then, as I’ve been writing it, it has ended up involving more novel research. I’m trying to [approach the book by] thinking, “Okay, I want to understand this stuff myself.” And more often than not, I was coming up with views that perhaps haven’t been defended before. [That makes it] a much bigger task.
However, we also have a lot more resources. There are a lot of amazing people [helping]. I have two full-time research assistants [Aron Vallinder and Luisa Rodriguez] working on it. And in many cases, there’s some issue I want to know more about, and I can just contact someone; I have a small army of contacts who are working on specific research topics. It’s so nice that we’re now in a situation where that’s possible.
Habiba: Yes. A lot of people have specific questions about particular research issues or issues that you’ve considered in the past. I want to dive into your thoughts on a few of those. Before we get to the first one, do you have a preferred term for “hingeyness” at the moment [in reference to the extent to which we might be living at the “hinge of history” — i.e. the most influential time in human history]?
Will: I did and I think I’ve forgotten it.
Habiba: I’m going to use “hingeyness.”
Will: We can go for “hingeyness” — that’s fine.
Habiba: [Where do you stand] on the debate around how likely it is that we’re currently at a “hinge of history”? Have you changed your opinion since writing your EA Forum post on this topic?
Will: I haven’t changed my position very much. In that discussion, one thing that I think was a bit of a shame was how people focused a lot on the “ur prior” [hypothetical prior] of whether it’s right to use what is essentially a self-sampling assumption — or the principle of indifference — in terms of whether you’re the most important person out of all people who ever lived.
But I think more of the meat is in identifying particular things about the current world that are very distinctive — in particular, the growth rate, which I covered in the post. The current rate is [so fast] that it’s not very plausible that it’s sustainable for tens of thousands of years. That’s one way in which the current time is very distinctive.
There are some arguments that you could use to move from there to say, “We’re going faster through the space of possible technologies, so for that reason, we’re more likely to be at a very ‘hingey’ moment.”
I think I have, recently, moved somewhat further in that direction. I think that’s a pretty good argument — the strongest one [on the topic]. It would be good to do more work on it. And I’ve become somewhat more sympathetic recently to the idea that we might have fairly fast growth rates over the coming century. Three percent growth at the moment is already very fast, but maybe it will increase to 5%, 10%, or even more. And that’s very fast. That has moved my opinion a little bit.
The second thing I would emphasize, though, is that the “hingeyness” of the current century is quite relevant to assessing how big some of the claims being made are. But it’s often not the most relevant [criterion] for action-related purposes. [In those cases, the question is] “How ‘hingey’ is it now, as in, this year?” If you’re a longtermist, should you donate now? Should you be spending your time [on a certain cause] now, versus in a century’s time or a decade’s time? Should you invest to give later? There, there’s an obvious case for thinking that now is not a very “hingey” time. I wish I’d emphasized that a little bit more.
Habiba: Yes. Hearing Phil Trammell talk about this on the 80,000 Hours podcast, I was struck by how unlikely it seems that the best thing to do is to donate your 10% at the end of each month — that is not necessarily the most effective moment to donate, or the most effective use of that money compared to everything else you could do with it at that moment.
Will: Think about [the timeframe of] your whole life. The amount of influence you can have at different moments in your life goes up and down. Most of the time you’re just waiting. And then you pounce and invest everything. That’s especially true if you’re a donor with a very small amount of resources compared to the world at large.
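[Editor’s note: to make the give-now-versus-give-later logic concrete, here is a minimal illustrative sketch (ours, not Will’s or Trammell’s actual model), with made-up numbers for the market return and for how cost-effective future opportunities are relative to today’s.]

```python
# A minimal sketch (hypothetical numbers, not from the talk): compare
# donating a sum today against investing it and donating later, when
# opportunities may be more or less cost-effective than today's.

def future_impact(amount, years, market_return, effectiveness_ratio):
    """Impact of investing `amount` for `years` at `market_return`, then
    donating when opportunities are `effectiveness_ratio` times as
    cost-effective as today's (< 1 means worse, > 1 means better)."""
    return amount * (1 + market_return) ** years * effectiveness_ratio

impact_now = 10_000  # baseline: $10,000 given to today's best option

# At a 5% annual return over 30 years, giving later wins even if future
# opportunities turn out only half as cost-effective as today's...
impact_later = future_impact(10_000, years=30, market_return=0.05,
                             effectiveness_ratio=0.5)
print(impact_now, round(impact_later))  # 10000 vs ~21610

# ...and if a genuinely "hingey" moment arrives (ratio >> 1), the patient
# donor who waited can "pounce", as Will describes above.
```

[The point is just that patient resources compound, so a donor can afford to wait and deploy everything at an unusually “hingey” moment.]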
Habiba: Other questions people have are about specific causes — in particular, climate change. Has your opinion on that topic changed?
Will: Yes, it has. You’re making me feel guilty right now because I have this very long blog post that I have not yet [published] about climate change from a longtermist perspective. Digging into climate change [for my book] definitely made me feel more concerned about it for a few reasons:
* I think the standard view that climate change isn’t an existential risk is a bit confused. It’s kind of like saying, “Failure to exercise is not a risk of death.” We should clearly be thinking about climate change as an existential risk factor. There is a meta-analysis suggesting that climate change is a significant aggravator of war. I’m somewhat skeptical of it; I probably believe the sign, but not the magnitude, of the effect.
* There are also more recent studies suggesting that climate change doesn’t just reduce economic output at the time; it has not only a level effect, but also a growth-rate effect.
* I’ve become more worried recently about this idea of long-run economic stagnation, which has received almost no attention. If you think about humanity, we start off as apes, and then we aim for this extremely technologically advanced society. One way we can fail to get there is just by killing ourselves. But another way is by failing to grow. And climate change could certainly be a contributor to that, because recent evidence seems to suggest that climate change affects the growth rate.
* [Climate change interventions are] just so robustly good, especially when it comes to what Founders Pledge typically champions funding the most: clean tech. Renewables, super-hot rock geothermal, and other sorts of clean energy technologies are really good in a lot of worlds, over the very long term — and we have very good evidence to think that. A lot of the other stuff we’re doing is much more speculative. So I’ve started to view [working on climate change] as the GiveDirectly of longtermist interventions. It’s a fairly safe option.
* That said, I think the neglectedness argument [for prioritizing other risks] is enormous. I think the recent COVID pandemic shows the [danger] of neglecting AI and biothreats in a very visceral way. While that still holds, my attitude if someone is working on climate change is: “This is amazing. This is awesome.” Maybe there are some [causes that are] even better — it’s like working on bed nets or vaccines within global development. Both are making things a lot better, but there may be some things that are even better.
I’m getting rained on a lot, so I’m going inside. [Laughs.]
Habiba: [Laughs and waits.] A few people have asked about your previous studies of moral uncertainty and normative uncertainty. What are some important takeaways from that work that are most relevant to the EA movement?
Will: I think that the case for longtermism looks really good under [the lens of] normative uncertainty. I think it may change what you care about from a longtermist perspective. I think it somewhat strengthens the case for trajectory change over extinction-risk reduction. (By “trajectory change,” I mean making the future better, conditional on survival; “extinction-risk reduction” is just whether we die off or not.)
Even better would be a category measuring whether we get to a high level of technological advancement. That would slightly strengthen [the case] for us to care more about the worst case outcomes [of failing to make the future better], rather than being neutral on achieving good or bad outcomes. Also, we might [be more inclined to] view the issue as one related to population ethics, which might cause us to care more about ensuring that we have a good future.
But I think the really big takeaway [centers on] where we should try to aim. Our natural approach is to think, “I have moral view X, so that’s the sort of future I want to get to.” But [what if I take] moral uncertainty really seriously, and appreciate how far away we might be from the truth? Consider the fundamental laws of physics and how much work has gone into that field. A tiny fraction of that has gone into moral philosophy. It’d be a really remarkable thing if people had already reached the correct answer.
In that case, what you want to do is aim for an intermediate goal. In general, you want to try to build what I call a “morally exploratory society” in which innovators generate tons of ideas, yet the structure of the society is such that good arguments and reasons win out.
I think that’s very hard to do. Think of all of the other pressures that affect civilization. For example, which set of views has the most military power? What culture or set of views cares most about going to space? There are all sorts of reasons why society might not converge on the best moral view. I think that’s the most important takeaway.
Habiba: That’s actually very closely related to the next question: Is long reflection something that humanity can plausibly do? The history of human expansion doesn’t seem to have followed philosophy.
Will: Yes, well, something I’ve been learning a lot about recently is the abolition of slavery. I think it is a case study of [long reflection] working. I’m not at all claiming that this is common; I don’t think that long reflection is very likely to be the default. We must try to do it. But in the early 18th century, there was a proliferation of different religious groups, which were like small experiments in moral thinking.
The Quakers, in particular, were a melting pot of moral radicalism. There were people who were hardcore about their moral views. In the book I’ll talk about Benjamin Lay and how he opposed the death penalty.
[Video freezes and a portion of the explanation is inaudible.]
[This was the] first time the idea of abolition really [took hold]. Slavery had been utterly persistent. [Video freezes.] [Then people began] to think, “Slave owning is bad. It corrupts the soul.”
But the idea that it should be abolished, and there should be no slavery in the world, really didn’t happen until the early 18th century because of this greater liberalism [and flourishing] of ideas. Then, the abolitionists started to develop arguments. Over the course of the next 100 years, they managed to convince the British elite and the British public, who controlled most of the world at the time, to act on the basis of those arguments. Britain did that, and it’s an astonishing thing, because Britain went from being among the most barbaric of the slave-trading societies, to [reversing course] over several decades.
The country took an enormous economic hit in order to abolish slavery. Then, they bribed other nations to abolish slavery as well. They set up a force in the Navy to police and capture slave ships and set the slaves free. It’s just an incredible example of those in power being convinced by moral arguments, changing their ways, and acting essentially against their natural self-interests.
So there’s at least one case in history.
It is interesting to reflect on the ways in which that might not have happened. If there hadn’t been a diversity of moral perspectives and a liberal society, it wouldn’t have been possible. I think if the British hadn’t had such a hegemony in the world at the time — if there’d been more competition between them and the other colonial powers — maybe the pain of taking that economic hit would have been worse.
I think it’s super hard [to take a long-term, moral view]. But I don’t think it’s impossible.
Habiba: Yes. With that question, we had some slight problems with your video, but I think it’s mostly working, so we’re going to soldier on.
One last question around these different research questions and longtermism: Do you have a view on how we should distribute resources between mitigating existential risks and uncovering or trying to mitigate risks of suffering?
Will: I don’t really use those categorizations; I find “existential risk,” in particular, a bit vague as a term. I prefer to think about the probability that we become a very technologically advanced civilization, and then ask, “How good or bad is that civilization?”
In that sense, I see suffering risks or s-risks as a type of existential risk [related to] trajectory change. But an s-risk could also just be anything that reduces the value of the future. My personal view at the moment is to be much more bullish on trajectory change compared to the risk of extinction. And the main reason for that is the more I look into it, the harder it seems to kill everybody. That seems extremely difficult to do. Similarly, the more I look into civilization, the more robust it seems. The idea of civilizational collapse seems less likely than I would have thought at the outset [of my research].
On the other hand, things like changes in values are much more persistent. There’s a whole literature on this. Values can persist over many, many centuries, for thousands of years. That makes the future predictable in a way, because if you tell me it’s the year 3000 and the only bit of information you give me is that slave owning is legal, [I’ll envision] a dystopia. I don’t need to know anything else about the world. We can have remarkable prescience based on values.
Therefore, I’m much more interested in the question of where we go, rather than whether we get there. One thing I should say, to be clear, is that I put AI in this “trajectory change” category. I do that for two reasons:
First, even if you hold the Bostrom-Yudkowsky view presented with their “paperclip” scenarios, you still have an enormous civilization stretching out for many billions of years. It’s just a very weird and alien one. So, if you’re changing it from paperclips to something else, you’re not changing the size of civilization; you’re changing how good it is.
Habiba: Even if it’s just one superintelligence in a universe full of paper clips — that still feels like the same size civilization to you?
Will: I think it would be very unlikely to be one superintelligence, because you’d have all of these little paperclip-making bots, and they would go all kinds of places. I also think it’s extremely unlikely that it would be paperclips. It’s going to be something else. It might be the lock-in of a particular set of people’s views — something for which the values are a bit wrong, but they’re locked in and persist for a very long time.
I tend to find that framing clearer in my head. It’s also relevant if you think about risks that could stop the possibility of other life, too; most of the extinction risks we think about wouldn’t prevent the possibility of other life evolving on Earth, and potentially in other places, whereas trajectory changes like AI do preclude that possibility.
Sorry — there’s a lot more [to say on this question], but the more I answer it, the more I’m on the “trajectory change” side.
Habiba: I’m going to switch into questions that are more about the effective altruism community and movement building. What, in your mind, would success look like for the EA movement? If you had to imagine what the EA movement could be like in 10 years’ time, how would you describe it?
Will: I might extend the horizon by a bit: What’s it like in a century’s time or a few centuries’ time? Again, as we’ve been learning, our cultural norms are extremely fragile. They’re very impermanent and highly contingent.
[I can envision a world where you’re considered] a jerk if you don’t make helping others a very significant part of your life, and think very carefully about how to do that. I think that’s the sort of world we want to get to, a world in which people don’t even know about the term “EA,” or don’t think of it as a novel, interesting thing. They’d just think of it as common sense.
That’s what I think about for the long term.
Habiba: So, that’s success in 100 years.
Will: Yes, that’s success in 100 years. I think the current focus for the EA movement over the next 10 years is to figure out the defining ideas. It could be the case that these core EA ideas become extremely popular, but some of them are totally wrong.
Look at environmentalism. Let’s say you’re keen on the stated principles of environmentalism. Then you’re asked, “What about nuclear power?” — and [because the movement latched onto opposing it early on,] you’re against nuclear power, even though nuclear power is great from an environmental perspective. Or it could be that you used to think the planet was overpopulated, and now you’ve decided those arguments are really bad.
But ideas are very sticky. And bad ideas [that a movement develops] early on can stay indefinitely. Therefore, if you have really ambitious aims for the movement as a whole, over the course of the next 10 years you want to make sure you haven’t latched onto some bad early ideas.
I like the startup analogy. There’s a period where you’re developing the product. Then there’s a period where you’re just marketing it. And you really want to get it just so.
Habiba: You want to clear the hurdle of the development stage.
Will: Yes. This is consistent with my views on “hingeyness.” I think that it’s much more important to get it right in the coming years than it is to go really fast.
Habiba: I do find it somewhat surprising that you think that ideas have so much longevity, and that things can stick for such long time periods. I’m wondering if you have any suggested readings that people can look into if they’re similarly surprised by that claim.
Will: Okay, terrific. I kind of forget that I spent the last two years trying to learn a lot more history. And that’s been one of the biggest updates, actually. This gives me a bit of a kick in the butt to create an overview of the persistence literature in economics.
I examined things like how some agricultural societies adopted plow agriculture, and others relied on the hoe and digging stick. Use of the plow requires a lot of upper-body strength and grip strength. Therefore, societies that relied on the plow had men in the fields and women working in the home. In societies that used the hoe or digging stick, the work was much more evenly distributed.
Then, you can look at those cultures 1,000 years later, and examine the gender norms there around female labor force participation, firm ownership, and participation in politics. You can even see these norms among second-generation immigrants from those cultures in the United States. And you find that the cultures that adopted the plow have less equal gender norms.
[I’m realizing that] we should publish this report so that the world can see it.
Habiba: It is staggering that things can last that long.
Will: It’s wild. The whole literature is wild. And this is one of the really nice things about being able to commission research — because come on, this all seems too interesting to be true. So I commissioned Jaime Sevilla, who’s a graduate student, to look at this literature and [determine whether] it checks out. He has just begun this project. He went into it feeling quite skeptical, but now he’s saying, “Actually, the study’s a better runner. It’s looking good.”
Habiba: I think your analogy of [the EA movement] being in the product development stage maybe answered some of these questions, but I want to ask the second top-rated question: Should EA be a mass movement in the way that animal rights or the feminist movement is — i.e., having an end goal where everyone is in the movement, while risking resistance?
Should we be doing that right now? Should we be aiming to grow big?
Will: I think my view there is ultimately yes, and in the short run, no. I think longtermism is the key. I think of longtermism for EA as being similar [in terms of its development] to where socialism was in the mid-20th century.
In that case, the core moral insight was that workers’ rights really matter. And then the question became: What do you want to do about that? It’s very, very non-obvious. You might think that we need complete state control, and the state decides how much tax revenue is sent where. And it turns out that works really badly.
But the social democracy of the Nordic countries works really well. There, you have low regulation and high redistribution. That was extremely non-obvious from the mid-19th century through the mid-20th century. But it was really important to figure that out before turning societies into Communist states.
The question with longtermism is whether benefiting the very long-run future is easier or harder than benefiting workers. I think it’s harder. And in looking at the history of ideas, again, it can take many decades to actually work through ideas before we figure out what’s going on. If you go too fast, then perhaps you’re just spreading really bad ideas. But when it comes to the other cause areas, there’s more of a case [for growing quickly].
With animals, just don’t torture them. The conclusion is a lot easier, I think. And global health and development is between the two. There were efforts in the 1970s to decrease global poverty, and they were enormous, enormous failures because people didn’t really know what they were doing.
So my view is stay small and get better. Eventually, over time, we’re aiming for [a much larger movement].
Habiba: When we’re in this small stage, there’s maybe a risk that the types of people who are in the EA community might be more homogenous. So the question that we’ve had is “Do you feel that the EA community is currently adequately diverse or well-represented? If not, which groups or views do you think are least represented?”
Will: I think the answer’s clearly no, and that’s true in terms of both demographic and epistemic diversity. [The lack of] demographic diversity is clear just from the statistics. There’s underrepresentation of women, enormous underrepresentation of people of color, and underrepresentation of people from other countries. And if we’re thinking of these very long-term aims, what will the biggest economies be in 2100? China and India. COVID is making us think that maybe the grip of the US’s hegemony in the world is weakening a bit.
Will: [Turning to epistemic diversity], I’ve mentioned history. It took me ages to find a historian I felt I could communicate well with, because there was no one I knew in the EA movement who was a historian. And I’ve learned an enormous amount [from studying history] that has shaped my worldview in quite a significant way. We’re very blind from that perspective. I think there are plausibly many more [blind spots] of that nature.
Habiba: As in, there are quite a lot more areas where we need to find more deep experts?
Will: Yes, I would say so. I even find that to be the case in economics, especially since I’ve realized it’s wrong to think of economics as a field rather than just a collection of different disciplines. If five years ago we’d had real experts in growth models in the EA movement, or people who knew the persistence literature, I think we could have made a lot of progress more quickly.
Habiba: Do you have any best guesses for what those “unknowns” might be — those other areas where we could be benefiting from dipping into this deep literature?
Will: I found the study of history even more valuable than I was expecting. And my answer will be very biased by the things I’m thinking about. For example, I [can see the benefit of having more] specialists in mechanism design. That’s an area I’m quite excited about. The economist’s attitude tends to be less [along the lines of] “there’s this problem, so you need to do this thing,” [and more about] how to structure incentives such that people acting in rational self-interest act in a good way.
The field of mechanism design is very hot in economics. And there’s an enormous challenge here if you care about the long-run future: structuring incentives for people to take more long-term-oriented actions. For related reasons, I [think the movement could benefit from] more people in political science, and generally across the social sciences.
I increasingly think growth theorists [could be helpful partners] as well. [Their knowledge] is most relevant to questions around AI and what the result will be over the next 100 years — whether it’s an intelligence explosion or something that’s much broader across society. There are now growth models on this. Bill Nordhaus, who is a Nobel Laureate, wrote a paper on the topic, as did Charles Jones, one of the world’s best growth theorists [see Jones’s paper here]. This work was not a result of EA; 100 years of formal modeling exists and [can inform the large amount of work still to be done, and to which the EA movement can contribute].
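[Editor’s note: here is a toy sketch (not the actual Nordhaus or Jones models) of the kind of feedback these growth papers formalize; the parameter phi is our illustrative stand-in for how strongly technology feeds back into its own production.]

```python
# A toy sketch (not the actual Nordhaus or Jones models): many AI-and-growth
# papers ask whether technology A feeds back into its own production strongly
# enough. With dA/dt = A**phi, growth is steady exponential at phi = 1 but
# accelerates toward a finite-time "explosion" when phi > 1.

def simulate(phi, steps=20, dt=0.1):
    """Euler-integrate dA/dt = A**phi starting from A = 1; return the path."""
    A = 1.0
    path = []
    for _ in range(steps):
        A += dt * A ** phi  # ideas production feeds back into itself
        path.append(A)
    return path

steady = simulate(phi=1.0)     # growth rate stays roughly constant
explosive = simulate(phi=1.5)  # growth rate itself keeps rising
print(round(steady[-1], 1), round(explosive[-1], 1))  # ~6.7 vs ~53
```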
Habiba: We don’t have to reinvent the wheel in these spaces. We can just inherit the thinking.
Will: Yes, exactly.
Habiba: Now for the top-voted question: What mistakes do you think most EAs, or people in the effective altruism community, are making?
Will: It’s a great question. I think that over-deference is an issue. Of course, you can’t think of everything, and sometimes you just need to make a decision and move on. But I do worry that people think, “There’s this core of researchers, and they’ve figured out X, Y, and Z. So I’m going to go do X, Y, and Z.”
You then talk to those core people and they say, “Well, I don’t really know. I could change my [opinion] tomorrow. I need to look into it a lot more.”
That doesn’t necessarily mean that everyone should engage [with their work] in a shallow way. I do think it means more people should just be of the mind that they want to figure out the basics, not take anything for granted, and not defer to [others’ opinions].
At the moment, I think there’s a very small number of people doing that, even though I regard it as the core of what EA is about. And it does mean that if you’re choosing a career on the basis of longtermist aims, you must accept that you’re making a bet. [What we think now] could well get overturned; we could have very different views in 10 years’ time.
I really just don’t want people to make decisions on the basis of feeling that [current views] are more robust than they are.
Habiba: So, for people who are thinking about what to study, are you in favor of erring more on the side of just taking jobs (if they’re not going to be the kind of person who becomes an academic researcher)?
[Will puts his hands up to deflect a falling object.] Are you okay?
Will: A massive door is falling on me. Okay, I’m still alive. You know, I nearly got killed by a flock of gulls the other day. It would have been a terrible way for me to go.
Habiba: This feels like we’ve got you on a game show where we ask you questions about philosophy whilst you’re being assaulted by the elements and bits of your house. But I assure you, we haven’t set this up. [Will and Habiba laugh.]
The question was: For people who are choosing what to do with their careers, some can specialize in becoming researchers focusing on core questions. But given some of the ideas you’ve shared — for example, on how we don’t really know what the definitive answers are yet — are you in favor of building generalist skills and being ready to pivot?
Will: I think that does [make sense] in theory — but in practice, often the best thing to do is just to start doing something really valuable.
Different career paths have different time lengths associated with them. If you’re in the mode of thinking, “Okay, we’ve figured everything out, we just need to implement these ideas,” you might find the option of becoming a politician (which could be 20 or 30 years away) or an academic (which would require doing a PhD and take quite a long time) less appealing.
But if you’re in the mode of thinking, “We’re still working things out,” options with longer time periods become more promising again. I think you should expect most of your impact not to come immediately, but later on.
I also think the net effect of an early emphasis on building general, all-purpose career capital for the whole community can be bad. There’s a psychological element of this that I hadn’t appreciated before. Suppose you go into a particular field and become a specialist, and it turns out to just not be very useful. That sucks on an individual level. But it may be exactly what the community needs, because having five specialists — two of whom turn out to be super-useful — is way better than having five generalists.
That’s tough. And I think the solution to it, as a community, is perhaps having more rewards and status heaped on process and attempts rather than outcomes. Otherwise, you’re going to incentivize [everyone to make the safe, generalist choice].
Habiba: Yeah, I think about this a lot, because many of the things that we’ve [identified] as being promising may have a low probability of actually turning out to be important in our lifetimes. I think we have to be okay with that fact. For example, say there’s some sort of AI winter, and [that field is] put on pause for a bit. We should still be okay with the fact that, ex ante, it was the right decision for us to focus some effort on these things.
Will: Yes, exactly.
Habiba: We’re [reaching the end of this Q&A], so I’d like to cover the last section. It’s about your views on whether certain ideas are overrated, underrated, or appropriately rated.
[The first one is] earning to give as an impactful career choice.
Will: Underrated, I would say, and I’m partly to blame. I don’t know — sometimes it’s so hard to be sufficiently granular with the messaging. [One moment the message is] “80,000 Hours is all about earning to give!” and then it’s “No, no, 80,000 Hours isn’t at all about earning to give!”
But I think you can appreciate that it’s not just about the amount of good that money can do, or only about opportunities for impact now. It’s more like, “What’s the best opportunity for impact over the course of your entire career?” That framing seems a lot better.
Habiba: What about this one: efforts to generally improve people’s reasoning and rationality?
Will: I think maybe that one is extremely context-specific; it depends on the project. Maybe it’s a bit overrated. It’s just really hard to do. I don’t know if we have good evidence that you can actually make people [become more rational] who wouldn’t otherwise pick it up. There’s obviously loads of content that’s super-important, and you need to pick up those skills along the way, or read up on them.
But if you’re going to do that rather than perhaps learn subject-specific skills, it’s not obvious to me that the former’s better. Again, it’s maybe a bit of a generalist-versus-specialist distinction.
Habiba: Spending time and money to increase your personal productivity: overrated or underrated?
Will: Again, I think it depends on the circumstances, but probably underrated, especially for young people. Many don’t think about this correctly. Let’s say you’re a student at a great university with an illustrious career [ahead of you]. You choose to work part-time, rather than just being more frugal in order to do well at school — even though the returns early on to your own human capital are really high. It’s always easy to focus on the upfront costs as opposed to the long-term gains.
And certainly, when I look back at my life and think about things that I’m really happy I did, often they involved upfront costs. For example, I’m really happy I learned to meditate. I’m really happy I decided to work on my mental health. Those just keep paying off over time.
The caveat I’d make is for the unusual position where you think, “How well I do at this thing right now is going to be a huge determinant of how well my entire career goes.” Often, that can be the case early on. Focusing on that, in a sense, can increase your productivity.
Habiba: Keeping up-to-date with current affairs: overrated or underrated?
Will: I think overrated. I think it’s often just bad for us. I think of it as more of a compulsion than anything. The news media are just not incentivized to give you an accurate picture of the world. Every hour that you spend on the BBC, you could instead spend looking closely at data, or reading history, and getting a much richer picture of what the world is like.
Habiba: Having in-person meetings or conversations?
Will: I feel like they’re great, and are highly rated, so I’m going to say they’re appropriately rated.
Habiba: Efforts to build the EA community outside of the English-speaking world?
Will: Probably underrated. Again, it’s just a very predictable fact that in the next century, the non-English-speaking world is going to get more and more powerful relative to the English-speaking world. You also get a variety of diversity benefits [from expanding the community in this way].
Plus, people worry a lot about coordination problems; the biggest ones are between countries. Perhaps this is a way in which the EA community could help, because it’s like a nation that straddles multiple nations.
Habiba: Setting up new EA projects or organizations?
Will: Underrated. I heard, I think from Jade Leung (and I can’t remember if it was from interviews she’d been doing or from a quantitative survey), that there’s a very common view in the EA community that it’s [best not] to set up anything new, because if it already exists, then it’s good — and if not, then it’s probably bad.
This [utterly goes] against what we think. Sure, don’t set up things that are really harmful. Avoid poisoning the well and stepping on others’ toes. But [there’s room for] enormous numbers of new, great projects. I would love to see a lot more of that. Again, most projects fail. It’s another situation where we want to be rewarding process, effort, and attempts, rather than necessarily the outcome.
Habiba: Yes. And lastly, being famous: overrated or underrated?
Will: On the basis of famous people I know, I’m going to say it’s overrated. If you look at Hollywood actors, the gains from having more money are really small, I think, and the negatives are terrible. You have to move house regularly, and you get stalkers.
I’m not saying we should all shed a tear for Hollywood celebrities or other famous people. But I wouldn’t want to be them.
Habiba: Nice. Here’s the very last question: What’s most important or interesting to you outside of effective altruism?
Will: Great question. I was actually thinking I wanted to give a whole talk on how great it is to develop multiple identities outside of EA (not as in a personality disorder).
Habiba: I’m glad you clarified. I wasn’t sure if you meant superhero identities or something.
Will: I think my number-one outside interest at the moment is music. Over the course of the last five years or so, I’ve re-cultivated it. Prior to EA, I was really into music; I was in a couple of bands. EA started, and I thought, “Well, that’s useless. I should stop.” Now, it’s back again — both listening and making music. It’s really enriching my life. It’s this constant source of joy. You can listen to any of the best music, at the most amazing level of sound quality, at any time.
It’s one of the great things that I think the past has gifted us, and that I feel very thankful for.
Habiba: Lovely. We’re going to wrap it up there. Thank you so much, Will, for answering all of those questions.
Q&A with Will MacAskill
Link post
William MacAskill is an associate professor in philosophy at Oxford University. He was educated at Cambridge, Princeton, and Oxford, and is one of the progenitors of the effective altruism movement. His book on the topic, Doing Good Better, was published by Penguin Random House in 2015. He is the co-founder of three non-profits in the effective altruism movement: Giving What We Can, 80,000 Hours, and the Centre for Effective Altruism, and is also a research fellow at the Global Priorities Institute.
We’ve lightly edited this Q&A for clarity. You can also watch it on YouTube and read it on effectivealtruism.org.
The Talk
Habiba Islam (Moderator): Hello, and welcome to this live Q&A with Professor Will MacAskill at EAGxVirtual.
I’m Habiba Islam. I’ll be emceeing this session. I’ll start with a brief intro, and then we’ll dive straight into questions.
Will is an associate professor in philosophy at Oxford University and the author of Doing Good Better. He was educated at Cambridge, Princeton, and Oxford, and is one of the co-founders of the effective altruism movement. In fact, he has co-founded three nonprofits based on effective altruist principles: Giving What We Can, 80,000 Hours, and the Centre for Effective Altruism. He’s also a senior research fellow at the Global Priorities Institute and a director of the Forethought Foundation. [...]
I want to start off, Will, by just talking a bit about what you’re currently working on. I understand that you’re working on a book about longtermism. Would you like to tell us a bit about what that will cover?
Will: Terrific. This is what I’m spending almost all my time on at the moment. It presents the case for longtermism — the moral reasons why we should care about, and how we can influence, the very long-run future. We shouldn’t discount future generations; it’s of tremendous moral importance that we try to make the long-run future better.
Then, [the book will provide] a long exploration answering the question “Given [the case for longtermism], what follows? What should we do?” There will be chapters on values changing over time, AI, the idea of extinction, and civilizational collapse — the idea of economic stagnation (i.e. perhaps we never reach a very technologically advanced state, even though there’s no major catastrophe).
I also have chapters on the value of the future, population ethics, whether we should be investing now or moving resources later, what longtermist society looks like, and how best you, as an individual, can make sure the long-run future goes well.
Habiba: When are you expecting that book to come out? And who’s the key audience?
Will: So it will probably be a while. The initial deadline I had in my head for submitting the manuscript was March 20, 2021. Then I did a bit of reference class forecasting, and I think that I’ll more likely finish it sometime in 2022, and then it would come out six to 12 months after that. So you’re going to have to hold on a little while.
In the meantime, read Toby Ord’s book, The Precipice.
Habiba: I’m also getting a lot of advance tastes of the content of the book, because you did a tour and have been testing the ideas as you go.
Will: Exactly. And if you go on the Global Priorities website, you can see a talk I gave for Steven Pinker’s class, which you’ll see is kind of a teaser for the content the book will cover.
Habiba: So how are you finding the process of writing the book this time different from writing Doing Good Better?
Will: It’s really different in a lot of ways. My guess is it’s something like 10 to 20 times the work. And that’s for a few reasons. First, it’s just going to be a physically bigger book. There’s no way of getting around that.
Also, you asked about the audience: I’m aiming for it to be among the small number of books that can be both widely cited academically, but also accessible enough to be widely read. I think Animal Liberation, The Better Angels of Our Nature, and Guns, Germs, and Steel are in this category.
And then, as I’ve been writing it, it has ended up involving more novel research. I’m trying to [approach the book by] thinking, “Okay, I want to understand this stuff myself.” And more often than not, I was coming up with views that perhaps haven’t been defended before. [That makes it] a much bigger task.
However, we also have a lot more resources. There are a lot of amazing people [helping]. I have two full-time research assistants [Aron Vallinder and Luisa Rodriguez] working on it. And in many cases, there’s some issue I want to know more about, and I can just contact someone; I have a small army of contacts who are working on specific research topics. It’s so nice that we’re now in a situation where that’s possible.
Habiba: Yes. A lot of people have specific questions about particular research issues or issues that you’ve considered in the past. I want to dive into your thoughts on a few of those. Before we get to the first one, do you have a preferred term for “hingeyness” at the moment [in reference to the extent to which we might be living at the “hinge of history” — i.e. the most influential time in human history]?
Will: I did and I think I’ve forgotten it.
Habiba: I’m going to use “hingeyness.”
Will: We can go for “hingeyness” — that’s fine.
Habiba: [Where do you stand] on the debate around how likely it is that we’re currently at a “hinge of history”? Have you changed your opinion since writing your EA Forum post on this topic?
Will: I haven’t changed my position very much. In that discussion, one thing that I think was a bit of a shame was how people focused a lot on the “ur prior” [hypothetical prior] of whether it’s right to use what is essentially a self-sampling assumption — or the principle of indifference — in terms of whether you’re the most important person out of all people who ever lived.
But I think more of the meat is in identifying particular things about the current world that are very distinctive — in particular, the growth rate, which I covered in the post. The current rate is [so fast] that it’s not very plausible that it’s sustainable for tens of thousands of years. That’s one way in which the current time is very distinctive.
There are some arguments that you could use to move from there to say, “We’re going faster through the space of possible technologies, so for that reason, we’re more likely to be at a very ‘hingey’ moment.”
I think I have, recently, moved somewhat further in that direction. I think that’s a pretty good argument — the strongest one [on the topic]. It would be good to do more work on it. And I’ve become somewhat more sympathetic recently to the idea that we might have fairly fast growth rates over the coming century. Three percent growth at the moment is already very fast, but maybe it will increase to 5%, 10%, or even more. And that’s very fast. That has moved my opinion a little bit.
The second thing I would emphasize, though, is that the “hingeyness” of the current century is quite irrelevant to assessing how big some of the claims being made are. But it’s often the most relevant [criterion] for action-related purposes. [In those cases, the question is] “How ‘hingey’ is it now, as in, this year?” If you’re a longtermist, should you donate now? Should you be spending your time [on a certain cause] now, versus in a century’s time or a decade’s time? Should you invest to give later? That becomes the obvious case for thinking that now is not a very “hingey” time. I wish I’d emphasized that a little bit more.
Habiba: Yes. Hearing Phil Trammell talk about this on the 80,000 Hours podcast, I was struck by how it seems very unlikely that the best thing to do is to donate your 10% at the end of the month — that is not necessarily the most effective moment to be donating, or the most effective thing to be doing with that money compared to all of the other things that you could do, in that moment.
Will: Think about [the timeframe of] your whole life. The amount of influence you can have at different moments in your life goes up and down. Most of the time you’re just waiting. And then you pounce and invest everything. That’s especially true if you’re a donor with a very small amount of resources compared to the world at large.
Habiba: Other questions about specific kinds of cause questions are about climate change. Has your opinion on that topic changed?
Will: Yes, it has. You’re making me feel guilty right now because I have this very long blog post that I have not yet [published] about climate change from a longtermist perspective. Digging into climate change [for my book] definitely made me feel more concerned about it for a few reasons:
* I think the standard view that climate change isn’t an existential risk is a bit confused. It’s kind of like saying, “Failure to exercise is not a risk of death.” We should clearly be thinking about climate change as an existential risk factor. There is a meta-analysis suggesting that climate change is a significant aggravator of war. I’m somewhat skeptical of it; I probably believe the signs, but not the magnitude, of the effect.
* There are also more recent studies suggesting that climate change is not just based on the output at the time; it has not only a level effect, but also a growth-rate effect.
* I’ve become more worried recently about this idea of long-run economic stagnation, which has received almost no attention. And if you think about humanity, we start off as apes, and then we aim for this extremely technologically advanced society. One way we can fail to get there is just by killing ourselves. But another way is by failing to grow. And climate change could certainly be a contributor to that, because there is recent evidence seeming to suggest that climate change affects the growth rate.
* [Climate change interventions are] just so robustly good, especially when it comes to what Founders Pledge typically champions funding the most: clean tech. Renewables, super hot rock geothermal, and other sorts of clean energy technologies are really good in a lot of worlds, over the very long term — and we have very good evidence to think that. A lot of the other stuff we’re doing is much more speculative. So I’ve started to view [working on climate change] as the GiveDirectly of longtermist interventions. It’s a fairly safe option.
* Given the [efficacy of climate change interventions], I think the neglect of this argument is enormous. I think the recent COVID pandemic shows the [danger] of neglecting AI and biothreats in a very visceral way. While that still holds, my attitude if someone is working on climate change is: “This is amazing. This is awesome.” Maybe there are some [causes that are] even better, like working on bed nets or vaccines within global development. They’re both making things a lot better, and there may be some things that are even better.
I’m getting rained on a lot, so I’m going inside. [Laughs.]
Habiba: [Laughs and waits.] A few people have asked about your previous studies of moral certainty and normative uncertainty. What are some important takeaways from that work that are most relevant to the EA movement?
Will: I think that the case for longtermism looks really good under [the lens of] normative uncertainty. I think it may change what you care about from a longtermist perspective. I think it somewhat strengthens the case for trajectory change over extinction-risk reduction. (By “trajectory change,” I mean making the future better, conditional on survival; “extinction-risk reduction” is just whether we die off or not.)
Even better would be a category measuring whether we get to a high level of technological advancement. That would slightly strengthen [the case] for us to care more about the worst case outcomes [of failing to make the future better], rather than being neutral on achieving good or bad outcomes. Also, we might [be more inclined to] view the issue as one related to population ethics, which might cause us to care more about ensuring that we have a good future.
But I think the really big takeaway [centers on] where we should try to aim. Our natural approach is to think, “I have moral view X, so that’s the sort of future I want to get to.” But [what if I take] the model of uncertainty really seriously, and appreciate how far away we are from the scientific truth? Consider the fundamental laws of physics and how much work has gone into that field. A tiny fraction of that has gone into moral philosophy. It’d be a really remarkable thing if people reached the correct answer.
In that case, what you want to do is aim for an intermediate goal. In general, you want to try to build what I call a “morally exploratory society” in which innovators generate tons of ideas, yet the structure of the society is such that good arguments and reasons win out.
I think that’s very hard to do. Think of all of the other pressures that affect civilization. For example, which set of views has the most military power? What culture or set of views cares most about going to space? There are all sorts of reasons why society might not converge on the best moral view. I think that’s the most important takeaway.
Habiba: That’s actually very closely related to the next question: Is long reflection something that humanity can plausibly do? Human expansion history doesn’t seem to follow philosophy.
Will: Yes, well, something I’ve been learning a lot about recently is the abolition of slavery. I think it is a case study of [long reflection] working. I’m not at all claiming that this is common; I don’t think that long reflection is very likely to be the default. We must try to do it. But in the early 18th century, there was a proliferation of different religious groups, which were like small experiments in moral thinking.
The Quakers, in particular, were a melting pot of moral radicalism. There were people who were hardcore about their moral views. In the book I’ll talk about Benjamin Lay and how he opposed the death penalty.
[Video freezes and a portion of the explanation is inaudible.]
[This was the] first time the idea of abolition really [took hold]. Slavery had been utterly persistent. [Video freezes.] [Then people began] to think, “Slave owning is bad. It corrupts the soul.”
But the idea that it should be abolished, and there should be no slavery in the world, really didn’t happen until the early 18th century because of this greater liberalism [and flourishing] of ideas. Then, the abolitionists started to develop arguments. Over the course of the next 100 years, they managed to convince the British elite and the British public, who controlled most of the world at the time, to act on the basis of those arguments. Britain did that, and it’s an astonishing thing, because Britain went from being among the most barbaric of the slave-trading societies, to [reversing course] over several decades.
The country took an enormous economic hit in order to abolish slavery. Then, they bribed other nations to abolish slavery as well. They set up a force in the Navy to police and capture slave ships and set the slaves free. It’s just an incredible example of those in power being convinced by moral arguments, changing their ways, and acting essentially against their natural self-interests.
So there’s at least one case in history.
It is interesting to reflect on the ways in which that might not have happened. If there hadn’t been a diversity of moral perspectives and a liberal society, it wouldn’t have been possible. I think if the British hadn’t had such a hegemony in the world at the time — if there’d been more competition between them and the other colonial powers — maybe the pain of taking that economic hit would have been worse.
I think it’s super hard [to take a long-term, moral view]. But I don’t think it’s impossible.
Habiba: Yes. With that question, we had some slight problems with your video, but I think it’s mostly working, so we’re going to soldier on.
One last question around these different research questions and longtermism: Do you have a view on how we should distribute resources between mitigating existential risks and uncovering or trying to mitigate risks of suffering?
Will: I don’t really use those categorizations; I find “existential risk,” in particular, a bit vague as a term. I prefer to think about the probability that we become a very technologically advanced civilization, and then ask, “How good or bad is that civilization?”
In that sense, I see suffering risks or s-risks as a type of existential risk [related to] trajectory change. But an s-risk could also just be anything that includes the value of the future. My personal view at the moment is to be much more bullish on trajectory change compared to the risk of extinction. And the main reason for that is the more I look into it, the harder it seems to kill everybody. That seems extremely difficult to do. Similarly, the more I look into civilization, the more robust it seems. The idea of civilizational collapse seems less likely than I would have thought at the outset [of my research].
On the other hand, things like changes in values are much more persistent. There’s a whole literature on this. Values can persist over many, many centuries, for thousands of years. In a way that [makes the future] predictable, because if you tell me it’s the year 3000 and the only bit of information you give me is that slave owning is legal, [I’ll envision] a dystopia. I don’t need to know anything else about the world. We can have remarkable prescience based on values.
Therefore, I’m much more interested in the question of where we go, rather than whether we get there. One thing I should say, to be clear, is that I put AI in this “trajectory change” category. I do that for two reasons:
First, even if you hold the Bostrom-Yudkowsky view presented with their “paperclip” scenarios, you still have an enormous civilization stretching out for many billions of years. It’s just a very weird and alien one. So, if you’re changing it from paperclips to something else, you’re not changing the size of civilization; you’re changing how good it is.
Habiba: Even if it’s just one superintelligence in a universe full of paper clips — that still feels like the same size civilization to you?
Will: I think it would be very unlikely to be one superintelligence, because you’d have all of these little paperclip-making bots, and they would go all kinds of places. I also think it’s extremely unlikely that it would be paperclips; it’s going to be something else. It might be the lock-in of a particular set of people’s views — something for which the values are a bit wrong, but they’re locked in and persist for a very long time.
I tend to find that framing clearer in my head. It’s also relevant if you think about risks that could stop the possibility of other life: most of the extinction risks we think about wouldn’t prevent other life from evolving on Earth, and potentially elsewhere, whereas trajectory changes like AI do preclude that possibility.
Sorry — there’s a lot more [to say on this question], but the more I answer it, the more I’m on the “trajectory change” side.
Habiba: I’m going to switch into questions that are more about the effective altruism community and movement building. What, in your mind, would success look like for the EA movement? If you had to imagine what the EA movement could be like in 10 years’ time, how would you describe it?
Will: I might extend the horizon by a bit: What’s it like in a century’s time, or a few centuries’ time? Again, as we’ve been learning, our cultural norms are extremely fragile. They’re very impermanent and highly contingent.
[I can envision a world where you’re considered] a jerk if you don’t make helping others a very significant part of your life and think very carefully about how to do that. I think that’s the sort of world we want to get to: a world in which people don’t even know the term “EA,” or don’t think of it as a novel, interesting thing. They’d just think of it as common sense.
That’s what I think about for the long term.
Habiba: So, that’s success in 100 years.
Will: Yes, that’s success in 100 years. I think the current focus for the EA movement over the next 10 years is to figure out the defining ideas. It could be the case that these core EA ideas become extremely popular, but some of them are totally wrong.
Look at environmentalism. Let’s say you’re keen on the stated principles of environmentalism. Then you’re asked, “What about nuclear power?”, and the expected answer is that you’re against it, even though nuclear power is great from an environmental perspective. Or it could be that you used to think the planet was overpopulated, and now you’ve decided those arguments are really bad.
But ideas are very sticky. And bad ideas [that a movement develops] early on can stay indefinitely. Therefore, if you have really ambitious aims for the movement as a whole, over the course of the next 10 years you want to make sure you haven’t latched onto some bad early ideas.
I like the startup analogy. There’s a period where you’re developing the product. Then there’s a period where you’re just marketing it. And you really want to get it just so.
Habiba: You want to clear the hurdle of the development stage.
Will: Yes. This is consistent with my views on “hingeyness.” I think that it’s much more important to get it right in the coming years than it is to go really fast.
Habiba: I do find it somewhat surprising that you think that ideas have so much longevity, and that things can stick for such long time periods. I’m wondering if you have any suggested readings that people can look into if they’re similarly surprised by that claim.
Will: Okay, terrific. I kind of forget that I spent the last two years trying to learn a lot more history. And that’s been one of the biggest updates, actually. This gives me a bit of a kick in the butt to create an overview of the persistence literature in economics.
I examined things like how some agricultural societies adopted plow agriculture, while others relied on the hoe and digging stick. Use of the plow requires a lot of upper-body strength and grip strength. Therefore, societies that relied on the plow had men in the fields and women working in the home. In societies that used the hoe or digging stick, the work was much more evenly distributed.
Then you can look at those cultures 1,000 years later and examine the gender norms there around female labor force participation, firm ownership, and participation in politics. You can even see these norms among second-generation immigrants from those cultures in the United States. And you find that the cultures that adopted the plow have less equal gender norms.
[I’m realizing that] we should publish this report so that the world can see it.
Habiba: It is staggering that things can last that long.
Will: It’s wild. The whole literature is wild. And this is one of the really nice things about being able to commission research — because come on, this all seems too interesting to be true. So I commissioned Jaime Sevilla, who’s a graduate student, to look at this literature and [determine whether] it checks out. He has just begun this project. He went into it feeling quite skeptical, but now he’s saying, “Actually, it holds up. It’s looking good.”
You might search for Nathan Nunn’s articles on persistence. Alternatively, just wait for this report.
Habiba: That sounds good.
I think your analogy of [the EA movement] being in the product development stage maybe answered some of these questions, but I want to ask the second top-rated question: Should EA be a mass movement in the way that animal rights or the feminist movement is — i.e., having an end goal where everyone is in the movement, while risking resistance?
Should we be doing that right now? Should we be aiming to grow big?
Will: I think my view there is ultimately yes, but in the short run, no. I think longtermism is the key. I think of longtermism for EA as being similar [in terms of its development] to where socialism was in the mid-19th century.
In that case, the core moral insight was that workers’ rights really matter. And then the question became: What do you want to do about that? It’s very, very non-obvious. You might think that we need complete state control, and the state decides how much tax revenue is sent where. And it turns out that works really badly.
But the social democracy of the Nordic countries works really well. There, you have low regulation and high redistribution. That was extremely non-obvious from the perspective of the mid-19th century. But it was really important to figure that out before turning societies into Communist states.
The question with longtermism is whether benefiting the very long-run future is easier or harder than benefiting workers. I think it’s harder. And in looking at the history of ideas, again, it can take many decades to actually work through ideas before we figure out what’s going on. If you go too fast, then perhaps you’re just spreading really bad ideas. But when it comes to the other cause areas, there’s more of a case [for growing fast].
With animals, [the message is] just don’t torture them. The conclusion is a lot easier, I think. And global health and development sits between the two. There were efforts in the 1970s to decrease global poverty, and they were enormous, enormous failures, because people didn’t really know what they were doing.
So my view is stay small and get better. Eventually, over time, we’re aiming for [a much larger movement].
Habiba: When we’re in this small stage, there’s maybe a risk that the types of people who are in the EA community might be more homogenous. So the question that we’ve had is “Do you feel that the EA community is currently adequately diverse or well-represented? If not, which groups or views do you think are least represented?”
Will: I think the answer’s clearly no, and that’s true in terms of both demographic and epistemic diversity. [The lack of] demographic diversity is clear just from the statistics: there’s underrepresentation of women, enormous underrepresentation of people of color, and underrepresentation of people from other countries. And if we’re thinking about these very long-term aims, what will the biggest economies be in 2100? China and India. COVID is making us think that maybe the US’s hegemonic grip on the world is weakening a bit.
[Turning to epistemic diversity], I’ve mentioned history. It took me ages to find a historian I felt I could communicate well with, and [from studying history] I’ve learned an enormous amount that has shaped my worldview in quite a significant way. [I took this on because] I knew of no one in the EA movement who was a historian. We’re very blind from that perspective, and I think there are plausibly many more [blind spots] of that nature.
Habiba: As in, there are quite a lot more areas where we need to find more deep experts?
Will: Yes, I would say so. I even find that to be the case in economics, especially since I’ve realized it’s wrong to think of economics as a field rather than just a collection of different disciplines. If five years ago we’d had real experts in growth models in the EA movement, or people who knew the persistence literature, I think we could have made a lot of progress more quickly.
Habiba: Do you have any best guesses for what those “unknowns” might be — those other areas where we could be benefiting from dipping into this deep literature?
Will: I found the study of history even more valuable than I was expecting, and my answer will be very biased by the things I’m thinking about. For example, I [can see the benefit of having more] specialists in mechanism design. That’s an area I’m quite excited about. The typical economist’s attitude tends to be less [along the lines of] “there’s this problem, so you need to do this thing.” [Specialists in mechanism design instead think about] how to structure incentives such that people acting in rational self-interest act in a good way.
The field of mechanism design is very hot in economics. And if you care about the long-run future, structuring incentives so that people take more long-term-oriented actions is an enormous challenge. For that reason, I belatedly [think the movement could benefit from] more people in political science, and generally across the social sciences.
I increasingly think growth theorists [could be helpful partners] as well. [Their knowledge] is most relevant to questions around AI and what the outcome will be over the next 100 years, whether that’s an intelligence explosion or something much broader across society. There are now growth models on this. Bill Nordhaus, who is a Nobel laureate, wrote a paper on the topic, as did Charles Jones, one of the world’s best growth theorists [see Jones’s paper here]. This work was not a result of EA; 100 years of formal modeling exists and [can inform the large amount of work still to be done, and to which the EA movement can contribute].
Habiba: We don’t have to reinvent the wheel in these spaces. We can just inherit the thinking.
Will: Yes, exactly.
Habiba: Now for the top-voted question: What mistakes do you think most EAs, or people in the effective altruism community, are making?
Will: It’s a great question. I think that over-deference is an issue. Of course, you can’t think of everything, and sometimes you just need to make a decision and move on. But I do worry that people think, “There’s this core of researchers, and they’ve figured out X, Y, and Z. So I’m going to go do X, Y, and Z.”
You then talk to those core people and they say, “Well, I don’t really know. I could change my [opinion] tomorrow. I need to look into it a lot more.”
That doesn’t necessarily mean no one should engage [with their work] in a shallow way. But I do think more people should be of the mind that they want to figure out the basics, not take anything for granted, and not simply defer to [others’ opinions].
At the moment, I think there’s a very small number of people doing that, even though I regard it as the core of what EA is about. And it does mean that if you’re choosing a career on the basis of longtermist aims, you must accept that you’re making a bet. [What we think now] could well get overturned; we could have very different views in 10 years’ time.
I really just don’t want people to make decisions on the basis of feeling that [current views] are more robust than they are.
Habiba: So, for people who are thinking about what to study, are you in favor of them erring more on the side of just taking jobs (if they’re not going to be the kind of person who becomes an academic researcher)?
[Will puts his hands up to deflect a falling object.] Are you okay?
Will: A massive door is falling on me. Okay, I’m still alive. You know, I nearly got killed by a flock of gulls the other day. It would have been a terrible way for me to go.
Habiba: This feels like we’ve got you on a game show where we ask you questions about philosophy whilst you’re being assaulted by the elements and bits of your house. But I assure you, we haven’t set this up. [Will and Habiba laugh.]
The question was: For people who are choosing what to do with their careers, some can specialize in becoming researchers focused on core questions. But given some of the ideas you’ve shared — for example, that we don’t really know what the definitive answers are yet — are you in favor of building generalist skills and being ready to pivot?
Will: I think that does [make sense] in theory — but in practice, often the best thing to do is just to start doing something really valuable.
Different career paths have different time horizons associated with them. If you’re in the mode of thinking, “Okay, we’ve figured everything out; we just need to implement these ideas,” you might find the option of becoming a politician (where the payoff could be 20 or 30 years away) or an academic (which would require doing a PhD and take quite a long time) less appealing.
But if you’re in the mode of thinking, “We’re still working things out,” options with longer time periods become more promising again. I think you should expect most of your impact not to come immediately, but later on.
I also think the net effect of an early emphasis on building general, all-purpose career capital for the whole community can be bad. There’s a psychological element to this that I hadn’t appreciated before. Suppose you go into a particular field and become a specialist, and it turns out to just not be very useful. That sucks on an individual level. But it’s exactly what the community needs, because having five specialists — two of whom turn out to be super-useful — is way better than having five generalists.
That’s tough. And I think the solution to it, as a community, is perhaps having more rewards and status heaped on process and attempts rather than outcomes. Otherwise, you’re going to incentivize [people to optimize only for safe outcomes].
Habiba: Yeah, I think about this a lot, because many of the things that we’ve [identified] as being promising may have a low probability of actually turning out to be important in our lifetimes. I think we have to be okay with that fact. For example, say there’s some sort of AI winter, and [that field is] put on pause for a bit. We should still be okay with the fact that, ex ante, it was the right decision for us to focus some effort on these things.
Will: Yes, exactly.
Habiba: We’re [reaching the end of this Q&A], so I’d like to cover the last section. It’s about your views on whether certain ideas are overrated, underrated, or appropriately rated.
[The first one is] earning to give as an impactful career choice.
Will: Underrated, I would say, and I’m partly to blame. I don’t know — sometimes it’s so hard to be sufficiently granular with the messaging. [One moment the message is] “80,000 Hours is all about earning to give!” and then it’s “No, no, 80,000 Hours isn’t at all about earning to give!”
But I think you can appreciate the amount of good that money can do without thinking that you must _only_ pursue opportunities for impact now. It’s more like, “What’s the best opportunity for impact over the course of your entire career?” That framing seems a lot better.
Habiba: What about this one: efforts to generally improve people’s reasoning and rationality?
Will: I think maybe that one is extremely context-specific; it depends on the project. Maybe it’s a bit overrated. It’s just really hard to do: I don’t know that we have good evidence that you can actually make people [become more rational] who wouldn’t otherwise pick those skills up. Obviously, loads of context is super-important, and you need to pick up those skills along the way, or read up on them.
But if you’re going to do that rather than perhaps learn subject-specific skills, it’s not obvious to me that the former’s better. Again, it’s maybe a bit of a generalist-versus-specialist distinction.
Habiba: Spending time and money to increase your personal productivity: overrated or underrated?
Will: Again, I think it depends on the circumstances, but probably underrated, especially for young people. Many don’t think about this correctly. Let’s say you’re a student at a great university with an illustrious career [ahead of you], and you choose to work part-time rather than just being more frugal so that you can focus on doing well at school. Because the returns early on to your own human capital are really high, [that’s likely a mistake]. It’s always easy to focus on the upfront costs as opposed to the long-term gains.
And certainly, when I look back at my life and think about the things I’m really happy I did, they often involved upfront costs. For example, I’m really happy I learned to meditate, and I’m really happy I decided to work on my mental health. Those things just keep paying off over time.
The caveat I’d make is for the unusual position where you think, “I actually know how well I’m doing at this thing, and doing it right now is going to be a huge determinant of how well my entire career goes.” That can often be the case early on, and focusing on it, in a sense, is what increases your productivity.
Habiba: Keeping up-to-date with current affairs: overrated or underrated?
Will: I think overrated. It’s often just bad for us; I think of it as more of a compulsion. The news media are just not incentivized to give you an accurate picture of the world. Every hour you spend on the BBC, you could instead spend looking closely at the data, or reading history, and get a much richer picture of what the world is like.
Habiba: Having in-person meetings or conversations?
Will: I feel like they’re great, and are highly rated, so I’m going to say they’re appropriately rated.
Habiba: Efforts to build the EA community outside of the English-speaking world?
Will: Probably underrated. Again, it’s just a very predictable fact that in the next century, the non-English-speaking world is going to get more and more powerful relative to the English-speaking world. You also get a variety of diversity benefits [from expanding the community in this way].
Plus, people worry a lot about coordination problems; the biggest ones are between countries. Perhaps this is a way in which the EA community could help, because it’s like a nation that straddles multiple nations.
Habiba: Setting up new EA projects or organizations?
Will: Underrated. I heard, I think from Jade Leung (I can’t remember if it was from interviews she’d been doing or from a quantitative survey), that there’s a very common view in the EA community that it’s [best not] to set up anything new: if a project is a good idea, it probably already exists, and if it doesn’t exist, it’s probably a bad idea.
This [utterly goes] against what we think. Sure, don’t set up things that are really harmful; avoid poisoning the well and stepping on others’ toes. But [the movement has room for] enormous numbers of new, great projects, and I would love to see a lot more of them. Again, most projects fail. It’s another situation where we want to reward process, effort, and attempts, rather than necessarily the outcome.
Habiba: Yes. And lastly, being famous: overrated or underrated?
Will: On the basis of famous people I know, I’m going to say it’s overrated. If you look at Hollywood actors, the gains from having more money are really small, I think, and the negatives are terrible. You have to move house regularly, you get stalkers.
I’m not saying we should all shed a tear for Hollywood celebrities or other famous people. But I wouldn’t want to be them.
Habiba: Nice. Here’s the very last question: What’s most important or interesting to you outside of effective altruism?
Will: Great question. I was actually thinking I wanted to give a whole talk on how great it is to develop multiple identities outside of EA (not as in a personality disorder).
Habiba: I’m glad you clarified. I wasn’t sure if you meant superhero identities or something.
Will: I think my number-one outside interest at the moment is music. Over the course of the last five years or so, I’ve re-cultivated it. Prior to EA, I was really into music; I was in a couple of bands. EA started, and I thought, “Well, that’s useless. I should stop.” Now, it’s back again — both listening and making music. It’s really enriching my life. It’s this constant source of joy. You can listen to any of the best music, at the most amazing level of sound quality, at any time.
It’s one of the great things that I think the past has gifted us, and that I feel very thankful for.
Habiba: Lovely. We’re going to wrap it up there. Thank you so much, Will, for answering all of those questions.
Will: Thanks.