Robert Wiblin: Making sense of long-term indirect effects

In this 2016 talk, 80,000 Hours’ Robert Wiblin argues that the indirect long-term effects of our actions are both very important and very hard to estimate. He also argues that the most promising interventions include targeted work to reduce existential risk, along with promotion of peace and wisdom.
The transcript below is lightly edited for readability.
The Talk
This talk is about flow-through effects and the effects of different actions on the very long term. I should give you a health warning first: this is the talk about flow-through effects that your mother warned you about. It’s not going to be super inspiring, necessarily. It can be a little bit demoralizing to think about how hard it can be to affect the long term.
I also have more questions, really, than answers here. I don’t have a simple thesis that I’m just going to be pushing on you. I’m going to be describing some of the issues that exist here and some of the questions that are still open, so it’s not going to have a simple ending. It can be quite hard to forecast what effects our actions are going to have, and things that initially seem bad can end up being good in the long run. We’ve probably all experienced this in our own lives. But this isn’t a reason not to bother thinking about the long-term effects of our actions, because if we can’t predict what effects our actions are going to have, even on a balance-of-probabilities standard, then they’re probably not very valuable to do in the first place.
So first I just want to define some terms, because there are a lot of different words that people use to describe flow-through effects, and that was the initial name for this talk, but Toby Ord convinced me that we should do a bit of rebranding here: get rid of the term “flow-through effect”, which is unnecessarily vague, and start talking about indirect effects, which I’m going to define as effects on someone other than the person you initially intended to target, and long-term effects, which are effects that occur after the present generation is dead, at least assuming we have normal human lifespans. In the long term, all effects are going to be indirect.
I’ll just describe some of the hypotheses that are relevant to whether flow-through effects matter very much. The first one is the astronomical stakes idea, which Nick Bostrom came up with and gave that name; in fact I’m stealing a lot of ideas from Bostrom in this talk. The idea here is that what matters most of all is what happens to the vast amount of matter and energy in the universe. Currently there’s an enormous number of stars out there, an enormous amount of matter and energy, but as far as we can tell, it’s producing something like zero value. It’s just hydrogen sitting there in deep space, stars burning away. It’s not something that we would regard as particularly valuable or particularly harmful.
But if we organized it in the right way, then it could be extremely valuable or extremely harmful. The scale of the potential value that the whole rest of the universe could produce is trillions of trillions of times larger than what we could do just on Earth with current technology. This seems to me to be a pretty likely hypothesis. It’s obvious that this is a compelling idea if you’re a utilitarian, but even if you place just some probability on consequences really mattering, and on creating new positive things being valuable, then, because the scale of the potential benefit is so enormous, trillions of times larger than the things we can accomplish on Earth, in expected value terms the astronomical stakes stuff is going to dominate your calculation of what’s most important.
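To make the expected-value point concrete, here is a minimal sketch with entirely made-up numbers (my own illustration, not from the talk): even a small credence that these astronomical consequences matter still dominates the comparison, because the gap in scale is so large.

```python
# Illustrative only: all numbers below are invented to show the structure
# of the expected-value argument, not to estimate anything real.
earth_value = 1.0                     # value achievable on Earth, normalized to 1
cosmic_multiplier = 1e24              # "trillions of trillions" of times larger
credence_consequences_matter = 0.01   # assumed 1% credence in the moral view

expected_cosmic_value = credence_consequences_matter * cosmic_multiplier * earth_value
print(f"Expected astronomical value: {expected_cosmic_value:.1e}")  # 1.0e+22
print(f"Earth-scale value:           {earth_value}")  # dwarfed by ~22 orders of magnitude
```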
This pretty naturally leads on to the long-term effects hypothesis, which is that the majority of the value of our actions comes from their effects on people who don’t yet exist. I think if you buy the astronomical stakes argument, you probably have to buy this one as well. How can we affect what happens in a hundred years’ time, or a thousand years’ time, or a million years’ time? You would have to do it indirectly, through a series of cause and effect: one person affecting the next generation, which affects the next generation, and so on. So it has to be indirect. And even if you don’t buy the astronomical stakes hypothesis, if you think that there are more than ten generations of people yet to come, if humans are going to continue existing even for another 500 years or so, then I think the effects on future generations, generations after the present one, are likely to dominate the moral effects of your various actions.
You have to trade off the fact that your effects on future generations are more uncertain against the fact that there are going to be a whole lot more of them. I’m just ballparking it here, but I think if there are more than ten generations yet to come, then you would have to say that the long-term effects of your actions are more important.
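As a minimal sketch of that ballpark (my own illustration, with assumed numbers, not a model from the talk): suppose your effect on any one future generation is only a tenth as likely to come through as your direct effect on the present one. Then the expected long-term effects dominate exactly when there are more than ten generations still to come.

```python
# Hypothetical ballpark, not from the talk: compare the direct effect on the
# present generation with the expected effect summed over future generations.
present_value = 1.0  # value of the direct effect on the present generation
p_persist = 0.1      # assumed chance the effect reaches any given future generation

for n_future_generations in [5, 10, 20, 100]:
    expected_long_term = n_future_generations * p_persist * present_value
    verdict = "dominates" if expected_long_term > present_value else "does not dominate"
    print(f"{n_future_generations:>3} future generations: "
          f"expected long-term value {expected_long_term:4.1f} ({verdict})")
```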
So Bostrom, thinking about this, was wondering: if the long term is really what matters, what should we be looking to do? His first suggestion, from a paper in 2003, was to minimize the risks that humanity faces. A good signpost for one day achieving astronomical gains would be to achieve an okay outcome today. The logic here is that so long as we don’t permanently ruin things, either by going extinct or by ending up with some horrible dictatorship from which we can never escape, then humanity survives, and we still have the scientific method, and we still have our brains, and we can live to improve the world another day. We can correct our mistakes and do better. This sometimes appears under the name “maxipok”, the idea that maximizing the probability of an okay outcome is what we should be aspiring to do in the short term.
How might we go about doing this? Approach one, which I think is the one that most effective altruists are implicitly taking, is to buy the better position hypothesis: that faster human empowerment, meaning reducing poverty, better economic growth, improving people’s health, improving their education and their understanding of the world, reliably makes the future more promising, basically because it puts us in a better position to deal with future challenges. Whatever the threats to humanity’s success in the longer term, if we have a lot of wealth, and we’re healthy, and we’re educated, then we’ll be in a good position to deal with those problems.
I think this makes complete sense if you think we live in a world where the main conflict is between humans and nature, people and nature, which is a classic story archetype, because empowerment helps us to deal, clearly, with natural disasters like supervolcanoes, asteroids, diseases and pandemics. If we’re smarter we can come up with vaccines more quickly, and we can prevent them from spreading. Wild animal suffering is another thing we could deal with if humans were better empowered, and so on, and so on. So in our people versus nature world, I think this theory is very compelling.
What if that’s not the kind of narrative that’s going on in the universe? What if we’ve seen the enemy, and it’s us? What if we live in a person versus person conflict story? In that case, the better position hypothesis is a whole lot less clear, because education, say, puts us in a better position to both solve and create problems more quickly. It empowers both the good and the bad things that humanity can do. For example, if we’re better educated and we have a better economy, we will perhaps invent nuclear weapons sooner, but we’re then in a better position to invent ways not to use them, because we’ll come up with game theory and mutually assured destruction, and we’ll figure out a way to deal with nuclear weapons without killing ourselves.
As an example of how development can create risks: the Soviet Union from the late ’20s through the late ’40s went through an absolutely explosive period of economic growth, one of the most rapid modernization processes we’ve ever seen, something like China in the modern era, with millions and millions of people moving out of very unproductive jobs on farms and into factories. From a human empowerment point of view, this looked absolutely fantastic, because you’ve got lots of people escaping poverty, improved health, improved education. And probably it was a positive thing, but it also created some risks, because the fact that the USSR industrialized so quickly meant that it was able to develop nuclear weapons very soon after the US did, creating the potential for world-destroying nuclear war that wouldn’t have existed, say, if the US had been the only nuclear power. In addition to that, the USSR was controlled by Stalin, one of the most monomaniacal, totalitarian dictators ever, a really evil guy who became a lot more powerful because he had this enormous economy behind him. So the USSR developing wasn’t an unmitigated positive; it created some risks to humanity as well.
So here I want to present the person versus person hypothesis, which is that most of the threats to the long term are human created. I think this is true because, except for pandemics, most natural risks that would be very damaging, like supervolcanoes or asteroids, carry a really, really low annual risk, and we could usually recover from most of those things. It’s very, very hard for a supervolcano to absolutely kill everyone. The Future of Humanity Institute has a paper forthcoming talking about this, about how anthropogenic risk is significantly larger than the risk from nature, probably something like 10 or 100 times higher.
To get an idea of how you would go about modeling whether human empowerment is positive or negative in a person versus person world: it’s definitely not easy, because you have to think about what the risk to human civilization is proportional to. Is it a per-year risk, like you’d have with asteroids, where every year there’s some chance that an asteroid or a comet comes by the Earth? Or are some of the risks we face proportional to the annual rate of growth? Perhaps if we grow faster, then we have less time to prepare for changes, and so we’re less well able to deal with them when they arrive, in which case going more slowly would be better because we’d have more opportunity for forethought.
If it’s per year, as in the nuclear weapons example, you’ve got a risky period between when you invent nuclear weapons and when you invent something that neuters them, like mutually assured destruction, which stops us from using them. In that case, if the risk is confined to a transition between two states, then going faster is fine, because you’re shortening the time between when you invent the problem and when you solve it. So in that case, maybe you want to go fast. But the modeling gets tricky pretty quickly, so it’s hard to come up with an overwhelming argument one way or the other.
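To see how the two framings pull in opposite directions, here is a toy sketch with made-up numbers (my own illustration, not a model from the talk). In the per-year model, faster growth shortens the risky window and lowers total risk; in the growth-proportional model, faster growth means less forethought per change and raises it.

```python
import math

hazard = 0.01           # assumed baseline hazard parameter (made up)
progress_needed = 10.0  # assumed units of progress between inventing the problem and the fix

for growth in [0.5, 1.0, 2.0]:  # units of progress per year
    # Per-year model: a fixed annual hazard applies only while we are exposed,
    # so faster growth shortens the exposure window and lowers cumulative risk.
    risk_per_year = 1 - math.exp(-hazard * progress_needed / growth)
    # Growth-proportional model: the hazard per unit of progress scales with
    # speed (less time to prepare per change), so faster growth raises risk.
    risk_growth = 1 - math.exp(-hazard * growth * progress_needed)
    print(f"growth {growth:.1f}/yr: per-year model {risk_per_year:5.1%}, "
          f"growth-proportional model {risk_growth:5.1%}")
```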
Another thing that might be relevant is the ratio between human prudence and our power, our technological ability. One idea you might have is that we should only obtain technological capabilities once we’re actually ready to deal with them; that’s the thing that’s going to limit the risks to humanity in the long term. We do want to have technologies, but only once we’re able to use them safely. For example, we give kids scissors but we don’t give them guns. Scissors are useful to a child, and the risks they pose are not so large: kids know they might cut themselves, and even if they do, it’s not the end of the world. We don’t give them guns, though, because they don’t understand them and don’t yet know how to use them safely, so we wait until they’re somewhat older and more mature.
The question I want to pose to you is do we have the unity, the compassion, and the maturity to wield new technologies of mass destruction? I think the answer is pretty clearly no, but if you’re not convinced then this genius can tell you why.
I think he’s got a very strong case here, that we need to have more brains before we develop technologies that are extremely risky to us. He’s a very trustworthy fellow.
So this would suggest a different approach to doing good, which would be differential speed-up. If you’re trying to do differential speed-up, rather than just general human empowerment, then you’ll want to be thinking, “What things do we most need before other things? What is it beneficial to have first, and what things seem least likely to backfire if we get them immediately?” This leads to a question: what are the signposts of a good future? Here I’m basically stealing a bunch of this analysis, again, from Bostrom, and the talk that he gave in Oxford two years ago.
He went through a whole lot of possible proxies for good long-term outcomes that we could measure in the short term. An analogy is with a chess game. Early in the game, obviously you want to be capturing your opponent’s pieces, but you can’t always see exactly how the game is going to go later on. You don’t know how it’s going to end, and that’s not how good players figure out the best next move; they don’t map it out all the way to actually checkmating their opponent. Instead they look for proxies in the short term, like “do I control a lot of the board?” or “am I capturing my opponent’s pieces?” That’s the kind of thing we’re looking for here: things that we can actually measure in the short term but that are a reliable and consistent guide to whether we’re making the future more promising.
Here are a number of things that he suggested. I don’t have time to go through all of them. You’ll notice, for example, that economic growth has a bit of a question mark next to it, because it’s just not really clear whether faster economic growth is good or bad in the long term. But here are some of the ones that did seem more promising as possible signposts to guide us into the future. One would be biological cognitive enhancement: making people smarter in the hope that they’ll be able to deal with future challenges in a wiser and more prudent way than we do, something that works like education, but more so. Another is international peace and cooperation to prevent conflict, because a major way in which technologies can go wrong is if they’re used as, say, weapons of war, or used by people against others. Another is solutions to the AI alignment problem. Obviously, if we’re going to create machines that are smarter than humans, then we want them to be aligned with human interests, and there are good reasons to think that if we don’t make a special effort, they won’t be.
Then one that I’d add is better moral attitudes: trying to encourage people to care about the welfare of all. I think it’s harder to see how that could backfire than to see how economic growth might backfire, though they both have some similar value. If we can get future generations to care about everyone equally, to be very compassionate and not just pursue their own selfish interests, I think that’s a reasonably good signpost for improving the future.
An interesting thing to observe about this whole framing around differential speed-up is that it actually brings effective altruism somewhat closer to traditional ideas about how you might do good. People sometimes say, “Why are you just focused on curing malaria? Is that really the best way to change the world?” I think sometimes they have a point that that might not be the best way to go about it. More traditional ideas might focus on wise leadership of a country, on capacity and institution building to deal with problems, on improving people’s moral attitudes, and also on just being wary of rapid change in a way that many of us are not. It’s mostly the conservative worry that if everything is up-ended and we totally change society overnight, maybe that’s going to go wrong, and we should be crossing the river by feeling the stones, so to speak.
What do I think are probably the most important causes? My guess is things like working on risks in biotechnology, AI value alignment, climate change, preventing war and promoting peace and a sense that we ought to be cooperating with one another and avoiding conflict, and improving intelligence within government, such as forecasting the future and making good collective decisions. I think these are probably a more reliable guide to improving the future than simply trying to increase GDP growth.
What about reducing poverty? Am I saying that reducing poverty isn’t good? No, I don’t think that’s the case, because reducing poverty also raises global sanity through more education and smarter people, which leads to more cosmopolitan moral values and to better government as well. People who’ve put a lot more thought into this than me generally think that it’s probably good overall. All I’m saying is that it might not be as good as it initially appears; I’m certainly not saying that it’s neutral or negative. If you’d like to explore this more, a really good thesis is On the Overwhelming Importance of Shaping the Far Future, where one of our trustees, Nick Beckstead, concludes that improving economic growth is probably positive, just maybe not as positive as it might first look. Another is The Moral Consequences of Economic Growth by Benjamin Friedman, which talks about the changes you get in a society when it stagnates economically, and how you often get quite rapid moral regression, with people reverting to more tribal values, becoming less likely to cooperate with one another, and becoming less empathetic. Quite an interesting book from about 2006, I think.
Something to know is that poverty really isn’t that neglected in the scheme of things. It’s not a terribly unusual cause, which is one reason we talk about it a lot: it’s easy to explain to people that it’s good to save lives and reduce poverty. Reducing poverty absorbs something like more than half, let’s say, of all effort by effective altruists, and certainly more than half of all donations, so it’s a really large focus area, maybe relative to the strength of the arguments. It also accounts for a significant fraction of the efforts of the poor themselves, the billions of people in relative or global poverty who are trying to get out of poverty, and this is something we should consider when asking whether this is really a neglected opportunity. It’s true they don’t necessarily have the same resources, but if you add it all up, there’s a lot of work going into trying to reduce poverty, plus the many foundations that are focused on it, including many of the largest, and quite a lot of government aid.
I think poverty is neglected relative to some things, but it’s probably not among the most neglected problems in the world. If you compare it with, say, how many NGOs and foundations there are working on international coordination, new dangerous technologies, peace, or improving forecasting, this kind of work is reasonably obscure by comparison. I think there might be more low-hanging fruit here because fewer people are working on it. Lots of people say, “I want to end poverty,” but if you meet a 16-year-old who says, “I really want to improve forecasting ability within the intelligence services,” that is not a common thing for teenagers to dream of doing with their career.
In addition, I think these other ways of doing good can be quite a good fit for us. Effective altruism is sometimes accused of being filled with elites. I think that is potentially quite problematic in some ways, in that we can be very out of touch; maybe we just haven’t experienced poverty ourselves, so that could blinker us. But it also creates some opportunities, potentially, if we have a lot of connections with people in government or within academia. I think many people in effective altruism, many people here, would have an unusually good shot at guiding governance and public service, rising to the top levels of important institutions within society, and guiding specific new technologies, because we’re particularly clued into what things are being developed over the next five or ten years and can think about how to make sure they’re used in wise ways rather than risky ways.
We might be in a good position to improve society’s moral values as well, though I think this is a bit more questionable. Many of us might be out of touch with a lot of people in society. I personally often don’t feel like I’m super in touch with a lot of people. So on the one hand we potentially have a large audience, but are we actually good at persuading? Are most people in society receptive to changing their moral values and caring more about people overseas? I think that’s an open question.
So the bottom lines here are these. Indirect effects are crucial, though they’re really hard to estimate. I think peace and collective wisdom are somewhat underrated by people in this community. There’s probably an excessive focus on economic growth and health relative to other cause areas that might be more reliable signposts to improving the future and somewhat more neglected. It may be a cliché to say, but I think further research really is needed this time, because this could be one of the things that is causing us not to do things that are extremely valuable.
Of course, people have known about this problem for years. I started working at the Centre for Effective Altruism four years ago, and this isn’t a new concern, but because it’s a bit demoralizing to think about how hard it is to predict the long-term effects of your actions and how hard it is to have really good insights here, this topic just goes a bit neglected, in my view. I think it would be valuable to get more smart people really thinking seriously about this and putting in months or years of work, coming up with their own ideas and their own models for how we can improve society in the long term.
All that said, given how hard it is to think about this topic, all of the above might be misguided. I wouldn’t want anyone to anchor too much on any specific thing that I’ve said, but I think the overall issue is quite important, so I’d love to have great conversations about it in the rest of the conference. Thanks so much. Can I take questions for five minutes? All right, go for it.
Q&A
Question: What are the most effective interventions you are aware of for increasing peace? Maybe if you could focus especially on the Middle East?
Rob Wiblin: Right. Unfortunately, I’m not an expert on that, and the Middle East sounds particularly tricky. The thing I’d be focusing on if I were working on peace would be trying to get China and the US to get along so they don’t have a massive war: trying to build greater understanding of the interests of the Chinese government and Chinese people in the US, and vice versa, so they don’t accidentally come to blows. Potentially also things around nuclear security. Unfortunately, I haven’t gotten to that level of thinking about exactly how you would do peace interventions. One route is changing values so that people just find war more abhorrent; that’s something that’s already happened, and it’s been very valuable. There are organizations like the Ploughshares Foundation which work on trying to improve cooperation and peace. Generally, going into the Foreign Service, for example, and having a very anti-war attitude could be positive, I think, but I haven’t thought that much about it at that specific level.
Question: You suggested that biological cognitive enhancement could be a promising route forward, but you seem less confident about, say, education. I was wondering if you could contrast those.
Rob Wiblin: Yeah, you’ll have to read Bostrom’s paper on that, because I was just copying what he said there, and I’m actually not entirely sure what the logic is. They did do some modeling on the effects of cognitive enhancement as opposed to general improvement, and I think they found some different effects. If you look at the Future of Humanity Institute’s website or Nick Bostrom’s website, I think you’ll find the answer there.
Question: Kind of going off of that question, is there any focus on the distribution of technologies like cognitive enhancement, or any kind of biotech? Because with the distribution of current technologies in healthcare and economic growth, there seem to be flaws in the distribution: it would be unequal, and would probably be centered around people who are also in EA. Going back to that elitism, who would get these technologies once they’re made? Would they really help everyone, or only those who can afford them?
Rob Wiblin: Yeah. There are a lot of issues there. I think that’s one reason to prefer increasing economic growth in the developing world rather than the rich world: you’re increasing equality as well as the ability to solve problems. Increasing income equality and education equality is probably a safer bet than pushing out the frontier, where you’re more likely to invent new, dangerous things with that education. However, it’s not completely obvious that greater equality of access to all these new things would necessarily be positive. If something is dangerous and destabilizing, then maybe you want to try it on a tiny number of people first, before it’s scaled up to lots of people. If you look at most technologies, initially they’re very expensive and just used by an elite, but then they filter out to everyone gradually as they become cheap to produce. With smartphones that process is happening, and it’s happening with other education technology as well.
Depending on the technology, it might be bad for everyone to have access to it. With nuclear weapons, and I keep coming back to that example, it could have been good if the US were the only country that had them. There were potentially negotiations for all countries to decide not to have nuclear weapons, and the US considered just trying to maintain a complete advantage by being the only country that ever had them. It’s possible to imagine that either of those scenarios could be more stable than what we have now, where one person could potentially destroy the entire world at the click of a button. So yeah, it’s really complicated.
Question: In your talk you mentioned moral values and having more shared moral values. Could you elaborate on what techniques or apparatus we would use to establish and propagate those?
Rob Wiblin: Yeah, to some extent I think we’re doing this already. For example, I think the effective altruism movement as a whole encourages people to take a global perspective on things and to give significant weight to the effects on, say, people in other countries. I think most people, when they’re deciding what policies their government should implement, just think about how this affects the citizens of their own country, and I can often find it quite confronting that people have that mindset. But we’re saying, “No, you’ve got to think not just about healthcare in your own country, but about what your country could do to improve health globally.” That’s one of those shifts in mindset that’s very valuable.
We’re trying to encourage people to be concerned about the welfare of animals, both on farms, where we treat them badly, and even in nature, as a more extreme case. We also try to get people to think long term, to think about impacts on future generations, not just their children’s children but far, far beyond that. I think those are potentially the most important moral shifts you can get: basically, this idea that we shouldn’t just be focused on ourselves, or our family, or people who look like us or sound like us, but should think about the effects of our actions on everyone, and all kinds of beings, at all times. We do kind of push that, but I think we could potentially be more focused on changing moral values and reaching a lot of people with that message, if that was what we were explicitly trying to do.